Oct 11 10:26:05.017216 master-2 systemd[1]: Starting Kubernetes Kubelet...
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
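The deprecation warnings above all point at the same remedy: move the flag into the config file named by --config (logged later in this unit as /etc/kubernetes/kubelet.conf). As a minimal sketch only — field names assume the upstream kubelet.config.k8s.io/v1beta1 KubeletConfiguration schema, and the values are taken from the FLAG dump later in this log, not from the node's actual config file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
# replaces --volume-plugin-dir
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
# replaces --system-reserved
systemReserved:
  cpu: "500m"
  memory: "1Gi"
  ephemeral-storage: "1Gi"
```

On an OpenShift node this file is rendered by the Machine Config Operator, so edits would normally go through a KubeletConfig custom resource rather than the file itself.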
Oct 11 10:26:05.706386 master-2 kubenswrapper[4776]: I1011 10:26:05.706043    4776 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717876    4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717917    4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717930    4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717950    4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717962    4776 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717976    4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717988    4776 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717999    4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718010    4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718020    4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718030    4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718044    4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718057    4776 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718069    4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718079    4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718089    4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718102    4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718113    4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718126    4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718136    4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718146    4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718159    4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718172    4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
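A note on the gate warnings, stated as interpretation rather than fact from this log: feature_gate.go distinguishes gates the upstream kubelet knows (ValidatingAdmissionPolicy and KMSv1 above are accepted, with GA/deprecated warnings) from names it does not, and the "unrecognized" entries are OpenShift cluster-level gates passed through to the kubelet, which ignores them. If gates were set via the config file rather than --feature-gates, the stanza would look roughly like this (v1beta1 schema assumed, gate names taken from the log):

```yaml
# fragment of a KubeletConfiguration — illustrative only
featureGates:
  ValidatingAdmissionPolicy: true   # GA upstream; warning says the override will stop working
  KMSv1: true                       # deprecated upstream; same caveat
```

Setting a GA or deprecated gate this way keeps working only until the gate is removed, which is exactly what the two warnings above are signalling.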
Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718184    4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718195    4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718205    4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718219    4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718233    4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718244    4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718256    4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718267    4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718277    4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718289    4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718302    4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718312    4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718322    4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718331    4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718341    4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718351    4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718361    4776 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718371    4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718381    4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718390    4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718400    4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718410    4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718419    4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718429    4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718439    4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718452    4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718464    4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718475    4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718487    4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718498    4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718508    4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718517    4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718524    4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718532    4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718542    4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718550    4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718557    4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718565    4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718572    4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718580    4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718588    4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718597    4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718606    4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718614    4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718621    4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718630    4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718638    4776 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718645    4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718653    4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719646    4776 flags.go:64] FLAG: --address="0.0.0.0"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719704    4776 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719723    4776 flags.go:64] FLAG: --anonymous-auth="true"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719742    4776 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719753    4776 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719762    4776 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719775    4776 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719787    4776 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719796    4776 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719805    4776 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719815    4776 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719824    4776 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719834    4776 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719843    4776 flags.go:64] FLAG: --cgroup-root=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719851    4776 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719860    4776 flags.go:64] FLAG: --client-ca-file=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719870    4776 flags.go:64] FLAG: --cloud-config=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719881    4776 flags.go:64] FLAG: --cloud-provider=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719890    4776 flags.go:64] FLAG: --cluster-dns="[]"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719900    4776 flags.go:64] FLAG: --cluster-domain=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719909    4776 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719918    4776 flags.go:64] FLAG: --config-dir=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719927    4776 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719937    4776 flags.go:64] FLAG: --container-log-max-files="5"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719948    4776 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719956    4776 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719965    4776 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719975    4776 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719983    4776 flags.go:64] FLAG: --contention-profiling="false"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719993    4776 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720002    4776 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720012    4776 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720056    4776 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720068    4776 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720077    4776 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720087    4776 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720095    4776 flags.go:64] FLAG: --enable-load-reader="false"
Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720104    4776 flags.go:64] FLAG: --enable-server="true"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720113    4776 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720125    4776 flags.go:64] FLAG: --event-burst="100"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720134    4776 flags.go:64] FLAG: --event-qps="50"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720143    4776 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720152    4776 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720162    4776 flags.go:64] FLAG: --eviction-hard=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720173    4776 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720183    4776 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720195    4776 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720206    4776 flags.go:64] FLAG: --eviction-soft=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720221    4776 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720233    4776 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720245    4776 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720256    4776 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720268    4776 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720279    4776 flags.go:64] FLAG: --fail-swap-on="true"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720291    4776 flags.go:64] FLAG: --feature-gates=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720305    4776 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720317    4776 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720328    4776 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720340    4776 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720352    4776 flags.go:64] FLAG: --healthz-port="10248"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720364    4776 flags.go:64] FLAG: --help="false"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720376    4776 flags.go:64] FLAG: --hostname-override=""
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720387    4776 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720399    4776 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720411    4776 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720422    4776 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720433    4776 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720442    4776 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720453    4776 flags.go:64] FLAG: --image-service-endpoint=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720462    4776 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720472    4776 flags.go:64] FLAG: --kube-api-burst="100"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720482    4776 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720491    4776 flags.go:64] FLAG: --kube-api-qps="50"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720500    4776 flags.go:64] FLAG: --kube-reserved=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720509    4776 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720517    4776 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720527    4776 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720535    4776 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720544    4776 flags.go:64] FLAG: --lock-file=""
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720552    4776 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720562    4776 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720571    4776 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720585    4776 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720593    4776 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720602    4776 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720611    4776 flags.go:64] FLAG: --logging-format="text"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720620    4776 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720629    4776 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720638    4776 flags.go:64] FLAG: --manifest-url=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720647    4776 flags.go:64] FLAG: --manifest-url-header=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720659    4776 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720668    4776 flags.go:64] FLAG: --max-open-files="1000000"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720710    4776 flags.go:64] FLAG: --max-pods="110"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720720    4776 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720729    4776 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720738    4776 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720746    4776 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720755    4776 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720764    4776 flags.go:64] FLAG: --node-ip="192.168.34.12"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720773    4776 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720793    4776 flags.go:64] FLAG: --node-status-max-images="50"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720802    4776 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720810    4776 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720820    4776 flags.go:64] FLAG: --pod-cidr=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720833    4776 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d66b9dbe1d071d7372c477a78835fb65b48ea82db00d23e9086af5cfcb194ad"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720850    4776 flags.go:64] FLAG: --pod-manifest-path=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720861    4776 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720873    4776 flags.go:64] FLAG: --pods-per-core="0"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720885    4776 flags.go:64] FLAG: --port="10250"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720897    4776 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720908    4776 flags.go:64] FLAG: --provider-id=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720920    4776 flags.go:64] FLAG: --qos-reserved=""
Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720932    4776 flags.go:64] FLAG: --read-only-port="10255"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720943    4776 flags.go:64] FLAG: --register-node="true"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720955    4776 flags.go:64] FLAG: --register-schedulable="true"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720967    4776 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720986    4776 flags.go:64] FLAG: --registry-burst="10"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720997    4776 flags.go:64] FLAG: --registry-qps="5"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721007    4776 flags.go:64] FLAG: --reserved-cpus=""
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721019    4776 flags.go:64] FLAG: --reserved-memory=""
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721031    4776 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721043    4776 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721054    4776 flags.go:64] FLAG: --rotate-certificates="false"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721064    4776 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721075    4776 flags.go:64] FLAG: --runonce="false"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721086    4776 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721097    4776 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721109    4776 flags.go:64] FLAG: --seccomp-default="false"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721120    4776 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721131    4776 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721143    4776 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721155    4776 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721167    4776 flags.go:64] FLAG: --storage-driver-password="root"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721178    4776 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721189    4776 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721199    4776 flags.go:64] FLAG: --storage-driver-user="root"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721208    4776 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721217    4776 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721227    4776 flags.go:64] FLAG: --system-cgroups=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721235    4776 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721253    4776 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721262    4776 flags.go:64] FLAG: --tls-cert-file=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721271    4776 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721282    4776 flags.go:64] FLAG: --tls-min-version=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721290    4776 flags.go:64] FLAG: --tls-private-key-file=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721298    4776 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721317    4776 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721326    4776 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721335    4776 flags.go:64] FLAG: --v="2"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721347    4776 flags.go:64] FLAG: --version="false"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721358    4776 flags.go:64] FLAG: --vmodule=""
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721368    4776 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721378    4776 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721580    4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721591    4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721602    4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721610    4776 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721619    4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721627    4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721634    4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721642    4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721651    4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721660    4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721668    4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721708    4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721717    4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721724    4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721732    4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721740    4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721748    4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721755    4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721763    4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721770    4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721778    4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721786    4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721794    4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721803    4776 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721811    4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721822    4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721830    4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721838    4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721846    4776
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721856 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721864 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721872 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721881 4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721889 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721897 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721904 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721913 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721920 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721928 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721935 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721946 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721955 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721963 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721971 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721979 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721986 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721994 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.722002 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722009 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722017 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722024 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722032 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722040 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722047 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 
10:26:05.722054 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722062 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722070 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722083 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722091 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722100 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722107 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722115 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722123 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722130 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722138 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722146 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722153 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722161 4776 feature_gate.go:330] unrecognized feature 
gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722171 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722182 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722191 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722203 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: I1011 10:26:05.724811 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.738998 master-2 kubenswrapper[4776]: I1011 10:26:05.738926 4776 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Oct 11 10:26:05.738998 master-2 kubenswrapper[4776]: I1011 10:26:05.738987 4776 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 11 10:26:05.739198 master-2 kubenswrapper[4776]: W1011 10:26:05.739156 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.739198 master-2 kubenswrapper[4776]: W1011 10:26:05.739187 4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:05.739300 master-2 
kubenswrapper[4776]: W1011 10:26:05.739200 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739214 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739227 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739243 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739263 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739272 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739283 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739292 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739301 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739313 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739325 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739336 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739347 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739356 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739365 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739374 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739383 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739393 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739405 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739414 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739424 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739435 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739446 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739454 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739462 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739470 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739481 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739493 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739502 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739511 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739519 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739527 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739536 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739545 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739553 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 
10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739561 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739570 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739578 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739586 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739594 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739602 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739611 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739619 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739630 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739638 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739647 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739655 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739663 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739671 
4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739715 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739726 4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739737 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739747 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739759 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739769 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739778 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739787 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739795 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739804 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739812 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739820 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739829 4776 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallGCP Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739837 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739845 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739854 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739862 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739870 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739878 4776 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739886 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.739895 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: I1011 10:26:05.739910 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741488 4776 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.743218 
master-2 kubenswrapper[4776]: W1011 10:26:05.741520 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741535 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741558 4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741572 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741583 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741594 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741606 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741618 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741629 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741640 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741653 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741665 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741707 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 
10:26:05.741719 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741730 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741752 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741762 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741778 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741796 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741807 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741818 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741829 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741840 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741850 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741861 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741871 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741881 4776 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741901 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741912 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741922 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741933 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741943 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741953 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741964 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741974 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741984 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741994 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742005 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742015 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742028 4776 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742049 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742061 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742075 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742088 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742173 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742183 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742193 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742205 4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742551 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742577 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742586 4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742595 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742603 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 
10:26:05.742612 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742621 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742629 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742637 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742652 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742666 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742711 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742720 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742729 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742737 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742746 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742755 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742764 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 
10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742776 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742787 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742798 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742807 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742819 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 10:26:05.747125 master-2 kubenswrapper[4776]: I1011 10:26:05.742834 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.747125 master-2 kubenswrapper[4776]: I1011 10:26:05.744278 4776 server.go:940] "Client rotation is on, will bootstrap in background" Oct 11 10:26:05.749159 master-2 kubenswrapper[4776]: I1011 10:26:05.749106 4776 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Oct 11 10:26:05.752216 master-2 kubenswrapper[4776]: I1011 10:26:05.752167 4776 server.go:997] "Starting client certificate rotation" Oct 11 10:26:05.753027 master-2 kubenswrapper[4776]: I1011 10:26:05.752984 4776 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 11 10:26:05.753301 master-2 kubenswrapper[4776]: I1011 10:26:05.753216 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Oct 11 10:26:05.782297 master-2 kubenswrapper[4776]: I1011 10:26:05.782207 4776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:05.786190 master-2 kubenswrapper[4776]: I1011 10:26:05.786121 4776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:05.803281 master-2 kubenswrapper[4776]: I1011 10:26:05.803186 4776 log.go:25] "Validated CRI v1 runtime API" Oct 11 10:26:05.809772 master-2 kubenswrapper[4776]: I1011 10:26:05.809714 4776 log.go:25] "Validated CRI v1 image API" Oct 11 10:26:05.811955 master-2 kubenswrapper[4776]: I1011 10:26:05.811894 4776 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 11 10:26:05.818861 master-2 kubenswrapper[4776]: I1011 10:26:05.818802 4776 fs.go:135] Filesystem UUIDs: map[76af800b-3127-4b99-b103-7e68794afee3:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Oct 11 10:26:05.819004 master-2 kubenswrapper[4776]: I1011 10:26:05.818848 4776 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Oct 11 10:26:05.832019 master-2 kubenswrapper[4776]: I1011 10:26:05.831934 4776 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 11 10:26:05.854919 master-2 
kubenswrapper[4776]: I1011 10:26:05.854264 4776 manager.go:217] Machine: {Timestamp:2025-10-11 10:26:05.85088475 +0000 UTC m=+0.635311539 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:5bc5ca53875847afb260297b16f63643 SystemUUID:5bc5ca53-8758-47af-b260-297b16f63643 BootID:7aa46b32-7bb5-4c5a-8660-726bba203ff5 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:91:4f:09 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:3e:91:4f:09 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:e1:55:c5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:40:77:bf Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:86:fb:14:35:af:42 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} 
{Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] 
SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 11 10:26:05.854919 master-2 kubenswrapper[4776]: I1011 10:26:05.854806 4776 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 11 10:26:05.855214 master-2 kubenswrapper[4776]: I1011 10:26:05.855039 4776 manager.go:233] Version: {KernelVersion:5.14.0-427.91.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202509241235-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 11 10:26:05.855581 master-2 kubenswrapper[4776]: I1011 10:26:05.855536 4776 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 11 10:26:05.856867 master-2 kubenswrapper[4776]: I1011 10:26:05.856787 4776 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 11 10:26:05.857237 master-2 kubenswrapper[4776]: I1011 10:26:05.856871 4776 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-2","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage
":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857236 4776 topology_manager.go:138] "Creating topology manager with none policy" Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857258 4776 container_manager_linux.go:303] "Creating device plugin manager" Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857285 4776 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:05.857406 master-2 kubenswrapper[4776]: I1011 10:26:05.857314 4776 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:05.858529 master-2 kubenswrapper[4776]: I1011 10:26:05.858493 4776 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:05.858703 master-2 kubenswrapper[4776]: I1011 10:26:05.858646 4776 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 11 10:26:05.865043 master-2 kubenswrapper[4776]: I1011 10:26:05.864986 4776 kubelet.go:418] "Attempting to sync node with API server" Oct 11 10:26:05.865043 master-2 kubenswrapper[4776]: I1011 10:26:05.865023 4776 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 11 10:26:05.865233 master-2 kubenswrapper[4776]: I1011 10:26:05.865092 4776 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 11 10:26:05.865233 master-2 kubenswrapper[4776]: I1011 10:26:05.865118 4776 kubelet.go:324] "Adding apiserver pod source" Oct 11 10:26:05.865233 master-2 
kubenswrapper[4776]: I1011 10:26:05.865140 4776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 11 10:26:05.869864 master-2 kubenswrapper[4776]: I1011 10:26:05.869812 4776 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.12-3.rhaos4.18.gitdc59c78.el9" apiVersion="v1" Oct 11 10:26:05.873531 master-2 kubenswrapper[4776]: I1011 10:26:05.873472 4776 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 11 10:26:05.873749 master-2 kubenswrapper[4776]: I1011 10:26:05.873718 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 11 10:26:05.873749 master-2 kubenswrapper[4776]: I1011 10:26:05.873745 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873756 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873767 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873779 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873788 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873799 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873813 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873823 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873833 4776 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873857 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 11 10:26:05.874403 master-2 kubenswrapper[4776]: I1011 10:26:05.874370 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 11 10:26:05.875174 master-2 kubenswrapper[4776]: I1011 10:26:05.875138 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 11 10:26:05.875716 master-2 kubenswrapper[4776]: I1011 10:26:05.875650 4776 server.go:1280] "Started kubelet" Oct 11 10:26:05.876219 master-2 kubenswrapper[4776]: I1011 10:26:05.876068 4776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 11 10:26:05.876661 master-2 kubenswrapper[4776]: I1011 10:26:05.876024 4776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 11 10:26:05.876661 master-2 kubenswrapper[4776]: I1011 10:26:05.876464 4776 server_v1.go:47] "podresources" method="list" useActivePods=true Oct 11 10:26:05.877604 master-2 kubenswrapper[4776]: I1011 10:26:05.877533 4776 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 11 10:26:05.878132 master-2 systemd[1]: Started Kubernetes Kubelet. 
Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: W1011 10:26:05.878561 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: I1011 10:26:05.878779 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: W1011 10:26:05.878782 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: E1011 10:26:05.878794 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: E1011 10:26:05.878849 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: I1011 10:26:05.878820 4776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: E1011 10:26:05.878995 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: I1011 10:26:05.879044 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: I1011 10:26:05.879098 4776 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 11 10:26:05.879381 master-2 kubenswrapper[4776]: I1011 10:26:05.879217 4776 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Oct 11 10:26:05.879381 master-2 kubenswrapper[4776]: I1011 10:26:05.879227 4776 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 11 10:26:05.879543 master-2 kubenswrapper[4776]: I1011 10:26:05.879385 4776 reconstruct.go:97] "Volume reconstruction finished" Oct 11 10:26:05.879543 master-2 kubenswrapper[4776]: I1011 10:26:05.879402 4776 reconciler.go:26] "Reconciler: start to sync state" Oct 11 10:26:05.881249 master-2 kubenswrapper[4776]: I1011 10:26:05.881108 4776 server.go:449] "Adding debug handlers to kubelet server" Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.883909 4776 factory.go:153] Registering CRI-O factory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.883938 4776 factory.go:221] Registration of the crio container factory successfully Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884059 4776 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884071 4776 factory.go:55] Registering systemd factory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884079 4776 factory.go:221] Registration of the systemd container factory successfully Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884131 4776 factory.go:103] Registering Raw factory Oct 11 10:26:05.884580 
master-2 kubenswrapper[4776]: I1011 10:26:05.884148 4776 manager.go:1196] Started watching for new ooms in manager Oct 11 10:26:05.885331 master-2 kubenswrapper[4776]: I1011 10:26:05.884915 4776 manager.go:319] Starting recovery of all containers Oct 11 10:26:05.899286 master-2 kubenswrapper[4776]: W1011 10:26:05.899204 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:05.899462 master-2 kubenswrapper[4776]: E1011 10:26:05.899300 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.899620 master-2 kubenswrapper[4776]: E1011 10:26:05.899552 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 11 10:26:05.902134 master-2 kubenswrapper[4776]: E1011 10:26:05.901942 4776 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Oct 11 10:26:05.914504 master-2 kubenswrapper[4776]: E1011 10:26:05.910951 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5df58100d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.875621901 +0000 UTC m=+0.660048610,LastTimestamp:2025-10-11 10:26:05.875621901 +0000 UTC m=+0.660048610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.920877 master-2 kubenswrapper[4776]: I1011 10:26:05.920840 4776 manager.go:324] Recovery completed Oct 11 10:26:05.932848 master-2 kubenswrapper[4776]: I1011 10:26:05.931906 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:05.934074 master-2 kubenswrapper[4776]: I1011 10:26:05.933968 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:05.934158 master-2 kubenswrapper[4776]: I1011 10:26:05.934083 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:05.934158 master-2 kubenswrapper[4776]: I1011 10:26:05.934101 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:05.935231 master-2 kubenswrapper[4776]: I1011 10:26:05.935179 4776 cpu_manager.go:225] "Starting CPU manager" 
policy="none" Oct 11 10:26:05.935231 master-2 kubenswrapper[4776]: I1011 10:26:05.935213 4776 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 11 10:26:05.935389 master-2 kubenswrapper[4776]: I1011 10:26:05.935267 4776 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:05.938799 master-2 kubenswrapper[4776]: I1011 10:26:05.938749 4776 policy_none.go:49] "None policy: Start" Oct 11 10:26:05.940021 master-2 kubenswrapper[4776]: I1011 10:26:05.939975 4776 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 11 10:26:05.940021 master-2 kubenswrapper[4776]: I1011 10:26:05.940016 4776 state_mem.go:35] "Initializing new in-memory state store" Oct 11 10:26:05.944785 master-2 kubenswrapper[4776]: E1011 10:26:05.944503 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.957502 master-2 kubenswrapper[4776]: E1011 10:26:05.957237 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.967358 master-2 kubenswrapper[4776]: E1011 10:26:05.966899 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.979709 master-2 kubenswrapper[4776]: E1011 10:26:05.979547 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:06.016143 master-2 kubenswrapper[4776]: I1011 10:26:06.016068 4776 manager.go:334] "Starting Device Plugin manager" Oct 11 10:26:06.016312 master-2 kubenswrapper[4776]: I1011 10:26:06.016166 4776 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint 
is not found" Oct 11 10:26:06.016312 master-2 kubenswrapper[4776]: I1011 10:26:06.016193 4776 server.go:79] "Starting device plugin registration server" Oct 11 10:26:06.017291 master-2 kubenswrapper[4776]: I1011 10:26:06.017251 4776 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 11 10:26:06.017408 master-2 kubenswrapper[4776]: I1011 10:26:06.017285 4776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 11 10:26:06.017512 master-2 kubenswrapper[4776]: I1011 10:26:06.017473 4776 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 11 10:26:06.017631 master-2 kubenswrapper[4776]: I1011 10:26:06.017605 4776 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 11 10:26:06.017631 master-2 kubenswrapper[4776]: I1011 10:26:06.017623 4776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 11 10:26:06.019567 master-2 kubenswrapper[4776]: E1011 10:26:06.019487 4776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-2\" not found" Oct 11 10:26:06.030930 master-2 kubenswrapper[4776]: E1011 10:26:06.030765 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e7e006ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:06.018750154 +0000 UTC m=+0.803176903,LastTimestamp:2025-10-11 10:26:06.018750154 +0000 UTC m=+0.803176903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.055339 master-2 kubenswrapper[4776]: I1011 10:26:06.055237 4776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057216 4776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057268 4776 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057289 4776 kubelet.go:2335] "Starting kubelet main sync loop" Oct 11 10:26:06.057608 master-2 kubenswrapper[4776]: E1011 10:26:06.057330 4776 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 11 10:26:06.066435 master-2 kubenswrapper[4776]: W1011 10:26:06.066350 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:06.066435 master-2 kubenswrapper[4776]: E1011 10:26:06.066405 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:06.108912 master-2 kubenswrapper[4776]: E1011 10:26:06.108793 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 11 10:26:06.117941 master-2 
kubenswrapper[4776]: I1011 10:26:06.117881 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119021 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119066 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119078 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.119329 master-2 kubenswrapper[4776]: I1011 10:26:06.119109 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.129302 master-2 kubenswrapper[4776]: E1011 10:26:06.129133 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.119051552 +0000 UTC m=+0.903478271,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.129488 master-2 kubenswrapper[4776]: E1011 10:26:06.129200 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is 
forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:06.138534 master-2 kubenswrapper[4776]: E1011 10:26:06.138388 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.119073711 +0000 UTC m=+0.903500440,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.147714 master-2 kubenswrapper[4776]: E1011 10:26:06.147436 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.119084461 +0000 UTC m=+0.903511180,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.158556 master-2 kubenswrapper[4776]: I1011 10:26:06.158476 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-2"] Oct 11 10:26:06.158667 master-2 kubenswrapper[4776]: I1011 10:26:06.158608 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.159824 master-2 kubenswrapper[4776]: I1011 10:26:06.159754 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.159921 master-2 kubenswrapper[4776]: I1011 10:26:06.159861 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.159921 master-2 kubenswrapper[4776]: I1011 10:26:06.159888 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.160634 master-2 kubenswrapper[4776]: I1011 10:26:06.160543 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.160634 master-2 kubenswrapper[4776]: I1011 10:26:06.160623 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162161 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162203 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162220 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.167718 master-2 kubenswrapper[4776]: E1011 10:26:06.167468 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.159827412 +0000 UTC m=+0.944254161,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.175249 master-2 kubenswrapper[4776]: E1011 10:26:06.175104 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: 
User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.15987773 +0000 UTC m=+0.944304489,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.182958 master-2 kubenswrapper[4776]: E1011 10:26:06.182793 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.159902499 +0000 UTC m=+0.944329288,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.191239 master-2 kubenswrapper[4776]: E1011 10:26:06.191039 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" 
in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.162190545 +0000 UTC m=+0.946617264,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.199813 master-2 kubenswrapper[4776]: E1011 10:26:06.199614 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.162213824 +0000 UTC m=+0.946640543,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.208070 master-2 kubenswrapper[4776]: E1011 10:26:06.207903 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.162229053 +0000 UTC m=+0.946655772,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.280506 master-2 kubenswrapper[4776]: I1011 10:26:06.280423 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.280774 master-2 kubenswrapper[4776]: I1011 10:26:06.280513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.330114 master-2 kubenswrapper[4776]: I1011 10:26:06.329995 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.331503 master-2 kubenswrapper[4776]: I1011 10:26:06.331444 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331535 4776 
kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331559 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331626 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.340056 master-2 kubenswrapper[4776]: E1011 10:26:06.339930 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.331507909 +0000 UTC m=+1.115934658,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.340191 master-2 kubenswrapper[4776]: E1011 10:26:06.340110 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:06.342469 master-2 kubenswrapper[4776]: E1011 10:26:06.342314 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.331550439 +0000 UTC m=+1.115977188,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.352385 master-2 kubenswrapper[4776]: E1011 10:26:06.352235 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.331570748 +0000 UTC m=+1.115997497,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.381314 master-2 kubenswrapper[4776]: I1011 10:26:06.381162 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381314 master-2 kubenswrapper[4776]: I1011 10:26:06.381315 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381735 master-2 kubenswrapper[4776]: I1011 10:26:06.381399 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381735 master-2 kubenswrapper[4776]: I1011 10:26:06.381506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.487613 master-2 kubenswrapper[4776]: I1011 10:26:06.487365 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.521132 master-2 kubenswrapper[4776]: E1011 10:26:06.521010 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 11 10:26:06.741395 master-2 kubenswrapper[4776]: I1011 10:26:06.741212 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.742407 master-2 kubenswrapper[4776]: I1011 10:26:06.742375 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.742407 master-2 kubenswrapper[4776]: I1011 10:26:06.742406 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.742476 master-2 kubenswrapper[4776]: I1011 10:26:06.742415 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.742476 master-2 kubenswrapper[4776]: I1011 10:26:06.742441 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.753986 master-2 kubenswrapper[4776]: E1011 10:26:06.753937 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:06.754135 master-2 kubenswrapper[4776]: E1011 10:26:06.753995 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.742396025 +0000 UTC m=+1.526822734,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.762071 master-2 kubenswrapper[4776]: E1011 10:26:06.761949 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.742412231 +0000 UTC m=+1.526838940,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.771920 master-2 kubenswrapper[4776]: E1011 10:26:06.771661 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.742420464 +0000 UTC m=+1.526847173,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.818535 master-2 kubenswrapper[4776]: W1011 10:26:06.818428 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:06.818535 master-2 kubenswrapper[4776]: E1011 10:26:06.818514 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:06.888660 master-2 kubenswrapper[4776]: I1011 10:26:06.888561 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:06.925295 master-1 systemd[1]: Starting Kubernetes Kubelet... 
Oct 11 10:26:07.016203 master-2 kubenswrapper[4776]: W1011 10:26:07.016038 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:07.016203 master-2 kubenswrapper[4776]: E1011 10:26:07.016103 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.183761 master-2 kubenswrapper[4776]: W1011 10:26:07.183657 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:07.183937 master-2 kubenswrapper[4776]: E1011 10:26:07.183776 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.252126 master-2 kubenswrapper[4776]: W1011 10:26:07.251543 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf022eff2d978fee6b366ac18a80aa53c.slice/crio-e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947 WatchSource:0}: Error finding container e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947: Status 404 returned error can't find the container with id e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947 Oct 11 10:26:07.258775 master-2 kubenswrapper[4776]: I1011 
10:26:07.258727 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:26:07.271309 master-2 kubenswrapper[4776]: E1011 10:26:07.271010 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e631c68f1b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\",Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:07.258595099 +0000 UTC m=+2.043021818,LastTimestamp:2025-10-11 10:26:07.258595099 +0000 UTC m=+2.043021818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.310900 master-2 kubenswrapper[4776]: W1011 10:26:07.310819 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:07.310900 master-2 kubenswrapper[4776]: E1011 10:26:07.310894 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.330112 master-2 
kubenswrapper[4776]: E1011 10:26:07.330038 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Oct 11 10:26:07.554971 master-2 kubenswrapper[4776]: I1011 10:26:07.554753 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:07.556854 master-2 kubenswrapper[4776]: I1011 10:26:07.556799 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556873 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556927 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556983 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:07.568460 master-2 kubenswrapper[4776]: E1011 10:26:07.568381 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:07.568574 master-2 kubenswrapper[4776]: E1011 10:26:07.568382 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:07.556848208 +0000 UTC m=+2.341274957,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.578011 master-2 kubenswrapper[4776]: E1011 10:26:07.577893 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:07.556885617 +0000 UTC m=+2.341312356,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.587622 master-2 kubenswrapper[4776]: E1011 10:26:07.587499 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:07.556937328 +0000 UTC m=+2.341364067,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.892315 master-2 kubenswrapper[4776]: I1011 10:26:07.892110 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.064705 master-2 kubenswrapper[4776]: I1011 10:26:08.064590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerStarted","Data":"e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947"} Oct 11 10:26:08.548755 master-2 kubenswrapper[4776]: W1011 10:26:08.548660 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.548755 master-2 kubenswrapper[4776]: E1011 10:26:08.548754 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:08.809342 master-2 kubenswrapper[4776]: 
W1011 10:26:08.809240 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:08.809342 master-2 kubenswrapper[4776]: E1011 10:26:08.809291 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:08.888590 master-2 kubenswrapper[4776]: I1011 10:26:08.888538 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.940156 master-2 kubenswrapper[4776]: E1011 10:26:08.940100 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Oct 11 10:26:09.044754 master-2 kubenswrapper[4776]: E1011 10:26:09.044520 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e69b93cc13 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" in 1.775s (1.775s including waiting). Image size: 458126368 bytes.,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.033653267 +0000 UTC m=+3.818079996,LastTimestamp:2025-10-11 10:26:09.033653267 +0000 UTC m=+3.818079996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.169274 master-2 kubenswrapper[4776]: I1011 10:26:09.169162 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:09.170868 master-2 kubenswrapper[4776]: I1011 10:26:09.170807 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:09.170868 master-2 kubenswrapper[4776]: I1011 10:26:09.170868 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:09.171014 master-2 kubenswrapper[4776]: I1011 10:26:09.170889 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:09.171014 master-2 kubenswrapper[4776]: I1011 10:26:09.170932 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:09.183438 master-2 kubenswrapper[4776]: E1011 10:26:09.183379 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 
11 10:26:09.183559 master-2 kubenswrapper[4776]: E1011 10:26:09.183381 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:09.17084912 +0000 UTC m=+3.955275869,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.191530 master-2 kubenswrapper[4776]: E1011 10:26:09.191378 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:09.170880181 +0000 UTC m=+3.955306930,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.314872 master-2 kubenswrapper[4776]: E1011 10:26:09.314573 
4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6aba86340 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.303438144 +0000 UTC m=+4.087864853,LastTimestamp:2025-10-11 10:26:09.303438144 +0000 UTC m=+4.087864853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.331751 master-2 kubenswrapper[4776]: E1011 10:26:09.331553 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6acc32bc5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.321970629 +0000 UTC m=+4.106397348,LastTimestamp:2025-10-11 10:26:09.321970629 +0000 UTC m=+4.106397348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.527942 master-2 kubenswrapper[4776]: W1011 10:26:09.527652 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:09.527942 master-2 kubenswrapper[4776]: E1011 10:26:09.527851 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:09.775049 master-2 kubenswrapper[4776]: W1011 10:26:09.774904 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:09.775049 master-2 kubenswrapper[4776]: E1011 10:26:09.775024 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:09.892722 master-2 kubenswrapper[4776]: I1011 10:26:09.892460 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:10.072916 master-2 kubenswrapper[4776]: I1011 10:26:10.072770 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="45989505ea87eb5d207184ab8cca1a7ff41c0ae043eba001621e60253585d1e0" exitCode=0 Oct 11 
10:26:10.072916 master-2 kubenswrapper[4776]: I1011 10:26:10.072857 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"45989505ea87eb5d207184ab8cca1a7ff41c0ae043eba001621e60253585d1e0"} Oct 11 10:26:10.073807 master-2 kubenswrapper[4776]: I1011 10:26:10.072967 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:10.074739 master-2 kubenswrapper[4776]: I1011 10:26:10.074642 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:10.074858 master-2 kubenswrapper[4776]: I1011 10:26:10.074766 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:10.074858 master-2 kubenswrapper[4776]: I1011 10:26:10.074796 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:10.105409 master-2 kubenswrapper[4776]: E1011 10:26:10.105174 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,LastTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.380528 master-2 kubenswrapper[4776]: E1011 10:26:10.380283 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6eb293eb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,LastTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.399086 master-2 kubenswrapper[4776]: E1011 10:26:10.398844 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6ec577694 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,LastTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.890429 master-2 kubenswrapper[4776]: I1011 10:26:10.890340 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.080314 master-2 kubenswrapper[4776]: I1011 10:26:11.080220 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/0.log" Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080791 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" exitCode=1 Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080845 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe"} Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080959 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Oct 11 10:26:11.082042 master-2 kubenswrapper[4776]: I1011 10:26:11.081983 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:11.082042 master-2 kubenswrapper[4776]: I1011 10:26:11.082036 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:11.082234 master-2 kubenswrapper[4776]: I1011 10:26:11.082050 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:11.095243 master-2 kubenswrapper[4776]: I1011 10:26:11.095169 4776 scope.go:117] "RemoveContainer" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" Oct 11 10:26:11.110142 master-2 kubenswrapper[4776]: E1011 10:26:11.109970 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,LastTimestamp:2025-10-11 10:26:11.099208943 +0000 UTC m=+5.883635692,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.340725 master-2 kubenswrapper[4776]: E1011 10:26:11.340527 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6eb293eb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6eb293eb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,LastTimestamp:2025-10-11 10:26:11.329257892 +0000 UTC m=+6.113684611,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.353965 master-2 kubenswrapper[4776]: E1011 10:26:11.353793 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6ec577694\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6ec577694 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,LastTimestamp:2025-10-11 10:26:11.343758019 +0000 UTC m=+6.128184768,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.891031 master-2 kubenswrapper[4776]: I1011 10:26:11.890914 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.971171 master-2 kubenswrapper[4776]: W1011 10:26:11.971071 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.971360 master-2 kubenswrapper[4776]: E1011 10:26:11.971161 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.085418 master-2 kubenswrapper[4776]: I1011 10:26:12.085296 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log" Oct 11 10:26:12.086483 master-2 
kubenswrapper[4776]: I1011 10:26:12.085941 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/0.log" Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086449 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" exitCode=1 Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086523 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c"} Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086548 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.086842 master-2 kubenswrapper[4776]: I1011 10:26:12.086628 4776 scope.go:117] "RemoveContainer" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 11 10:26:12.087624 master-1 kubenswrapper[4771]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:12.088192 master-2 kubenswrapper[4776]: I1011 10:26:12.088139 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:12.088263 master-2 kubenswrapper[4776]: I1011 10:26:12.088196 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:12.088263 master-2 kubenswrapper[4776]: I1011 10:26:12.088220 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:12.090281 master-1 kubenswrapper[4771]: I1011 10:26:12.088539 4771 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093280 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093318 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093327 4771 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093335 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093344 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:12.093343 master-1 kubenswrapper[4771]: W1011 10:26:12.093381 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093393 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093402 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093411 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093419 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093427 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093435 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093443 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093450 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093458 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093466 4771 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093474 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093490 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093498 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093506 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093514 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093522 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093529 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093540 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093551 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:12.093829 master-1 kubenswrapper[4771]: W1011 10:26:12.093560 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093569 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093577 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093585 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093593 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093600 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093608 4771 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093618 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093625 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093635 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093646 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093655 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093664 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093673 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093689 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093698 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093707 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093715 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093722 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093731 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:26:12.095477 master-1 kubenswrapper[4771]: W1011 10:26:12.093739 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093747 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093755 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093763 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093771 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093778 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093786 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093794 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093801 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093809 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093816 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093824 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093833 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093845 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093855 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093864 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093873 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093882 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093891 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:26:12.096828 master-1 kubenswrapper[4771]: W1011 10:26:12.093899 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093908 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093916 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093925 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093933 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093941 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093948 4771 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: W1011 10:26:12.093962 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094117 4771 flags.go:64] FLAG: --address="0.0.0.0"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094137 4771 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094151 4771 flags.go:64] FLAG: --anonymous-auth="true"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094162 4771 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094173 4771 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094182 4771 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094194 4771 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094205 4771 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094214 4771 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094223 4771 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094232 4771 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094242 4771 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094252 4771 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094261 4771 flags.go:64] FLAG: --cgroup-root=""
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094269 4771 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 11 10:26:12.098100 master-1 kubenswrapper[4771]: I1011 10:26:12.094278 4771 flags.go:64] FLAG: --client-ca-file=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094287 4771 flags.go:64] FLAG: --cloud-config=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094295 4771 flags.go:64] FLAG: --cloud-provider=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094304 4771 flags.go:64] FLAG: --cluster-dns="[]"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094317 4771 flags.go:64] FLAG: --cluster-domain=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094325 4771 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094334 4771 flags.go:64] FLAG: --config-dir=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094344 4771 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094400 4771 flags.go:64] FLAG: --container-log-max-files="5"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094412 4771 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094421 4771 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094429 4771 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094440 4771 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094449 4771 flags.go:64] FLAG: --contention-profiling="false"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094458 4771 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094467 4771 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094481 4771 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094490 4771 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094502 4771 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094511 4771 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094520 4771 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094530 4771 flags.go:64] FLAG: --enable-load-reader="false"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094539 4771 flags.go:64] FLAG: --enable-server="true"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094550 4771 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094564 4771 flags.go:64] FLAG: --event-burst="100"
Oct 11 10:26:12.099657 master-1 kubenswrapper[4771]: I1011 10:26:12.094576 4771 flags.go:64] FLAG: --event-qps="50"
Oct 11 10:26:12.101105 master-2 kubenswrapper[4776]: I1011 10:26:12.101052 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094587 4771 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094599 4771 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094611 4771 flags.go:64] FLAG: --eviction-hard=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094625 4771 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094637 4771 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094648 4771 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094659 4771 flags.go:64] FLAG: --eviction-soft=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094668 4771 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094678 4771 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094687 4771 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094696 4771 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094704 4771 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094714 4771 flags.go:64] FLAG: --fail-swap-on="true"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094722 4771 flags.go:64] FLAG: --feature-gates=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094733 4771 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094742 4771 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094752 4771 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094761 4771 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094770 4771 flags.go:64] FLAG: --healthz-port="10248"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094780 4771 flags.go:64] FLAG: --help="false"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094789 4771 flags.go:64] FLAG: --hostname-override=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094797 4771 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094809 4771 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094818 4771 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 11 10:26:12.101287 master-1 kubenswrapper[4771]: I1011 10:26:12.094827 4771 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 11 10:26:12.101318 master-2 kubenswrapper[4776]: E1011 10:26:12.101271 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094836 4771 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094844 4771 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094853 4771 flags.go:64] FLAG: --image-service-endpoint=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094862 4771 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094871 4771 flags.go:64] FLAG: --kube-api-burst="100"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094880 4771 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.094891 4771 flags.go:64] FLAG: --kube-api-qps="50"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095006 4771 flags.go:64] FLAG: --kube-reserved=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095021 4771 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095033 4771 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095044 4771 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095055 4771 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095067 4771 flags.go:64] FLAG: --lock-file=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095078 4771 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095090 4771 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095102 4771 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095118 4771 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095129 4771 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095140 4771 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095151 4771 flags.go:64] FLAG: --logging-format="text"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095161 4771 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095170 4771 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095179 4771 flags.go:64] FLAG: --manifest-url=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095188 4771 flags.go:64] FLAG: --manifest-url-header=""
Oct 11 10:26:12.102800 master-1 kubenswrapper[4771]: I1011 10:26:12.095200 4771 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095209 4771 flags.go:64] FLAG: --max-open-files="1000000"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095220 4771 flags.go:64] FLAG: --max-pods="110"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095229 4771 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095238 4771 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095251 4771 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095260 4771 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095269 4771 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095278 4771 flags.go:64] FLAG: --node-ip="192.168.34.11"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095287 4771 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095309 4771 flags.go:64] FLAG: --node-status-max-images="50"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095320 4771 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095331 4771 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095342 4771 flags.go:64] FLAG: --pod-cidr=""
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095416 4771 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d66b9dbe1d071d7372c477a78835fb65b48ea82db00d23e9086af5cfcb194ad"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095434 4771 flags.go:64] FLAG: --pod-manifest-path=""
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095443 4771 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095452 4771 flags.go:64] FLAG: --pods-per-core="0"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095463 4771 flags.go:64] FLAG: --port="10250"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095472 4771 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095482 4771 flags.go:64] FLAG: --provider-id=""
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095492 4771 flags.go:64] FLAG: --qos-reserved=""
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095502 4771 flags.go:64] FLAG: --read-only-port="10255"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095514 4771 flags.go:64] FLAG: --register-node="true"
Oct 11 10:26:12.104348 master-1 kubenswrapper[4771]: I1011 10:26:12.095523 4771 flags.go:64] FLAG: --register-schedulable="true"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095533 4771 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095549 4771 flags.go:64] FLAG: --registry-burst="10"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095558 4771 flags.go:64] FLAG: --registry-qps="5"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095566 4771 flags.go:64] FLAG: --reserved-cpus=""
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095575 4771 flags.go:64] FLAG: --reserved-memory=""
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095586 4771 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095595 4771 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095604 4771 flags.go:64] FLAG: --rotate-certificates="false"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095613 4771 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095622 4771 flags.go:64] FLAG: --runonce="false"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095630 4771 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095640 4771 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095649 4771 flags.go:64] FLAG: --seccomp-default="false"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095661 4771 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095670 4771 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095679 4771 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095689 4771 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095698 4771 flags.go:64] FLAG: --storage-driver-password="root"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095706 4771 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095715 4771 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095723 4771 flags.go:64] FLAG: --storage-driver-user="root"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095732 4771 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095741 4771 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095750 4771 flags.go:64] FLAG: --system-cgroups=""
Oct 11 10:26:12.105822 master-1 kubenswrapper[4771]: I1011 10:26:12.095759 4771 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095773 4771 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095781 4771 flags.go:64] FLAG: --tls-cert-file=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095791 4771 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095803 4771 flags.go:64] FLAG: --tls-min-version=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095813 4771 flags.go:64] FLAG: --tls-private-key-file=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095822 4771 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095831 4771 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095840 4771 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095849 4771 flags.go:64] FLAG: --v="2"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095860 4771 flags.go:64] FLAG: --version="false"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095871 4771 flags.go:64] FLAG: --vmodule=""
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095881 4771 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: I1011 10:26:12.095890 4771 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096100 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096111 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096120 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096129 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096139 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096147 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096155 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096166 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096173 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 11 10:26:12.107204 master-1 kubenswrapper[4771]: W1011 10:26:12.096181 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096189 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096196 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096204 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096212 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096219 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096227 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096235 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096243 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096250 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096258 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096266 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096274 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096282 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096289 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096297 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096305 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096312 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096321 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096329 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 11 10:26:12.108553 master-1 kubenswrapper[4771]: W1011 10:26:12.096336 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096345 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096377 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096385 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096392 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096400 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096408 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096415 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096423 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096431 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096444 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096451 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096459 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096467 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096474 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096482 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096490 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096499 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096507 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096515 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 11 10:26:12.109968 master-1 kubenswrapper[4771]: W1011 10:26:12.096523 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096533 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096547 4771 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096555 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096565 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096575 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096583 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096592 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096600 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096608 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096615 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096626 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096636 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096646 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096655 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096664 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096674 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096683 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 11 10:26:12.111158 master-1 kubenswrapper[4771]: W1011 10:26:12.096693 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 11 10:26:12.111900 master-2 kubenswrapper[4776]: E1011 10:26:12.111646 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e7526b1e3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio
in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c),Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,LastTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.096701 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.096710 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.096718 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.096732 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: I1011 10:26:12.096745 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: I1011 10:26:12.111075 4771 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: I1011 10:26:12.111107 4771 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111228 4771 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111240 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111250 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111259 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111270 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111280 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111290 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111299 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:12.112172 master-1 kubenswrapper[4771]: W1011 10:26:12.111308 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111316 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111324 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111331 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111339 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111347 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 
10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111389 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111400 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111413 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111427 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111435 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111444 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111453 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111462 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111473 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111482 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111492 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111501 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111511 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:12.113164 master-1 kubenswrapper[4771]: W1011 10:26:12.111518 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111527 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111535 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111543 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111551 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111559 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111567 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111579 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111588 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111596 4771 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111604 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111612 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111620 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111629 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111637 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111645 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111653 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111660 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111668 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111676 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:12.114431 master-1 kubenswrapper[4771]: W1011 10:26:12.111683 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:12.115947 master-1 
kubenswrapper[4771]: W1011 10:26:12.111691 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111699 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111707 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111714 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111722 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111730 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111738 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111746 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111753 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111761 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111769 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111777 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111784 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 
10:26:12.111792 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111800 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111811 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111820 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111828 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111837 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:12.115947 master-1 kubenswrapper[4771]: W1011 10:26:12.111845 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.111852 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.111861 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.111877 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.111886 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: I1011 10:26:12.111898 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112112 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112126 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112135 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112145 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112154 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112164 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112173 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112182 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112191 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112199 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:12.117538 master-1 kubenswrapper[4771]: W1011 10:26:12.112207 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112216 4771 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112225 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112233 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112241 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112249 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112256 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112265 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112272 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112280 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112288 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112296 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112305 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112313 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112321 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: 
W1011 10:26:12.112328 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112336 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112344 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112379 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112391 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:12.118804 master-1 kubenswrapper[4771]: W1011 10:26:12.112402 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112411 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112421 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112433 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112443 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112451 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112459 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112467 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112475 4771 feature_gate.go:330] unrecognized 
feature gate: NewOLM Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112482 4771 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112490 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112498 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112506 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112514 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112522 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112530 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112537 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112547 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112557 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:26:12.120614 master-1 kubenswrapper[4771]: W1011 10:26:12.112567 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112576 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112585 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112593 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112604 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112616 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112625 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112633 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112641 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112649 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112660 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112670 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112679 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112687 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112696 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112705 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112713 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112721 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112730 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:12.121666 master-1 kubenswrapper[4771]: W1011 10:26:12.112738 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: W1011 10:26:12.112747 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: W1011 10:26:12.112757 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: W1011 10:26:12.112765 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.112776 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.113001 4771 server.go:940] "Client rotation is on, will bootstrap in background" Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.118815 4771 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.120723 4771 server.go:997] "Starting client certificate rotation" Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.120748 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 11 10:26:12.122713 master-1 kubenswrapper[4771]: I1011 10:26:12.121028 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Oct 11 10:26:12.149930 master-2 kubenswrapper[4776]: E1011 10:26:12.149807 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Oct 11 10:26:12.152954 master-1 kubenswrapper[4771]: I1011 10:26:12.152827 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:12.157066 master-1 kubenswrapper[4771]: I1011 10:26:12.156889 4771 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:12.175403 master-1 kubenswrapper[4771]: I1011 10:26:12.175300 4771 log.go:25] "Validated CRI v1 runtime API" Oct 11 10:26:12.184188 master-1 kubenswrapper[4771]: I1011 10:26:12.184125 4771 log.go:25] "Validated CRI v1 image API" Oct 11 10:26:12.186840 master-1 kubenswrapper[4771]: I1011 10:26:12.186789 4771 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 11 10:26:12.191398 master-1 kubenswrapper[4771]: I1011 10:26:12.191252 4771 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 e0ffcf34-f743-4501-a23b-0da71751fe05:/dev/vda3] Oct 11 10:26:12.191398 master-1 kubenswrapper[4771]: I1011 10:26:12.191296 4771 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Oct 11 10:26:12.195537 master-1 kubenswrapper[4771]: I1011 10:26:12.195488 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 11 10:26:12.216254 master-1 kubenswrapper[4771]: I1011 10:26:12.215714 4771 manager.go:217] Machine: {Timestamp:2025-10-11 10:26:12.213629119 +0000 UTC m=+4.187855630 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514157568 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:fe2953088e5c4cf684aae6b361d97c24 SystemUUID:fe295308-8e5c-4cf6-84aa-e6b361d97c24 BootID:2a34dbce-856d-4b83-8fb9-526c6edd4eae 
Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257078784 Type:vfs Inodes:6166279 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3e:b4:b2 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:3e:3e:b4:b2 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:c8:69:ef Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:98:96:aa Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:6a:ea:32:ab:c4:1e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514157568 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified 
Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} 
{Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 11 10:26:12.216254 master-1 kubenswrapper[4771]: I1011 10:26:12.216175 4771 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 11 10:26:12.216640 master-1 kubenswrapper[4771]: I1011 10:26:12.216432 4771 manager.go:233] Version: {KernelVersion:5.14.0-427.91.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202509241235-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 11 10:26:12.216977 master-1 kubenswrapper[4771]: I1011 10:26:12.216925 4771 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 11 10:26:12.217345 master-1 kubenswrapper[4771]: I1011 10:26:12.217273 4771 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 11 10:26:12.217735 master-1 kubenswrapper[4771]: I1011 10:26:12.217330 4771 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-1","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percent
age":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 11 10:26:12.217735 master-1 kubenswrapper[4771]: I1011 10:26:12.217728 4771 topology_manager.go:138] "Creating topology manager with none policy" Oct 11 10:26:12.217947 master-1 kubenswrapper[4771]: I1011 10:26:12.217749 4771 container_manager_linux.go:303] "Creating device plugin manager" Oct 11 10:26:12.217947 master-1 kubenswrapper[4771]: I1011 10:26:12.217779 4771 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:12.217947 master-1 kubenswrapper[4771]: I1011 10:26:12.217808 4771 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:12.219231 master-1 kubenswrapper[4771]: I1011 10:26:12.219174 4771 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:12.219348 master-1 kubenswrapper[4771]: I1011 10:26:12.219329 4771 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 11 10:26:12.225488 master-1 kubenswrapper[4771]: I1011 10:26:12.225443 4771 kubelet.go:418] "Attempting to sync node with API server" Oct 11 10:26:12.225488 master-1 kubenswrapper[4771]: I1011 10:26:12.225480 4771 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 11 10:26:12.225709 master-1 kubenswrapper[4771]: I1011 10:26:12.225511 4771 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 11 10:26:12.225709 master-1 kubenswrapper[4771]: I1011 10:26:12.225546 4771 kubelet.go:324] "Adding apiserver pod source" Oct 11 10:26:12.226268 master-1 
kubenswrapper[4771]: I1011 10:26:12.226175 4771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 11 10:26:12.236766 master-1 kubenswrapper[4771]: I1011 10:26:12.236691 4771 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.12-3.rhaos4.18.gitdc59c78.el9" apiVersion="v1" Oct 11 10:26:12.238768 master-1 kubenswrapper[4771]: I1011 10:26:12.238729 4771 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 11 10:26:12.239993 master-1 kubenswrapper[4771]: I1011 10:26:12.239952 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 11 10:26:12.240060 master-1 kubenswrapper[4771]: I1011 10:26:12.239996 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 11 10:26:12.240060 master-1 kubenswrapper[4771]: I1011 10:26:12.240014 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 11 10:26:12.240060 master-1 kubenswrapper[4771]: I1011 10:26:12.240029 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 11 10:26:12.240060 master-1 kubenswrapper[4771]: I1011 10:26:12.240051 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240065 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240080 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240105 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240121 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240135 4771 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 11 10:26:12.240189 master-1 kubenswrapper[4771]: I1011 10:26:12.240155 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 11 10:26:12.240782 master-1 kubenswrapper[4771]: I1011 10:26:12.240743 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 11 10:26:12.243099 master-1 kubenswrapper[4771]: I1011 10:26:12.243063 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 11 10:26:12.244113 master-1 kubenswrapper[4771]: W1011 10:26:12.244074 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:12.244184 master-1 kubenswrapper[4771]: I1011 10:26:12.244122 4771 server.go:1280] "Started kubelet" Oct 11 10:26:12.244184 master-1 kubenswrapper[4771]: E1011 10:26:12.244166 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.244653 master-1 kubenswrapper[4771]: W1011 10:26:12.244577 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:12.244734 master-1 kubenswrapper[4771]: E1011 10:26:12.244673 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.245253 master-1 
kubenswrapper[4771]: I1011 10:26:12.245092 4771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 11 10:26:12.245448 master-1 kubenswrapper[4771]: I1011 10:26:12.245275 4771 server_v1.go:47] "podresources" method="list" useActivePods=true Oct 11 10:26:12.246389 master-1 systemd[1]: Started Kubernetes Kubelet. Oct 11 10:26:12.247639 master-1 kubenswrapper[4771]: I1011 10:26:12.247585 4771 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 11 10:26:12.247702 master-1 kubenswrapper[4771]: I1011 10:26:12.247578 4771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 11 10:26:12.248914 master-1 kubenswrapper[4771]: I1011 10:26:12.248865 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 11 10:26:12.248963 master-1 kubenswrapper[4771]: I1011 10:26:12.248944 4771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 11 10:26:12.249226 master-1 kubenswrapper[4771]: I1011 10:26:12.249187 4771 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 11 10:26:12.249226 master-1 kubenswrapper[4771]: I1011 10:26:12.249223 4771 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 11 10:26:12.249318 master-1 kubenswrapper[4771]: I1011 10:26:12.249273 4771 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Oct 11 10:26:12.249318 master-1 kubenswrapper[4771]: E1011 10:26:12.249274 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:12.249493 master-1 kubenswrapper[4771]: I1011 10:26:12.249444 4771 reconstruct.go:97] "Volume reconstruction finished" Oct 11 10:26:12.249493 master-1 kubenswrapper[4771]: I1011 10:26:12.249484 4771 reconciler.go:26] "Reconciler: start to sync state" Oct 11 10:26:12.250129 master-1 kubenswrapper[4771]: I1011 10:26:12.250091 4771 
server.go:449] "Adding debug handlers to kubelet server" Oct 11 10:26:12.255397 master-1 kubenswrapper[4771]: I1011 10:26:12.255257 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:12.255724 master-1 kubenswrapper[4771]: E1011 10:26:12.255453 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 11 10:26:12.255931 master-1 kubenswrapper[4771]: W1011 10:26:12.255879 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:12.256152 master-1 kubenswrapper[4771]: E1011 10:26:12.255934 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.256152 master-1 kubenswrapper[4771]: E1011 10:26:12.254886 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75aeefbfb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.244077563 +0000 UTC m=+4.218304044,LastTimestamp:2025-10-11 10:26:12.244077563 +0000 UTC m=+4.218304044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.259167 master-1 kubenswrapper[4771]: I1011 10:26:12.258791 4771 factory.go:55] Registering systemd factory Oct 11 10:26:12.259167 master-1 kubenswrapper[4771]: I1011 10:26:12.258823 4771 factory.go:221] Registration of the systemd container factory successfully Oct 11 10:26:12.259622 master-1 kubenswrapper[4771]: I1011 10:26:12.259332 4771 factory.go:153] Registering CRI-O factory Oct 11 10:26:12.259622 master-1 kubenswrapper[4771]: I1011 10:26:12.259399 4771 factory.go:221] Registration of the crio container factory successfully Oct 11 10:26:12.259622 master-1 kubenswrapper[4771]: I1011 10:26:12.259518 4771 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 11 10:26:12.259622 master-1 kubenswrapper[4771]: I1011 10:26:12.259568 4771 factory.go:103] Registering Raw factory Oct 11 10:26:12.259622 master-1 kubenswrapper[4771]: I1011 10:26:12.259595 4771 manager.go:1196] Started watching for new ooms in manager Oct 11 10:26:12.260885 master-1 kubenswrapper[4771]: I1011 10:26:12.260830 4771 manager.go:319] Starting recovery of all containers Oct 11 10:26:12.265303 master-1 kubenswrapper[4771]: E1011 10:26:12.265254 4771 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Oct 11 10:26:12.286460 master-1 kubenswrapper[4771]: I1011 10:26:12.286046 4771 manager.go:324] Recovery completed Oct 11 10:26:12.300631 master-1 kubenswrapper[4771]: I1011 10:26:12.300573 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.303066 master-1 kubenswrapper[4771]: I1011 10:26:12.302956 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:12.303066 master-1 kubenswrapper[4771]: I1011 10:26:12.303038 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:12.303066 master-1 kubenswrapper[4771]: I1011 10:26:12.303051 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:12.304683 master-1 kubenswrapper[4771]: I1011 10:26:12.304652 4771 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 11 10:26:12.304683 master-1 kubenswrapper[4771]: I1011 10:26:12.304674 4771 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 11 10:26:12.304876 master-1 kubenswrapper[4771]: I1011 10:26:12.304716 4771 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:12.308100 master-1 kubenswrapper[4771]: E1011 10:26:12.307910 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.319487 master-1 kubenswrapper[4771]: E1011 10:26:12.319247 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.329083 master-1 kubenswrapper[4771]: E1011 10:26:12.328884 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC 
m=+4.277284510,LastTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.341246 master-1 kubenswrapper[4771]: I1011 10:26:12.341054 4771 policy_none.go:49] "None policy: Start" Oct 11 10:26:12.343013 master-1 kubenswrapper[4771]: I1011 10:26:12.342975 4771 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 11 10:26:12.343141 master-1 kubenswrapper[4771]: I1011 10:26:12.343043 4771 state_mem.go:35] "Initializing new in-memory state store" Oct 11 10:26:12.350407 master-1 kubenswrapper[4771]: E1011 10:26:12.350341 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:12.384181 master-2 kubenswrapper[4776]: I1011 10:26:12.384093 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.426453 master-1 kubenswrapper[4771]: I1011 10:26:12.426347 4771 manager.go:334] "Starting Device Plugin manager" Oct 11 10:26:12.426677 master-1 kubenswrapper[4771]: I1011 10:26:12.426495 4771 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 11 10:26:12.426677 master-1 kubenswrapper[4771]: I1011 10:26:12.426527 4771 server.go:79] "Starting device plugin registration server" Oct 11 10:26:12.427285 master-1 kubenswrapper[4771]: I1011 10:26:12.427234 4771 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 11 10:26:12.428279 master-1 kubenswrapper[4771]: I1011 10:26:12.427339 4771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 11 10:26:12.428484 master-1 kubenswrapper[4771]: I1011 10:26:12.428344 4771 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 11 10:26:12.428562 master-1 
kubenswrapper[4771]: I1011 10:26:12.428519 4771 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 11 10:26:12.428562 master-1 kubenswrapper[4771]: I1011 10:26:12.428536 4771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429160 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429219 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429234 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:12.429463 master-2 kubenswrapper[4776]: I1011 10:26:12.429264 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:12.429760 master-1 kubenswrapper[4771]: E1011 10:26:12.429722 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-1\" not found" Oct 11 10:26:12.431905 master-1 kubenswrapper[4771]: I1011 10:26:12.431818 4771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 11 10:26:12.435587 master-1 kubenswrapper[4771]: I1011 10:26:12.435530 4771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 11 10:26:12.435703 master-1 kubenswrapper[4771]: I1011 10:26:12.435616 4771 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 11 10:26:12.435703 master-1 kubenswrapper[4771]: I1011 10:26:12.435657 4771 kubelet.go:2335] "Starting kubelet main sync loop" Oct 11 10:26:12.435910 master-1 kubenswrapper[4771]: E1011 10:26:12.435797 4771 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 11 10:26:12.439910 master-2 kubenswrapper[4776]: E1011 10:26:12.439854 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:12.440635 master-1 kubenswrapper[4771]: W1011 10:26:12.440556 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:12.440635 master-1 kubenswrapper[4771]: E1011 10:26:12.440608 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.441098 master-1 kubenswrapper[4771]: E1011 10:26:12.440951 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e7660e895e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.43069475 +0000 UTC m=+4.404921221,LastTimestamp:2025-10-11 10:26:12.43069475 +0000 UTC m=+4.404921221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.458252 master-1 kubenswrapper[4771]: E1011 10:26:12.458168 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 11 10:26:12.483980 master-2 kubenswrapper[4776]: W1011 10:26:12.483913 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:12.483980 master-2 kubenswrapper[4776]: E1011 10:26:12.483964 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.528590 master-1 kubenswrapper[4771]: I1011 10:26:12.528422 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.529923 master-1 kubenswrapper[4771]: I1011 10:26:12.529866 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" 
event="NodeHasSufficientMemory" Oct 11 10:26:12.529923 master-1 kubenswrapper[4771]: I1011 10:26:12.529920 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:12.530130 master-1 kubenswrapper[4771]: I1011 10:26:12.529959 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:12.530130 master-1 kubenswrapper[4771]: I1011 10:26:12.530056 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 11 10:26:12.536864 master-1 kubenswrapper[4771]: I1011 10:26:12.536798 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-1"] Oct 11 10:26:12.536986 master-1 kubenswrapper[4771]: I1011 10:26:12.536928 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.538123 master-1 kubenswrapper[4771]: I1011 10:26:12.537953 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:12.538123 master-1 kubenswrapper[4771]: I1011 10:26:12.538013 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:12.538123 master-1 kubenswrapper[4771]: I1011 10:26:12.538031 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:12.538572 master-1 kubenswrapper[4771]: I1011 10:26:12.538498 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.538572 master-1 kubenswrapper[4771]: I1011 10:26:12.538547 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.539990 master-1 kubenswrapper[4771]: I1011 10:26:12.539884 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:12.540244 master-1 kubenswrapper[4771]: I1011 10:26:12.540021 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:12.540244 master-1 kubenswrapper[4771]: I1011 10:26:12.540042 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:12.541811 master-1 kubenswrapper[4771]: E1011 10:26:12.541753 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 11 10:26:12.541811 master-1 kubenswrapper[4771]: E1011 10:26:12.541612 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:12.529905392 +0000 UTC m=+4.504131873,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.555090 master-1 kubenswrapper[4771]: E1011 10:26:12.554761 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:12.529932693 +0000 UTC m=+4.504159174,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.567991 master-1 kubenswrapper[4771]: E1011 10:26:12.567812 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:12.529969584 +0000 UTC m=+4.504196065,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.577877 master-1 kubenswrapper[4771]: E1011 10:26:12.577724 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:12.537992016 +0000 UTC m=+4.512218497,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.587989 master-1 kubenswrapper[4771]: E1011 10:26:12.587755 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:12.538024586 +0000 UTC m=+4.512251067,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.598519 master-1 kubenswrapper[4771]: E1011 10:26:12.598255 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:12.538040577 +0000 UTC m=+4.512267058,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.608909 master-1 kubenswrapper[4771]: E1011 10:26:12.608755 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:12.539924047 +0000 UTC m=+4.514150518,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.618482 master-1 kubenswrapper[4771]: E1011 10:26:12.618230 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:12.54003546 +0000 UTC m=+4.514261941,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.628695 master-1 kubenswrapper[4771]: E1011 10:26:12.628481 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:12.54005138 +0000 UTC m=+4.514277851,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.651098 master-1 kubenswrapper[4771]: I1011 10:26:12.651026 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.651098 master-1 kubenswrapper[4771]: I1011 10:26:12.651093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.743140 master-1 kubenswrapper[4771]: I1011 10:26:12.743041 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.744915 master-1 kubenswrapper[4771]: I1011 10:26:12.744865 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:12.745018 master-1 kubenswrapper[4771]: I1011 10:26:12.744956 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:12.745018 master-1 kubenswrapper[4771]: I1011 10:26:12.744976 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:12.745129 master-1 kubenswrapper[4771]: I1011 10:26:12.745056 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 11 10:26:12.752230 master-1 kubenswrapper[4771]: I1011 10:26:12.752152 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.752430 master-1 kubenswrapper[4771]: I1011 10:26:12.752245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.752430 master-1 kubenswrapper[4771]: I1011 10:26:12.752342 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.752519 master-1 kubenswrapper[4771]: I1011 10:26:12.752422 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.759747 master-1 kubenswrapper[4771]: E1011 10:26:12.759595 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:12.744934318 +0000 UTC m=+4.719160789,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.759998 master-1 kubenswrapper[4771]: E1011 10:26:12.759891 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 11 10:26:12.769944 master-1 kubenswrapper[4771]: E1011 10:26:12.769804 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:12.744969049 +0000 UTC m=+4.719195530,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.779961 master-1 kubenswrapper[4771]: E1011 10:26:12.779704 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:12.744985559 +0000 UTC m=+4.719212040,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:12.809490 master-2 kubenswrapper[4776]: W1011 10:26:12.809338 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:12.809490 master-2 kubenswrapper[4776]: E1011 10:26:12.809394 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.870930 master-1 kubenswrapper[4771]: E1011 10:26:12.870699 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 11 10:26:12.875715 master-1 kubenswrapper[4771]: I1011 10:26:12.875638 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 11 10:26:12.888386 master-2 kubenswrapper[4776]: I1011 10:26:12.888296 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:13.076859 master-1 kubenswrapper[4771]: W1011 10:26:13.076760 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:13.076859 master-1 kubenswrapper[4771]: E1011 10:26:13.076849 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:13.090637 master-2 kubenswrapper[4776]: I1011 10:26:13.090490 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log" Oct 11 10:26:13.091276 master-2 kubenswrapper[4776]: I1011 10:26:13.091243 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:13.092069 master-2 kubenswrapper[4776]: I1011 10:26:13.092017 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:13.092069 master-2 kubenswrapper[4776]: I1011 10:26:13.092063 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:13.092178 master-2 
kubenswrapper[4776]: I1011 10:26:13.092075 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:13.092414 master-2 kubenswrapper[4776]: I1011 10:26:13.092383 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" Oct 11 10:26:13.093076 master-2 kubenswrapper[4776]: E1011 10:26:13.092532 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:13.103304 master-2 kubenswrapper[4776]: E1011 10:26:13.103185 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e7526b1e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e7526b1e3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c),Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,LastTimestamp:2025-10-11 10:26:13.09251174 +0000 UTC 
m=+7.876938459,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:13.160986 master-1 kubenswrapper[4771]: I1011 10:26:13.160722 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:13.162739 master-1 kubenswrapper[4771]: I1011 10:26:13.162211 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:13.162739 master-1 kubenswrapper[4771]: I1011 10:26:13.162240 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:13.162739 master-1 kubenswrapper[4771]: I1011 10:26:13.162252 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:13.162739 master-1 kubenswrapper[4771]: I1011 10:26:13.162297 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 11 10:26:13.174034 master-1 kubenswrapper[4771]: E1011 10:26:13.173868 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:13.162229626 +0000 UTC m=+5.136456077,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:13.174291 master-1 kubenswrapper[4771]: E1011 10:26:13.174227 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 11 10:26:13.183514 master-1 kubenswrapper[4771]: E1011 10:26:13.183285 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:13.162246686 +0000 UTC m=+5.136473137,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:13.193291 master-1 kubenswrapper[4771]: E1011 10:26:13.193140 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:13.162261717 +0000 UTC m=+5.136488168,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:13.266880 master-1 kubenswrapper[4771]: I1011 10:26:13.266750 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:13.363452 master-1 kubenswrapper[4771]: W1011 10:26:13.363235 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:13.363452 master-1 kubenswrapper[4771]: E1011 10:26:13.363327 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:13.472490 master-1 kubenswrapper[4771]: W1011 10:26:13.472222 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:13.472490 master-1 kubenswrapper[4771]: E1011 10:26:13.472313 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 
10:26:13.539079 master-1 kubenswrapper[4771]: W1011 10:26:13.538950 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:13.539079 master-1 kubenswrapper[4771]: E1011 10:26:13.539045 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:13.680409 master-1 kubenswrapper[4771]: E1011 10:26:13.680281 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Oct 11 10:26:13.718577 master-1 kubenswrapper[4771]: W1011 10:26:13.718435 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3273b5dc02e0d8cacbf64fe78c713d50.slice/crio-d3beeb8f52049cfbfd643d5240049be7138d9ff83afdc0d99012e206f07dfb78 WatchSource:0}: Error finding container d3beeb8f52049cfbfd643d5240049be7138d9ff83afdc0d99012e206f07dfb78: Status 404 returned error can't find the container with id d3beeb8f52049cfbfd643d5240049be7138d9ff83afdc0d99012e206f07dfb78 Oct 11 10:26:13.722035 master-1 kubenswrapper[4771]: I1011 10:26:13.721986 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:26:13.733996 master-1 kubenswrapper[4771]: E1011 10:26:13.733782 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e7b304f330 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\",Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:13.721912112 +0000 UTC m=+5.696138553,LastTimestamp:2025-10-11 10:26:13.721912112 +0000 UTC m=+5.696138553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:13.891059 master-2 kubenswrapper[4776]: I1011 10:26:13.890985 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:13.974864 master-1 kubenswrapper[4771]: I1011 10:26:13.974608 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:13.976600 master-1 kubenswrapper[4771]: I1011 10:26:13.976510 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:13.976600 master-1 kubenswrapper[4771]: I1011 10:26:13.976567 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:13.976600 master-1 kubenswrapper[4771]: I1011 10:26:13.976586 4771 kubelet_node_status.go:724] "Recording event message 
for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:13.976852 master-1 kubenswrapper[4771]: I1011 10:26:13.976632 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 11 10:26:13.987376 master-1 kubenswrapper[4771]: E1011 10:26:13.987272 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 11 10:26:13.988041 master-1 kubenswrapper[4771]: E1011 10:26:13.987915 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:13.97654609 +0000 UTC m=+5.950772561,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:26:13.998063 master-1 kubenswrapper[4771]: E1011 10:26:13.997886 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:13.976577261 +0000 UTC m=+5.950803742,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:14.009232 master-1 kubenswrapper[4771]: E1011 10:26:14.009044 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72f48b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303058059 +0000 UTC m=+4.277284510,LastTimestamp:2025-10-11 10:26:13.976595591 +0000 UTC m=+5.950822062,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:14.250661 master-2 kubenswrapper[4776]: W1011 10:26:14.250598 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct 11 10:26:14.251275 master-2 kubenswrapper[4776]: E1011 10:26:14.250667 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 11 10:26:14.266752 master-1 kubenswrapper[4771]: I1011 10:26:14.266539 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:14.443294 master-1 kubenswrapper[4771]: I1011 10:26:14.443111 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerStarted","Data":"d3beeb8f52049cfbfd643d5240049be7138d9ff83afdc0d99012e206f07dfb78"}
Oct 11 10:26:14.889483 master-2 kubenswrapper[4776]: I1011 10:26:14.889420 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:15.267616 master-1 kubenswrapper[4771]: I1011 10:26:15.267547 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:15.291127 master-1 kubenswrapper[4771]: E1011 10:26:15.291035 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s"
Oct 11 10:26:15.379937 master-1 kubenswrapper[4771]: W1011 10:26:15.379876 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct 11 10:26:15.380096 master-1 kubenswrapper[4771]: E1011 10:26:15.379941 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 11 10:26:15.419418 master-1 kubenswrapper[4771]: E1011 10:26:15.419199 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e8179ab840 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" in 1.687s (1.687s including waiting). Image size: 458126368 bytes.,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:15.409449024 +0000 UTC m=+7.383675495,LastTimestamp:2025-10-11 10:26:15.409449024 +0000 UTC m=+7.383675495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:15.587782 master-1 kubenswrapper[4771]: I1011 10:26:15.587689 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:15.589489 master-1 kubenswrapper[4771]: I1011 10:26:15.589444 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 11 10:26:15.589489 master-1 kubenswrapper[4771]: I1011 10:26:15.589485 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 11 10:26:15.589489 master-1 kubenswrapper[4771]: I1011 10:26:15.589493 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 11 10:26:15.589675 master-1 kubenswrapper[4771]: I1011 10:26:15.589526 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1"
Oct 11 10:26:15.600624 master-1 kubenswrapper[4771]: E1011 10:26:15.600566 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1"
Oct 11 10:26:15.600813 master-1 kubenswrapper[4771]: E1011 10:26:15.600691 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e726697\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e726697 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303021719 +0000 UTC m=+4.277248170,LastTimestamp:2025-10-11 10:26:15.589474669 +0000 UTC m=+7.563701110,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:15.611136 master-1 kubenswrapper[4771]: E1011 10:26:15.611047 4771 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186d68e75e72c42f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186d68e75e72c42f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:12.303045679 +0000 UTC m=+4.277272130,LastTimestamp:2025-10-11 10:26:15.58949071 +0000 UTC m=+7.563717151,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:15.671151 master-1 kubenswrapper[4771]: E1011 10:26:15.670588 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e82685674f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:15.659710287 +0000 UTC m=+7.633936768,LastTimestamp:2025-10-11 10:26:15.659710287 +0000 UTC m=+7.633936768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:15.685793 master-1 kubenswrapper[4771]: E1011 10:26:15.685595 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e82779c569 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:15.675725161 +0000 UTC m=+7.649951612,LastTimestamp:2025-10-11 10:26:15.675725161 +0000 UTC m=+7.649951612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:15.741838 master-1 kubenswrapper[4771]: W1011 10:26:15.741775 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:15.741838 master-1 kubenswrapper[4771]: E1011 10:26:15.741854 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 11 10:26:15.780891 master-1 kubenswrapper[4771]: W1011 10:26:15.780838 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct 11 10:26:15.780891 master-1 kubenswrapper[4771]: E1011 10:26:15.780882 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 11 10:26:15.890479 master-2 kubenswrapper[4776]: I1011 10:26:15.890432 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:16.019803 master-2 kubenswrapper[4776]: E1011 10:26:16.019709 4776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-2\" not found"
Oct 11 10:26:16.268109 master-1 kubenswrapper[4771]: I1011 10:26:16.268022 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:16.450977 master-1 kubenswrapper[4771]: I1011 10:26:16.450883 4771 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="1bfd32ecba3d08834d22892fd6303dea33b14629ed98fdcf0e18d648b3722608" exitCode=0
Oct 11 10:26:16.451262 master-1 kubenswrapper[4771]: I1011 10:26:16.450962 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"1bfd32ecba3d08834d22892fd6303dea33b14629ed98fdcf0e18d648b3722608"}
Oct 11 10:26:16.451262 master-1 kubenswrapper[4771]: I1011 10:26:16.451032 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:16.452482 master-1 kubenswrapper[4771]: I1011 10:26:16.452417 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 11 10:26:16.452552 master-1 kubenswrapper[4771]: I1011 10:26:16.452497 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 11 10:26:16.452552 master-1 kubenswrapper[4771]: I1011 10:26:16.452506 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 11 10:26:16.479857 master-1 kubenswrapper[4771]: E1011 10:26:16.479666 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e856b5236d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:16.468145005 +0000 UTC m=+8.442371486,LastTimestamp:2025-10-11 10:26:16.468145005 +0000 UTC m=+8.442371486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:16.539691 master-1 kubenswrapper[4771]: W1011 10:26:16.539542 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct 11 10:26:16.539691 master-1 kubenswrapper[4771]: E1011 10:26:16.539614 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 11 10:26:16.754830 master-1 kubenswrapper[4771]: E1011 10:26:16.754633 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e8671c480a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:16.743340042 +0000 UTC m=+8.717566523,LastTimestamp:2025-10-11 10:26:16.743340042 +0000 UTC m=+8.717566523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:16.770013 master-1 kubenswrapper[4771]: E1011 10:26:16.769833 4771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186d68e8682ca4e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:26:16.761189605 +0000 UTC m=+8.735416086,LastTimestamp:2025-10-11 10:26:16.761189605 +0000 UTC m=+8.735416086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 11 10:26:16.886532 master-2 kubenswrapper[4776]: I1011 10:26:16.886491 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:26:16.892175 master-1 kubenswrapper[4771]: I1011 10:26:16.892074 4771 csr.go:261] certificate signing request csr-gzbc5 is approved, waiting to be issued
Oct 11 10:26:16.901452 master-1 kubenswrapper[4771]: I1011 10:26:16.901335 4771 csr.go:257] certificate signing request csr-gzbc5 is issued
Oct 11 10:26:16.904418 master-2 kubenswrapper[4776]: I1011 10:26:16.904365 4776 csr.go:261] certificate signing request csr-nrqbw is approved, waiting to be issued
Oct 11 10:26:16.912857 master-2 kubenswrapper[4776]: I1011 10:26:16.912811 4776 csr.go:257] certificate signing request csr-nrqbw is issued
Oct 11 10:26:17.121158 master-1 kubenswrapper[4771]: I1011 10:26:17.120888 4771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct 11 10:26:17.275634 master-1 kubenswrapper[4771]: I1011 10:26:17.275514 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.294139 master-1 kubenswrapper[4771]: I1011 10:26:17.294041 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.355253 master-1 kubenswrapper[4771]: I1011 10:26:17.355125 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.455426 master-1 kubenswrapper[4771]: I1011 10:26:17.455183 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/0.log"
Oct 11 10:26:17.456121 master-1 kubenswrapper[4771]: I1011 10:26:17.456007 4771 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="28e895da0baa6877f335970836c88af88fc281c9171fd5ced86e8b5c1c9f3b5c" exitCode=1
Oct 11 10:26:17.456121 master-1 kubenswrapper[4771]: I1011 10:26:17.456112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"28e895da0baa6877f335970836c88af88fc281c9171fd5ced86e8b5c1c9f3b5c"}
Oct 11 10:26:17.456304 master-1 kubenswrapper[4771]: I1011 10:26:17.456166 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:17.457362 master-1 kubenswrapper[4771]: I1011 10:26:17.457298 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 11 10:26:17.457362 master-1 kubenswrapper[4771]: I1011 10:26:17.457342 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 11 10:26:17.457362 master-1 kubenswrapper[4771]: I1011 10:26:17.457364 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 11 10:26:17.471947 master-1 kubenswrapper[4771]: I1011 10:26:17.471867 4771 scope.go:117] "RemoveContainer" containerID="28e895da0baa6877f335970836c88af88fc281c9171fd5ced86e8b5c1c9f3b5c"
Oct 11 10:26:17.621590 master-1 kubenswrapper[4771]: I1011 10:26:17.621494 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.621590 master-1 kubenswrapper[4771]: E1011 10:26:17.621564 4771 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-1" not found
Oct 11 10:26:17.644620 master-1 kubenswrapper[4771]: I1011 10:26:17.644549 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.663167 master-1 kubenswrapper[4771]: I1011 10:26:17.663067 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.723246 master-1 kubenswrapper[4771]: I1011 10:26:17.723180 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes
"master-1" not found
Oct 11 10:26:17.750978 master-2 kubenswrapper[4776]: I1011 10:26:17.750858 4776 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct 11 10:26:17.893187 master-2 kubenswrapper[4776]: I1011 10:26:17.893112 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:17.904008 master-1 kubenswrapper[4771]: I1011 10:26:17.903873 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 04:52:52.049163029 +0000 UTC
Oct 11 10:26:17.904008 master-1 kubenswrapper[4771]: I1011 10:26:17.903945 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h26m34.145220968s for next certificate rotation
Oct 11 10:26:17.911176 master-2 kubenswrapper[4776]: I1011 10:26:17.911027 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:17.915482 master-2 kubenswrapper[4776]: I1011 10:26:17.915400 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:51:17.11066698 +0000 UTC
Oct 11 10:26:17.915482 master-2 kubenswrapper[4776]: I1011 10:26:17.915475 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h24m59.195196514s for next certificate rotation
Oct 11 10:26:17.971583 master-2 kubenswrapper[4776]: I1011 10:26:17.971511 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:17.996177 master-1 kubenswrapper[4771]: I1011 10:26:17.995991 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:17.996177 master-1 kubenswrapper[4771]: E1011 10:26:17.996042 4771 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-1" not found
Oct 11 10:26:18.100463 master-1 kubenswrapper[4771]: I1011 10:26:18.100342 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:18.117890 master-1 kubenswrapper[4771]: I1011 10:26:18.117832 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:18.177844 master-1 kubenswrapper[4771]: I1011 10:26:18.177787 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:18.244117 master-2 kubenswrapper[4776]: I1011 10:26:18.244047 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.244117 master-2 kubenswrapper[4776]: E1011 10:26:18.244096 4776 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-2" not found
Oct 11 10:26:18.268226 master-2 kubenswrapper[4776]: I1011 10:26:18.268167 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.286038 master-2 kubenswrapper[4776]: I1011 10:26:18.285999 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.347440 master-2 kubenswrapper[4776]: I1011 10:26:18.347384 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.456455 master-1 kubenswrapper[4771]: I1011 10:26:18.456400 4771 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 11 10:26:18.456455 master-1 kubenswrapper[4771]: E1011 10:26:18.456454 4771 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-1" not found
Oct 11 10:26:18.460026 master-1 kubenswrapper[4771]: I1011 10:26:18.459941 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 11 10:26:18.460707 master-1 kubenswrapper[4771]: I1011 10:26:18.460661 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/0.log"
Oct 11 10:26:18.461286 master-1 kubenswrapper[4771]: I1011 10:26:18.461226 4771 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824" exitCode=1
Oct 11 10:26:18.461357 master-1 kubenswrapper[4771]: I1011 10:26:18.461291 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824"}
Oct 11 10:26:18.461472 master-1 kubenswrapper[4771]: I1011 10:26:18.461365 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:18.461547 master-1 kubenswrapper[4771]: I1011 10:26:18.461399 4771 scope.go:117] "RemoveContainer" containerID="28e895da0baa6877f335970836c88af88fc281c9171fd5ced86e8b5c1c9f3b5c"
Oct 11 10:26:18.463000 master-1 kubenswrapper[4771]: I1011 10:26:18.462941 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 11 10:26:18.463095 master-1 kubenswrapper[4771]: I1011 10:26:18.463006 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 11 10:26:18.463095 master-1 kubenswrapper[4771]: I1011 10:26:18.463025 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 11 10:26:18.481955 master-1 kubenswrapper[4771]: I1011 10:26:18.481900 4771 scope.go:117] "RemoveContainer" containerID="a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824"
Oct 11 10:26:18.482214 master-1 kubenswrapper[4771]: E1011 10:26:18.482163 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 11 10:26:18.497949 master-1 kubenswrapper[4771]: E1011 10:26:18.497896 4771 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-1\" not found" node="master-1"
Oct 11 10:26:18.558368 master-2 kubenswrapper[4776]: E1011 10:26:18.558174 4776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-2\" not found" node="master-2"
Oct 11 10:26:18.621882 master-2 kubenswrapper[4776]: I1011 10:26:18.621799 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.621882 master-2 kubenswrapper[4776]: E1011 10:26:18.621841 4776 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-2" not found
Oct 11 10:26:18.724778 master-2 kubenswrapper[4776]: I1011 10:26:18.724712 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.743432 master-2 kubenswrapper[4776]: I1011 10:26:18.743344 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.801240 master-1 kubenswrapper[4771]: I1011 10:26:18.801006 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11
10:26:18.803415 master-1 kubenswrapper[4771]: I1011 10:26:18.803342 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 11 10:26:18.803484 master-1 kubenswrapper[4771]: I1011 10:26:18.803430 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 11 10:26:18.803484 master-1 kubenswrapper[4771]: I1011 10:26:18.803449 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 11 10:26:18.803509 master-2 kubenswrapper[4776]: I1011 10:26:18.803426 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found
Oct 11 10:26:18.803540 master-1 kubenswrapper[4771]: I1011 10:26:18.803495 4771 kubelet_node_status.go:76] "Attempting to register node" node="master-1"
Oct 11 10:26:18.813766 master-1 kubenswrapper[4771]: I1011 10:26:18.813693 4771 kubelet_node_status.go:79] "Successfully registered node" node="master-1"
Oct 11 10:26:18.813819 master-1 kubenswrapper[4771]: E1011 10:26:18.813777 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-1\": node \"master-1\" not found"
Oct 11 10:26:18.837812 master-1 kubenswrapper[4771]: E1011 10:26:18.837716 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:18.840821 master-2 kubenswrapper[4776]: I1011 10:26:18.840637 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:18.842991 master-2 kubenswrapper[4776]: I1011 10:26:18.842928 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory"
Oct 11 10:26:18.843162 master-2 kubenswrapper[4776]: I1011 10:26:18.842997 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure"
Oct 11 10:26:18.843162 master-2 kubenswrapper[4776]: I1011 10:26:18.843022 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID"
Oct 11 10:26:18.843162 master-2 kubenswrapper[4776]: I1011 10:26:18.843080 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2"
Oct 11 10:26:18.854242 master-2 kubenswrapper[4776]: I1011 10:26:18.854164 4776 kubelet_node_status.go:79] "Successfully registered node" node="master-2"
Oct 11 10:26:18.854242 master-2 kubenswrapper[4776]: E1011 10:26:18.854211 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": node \"master-2\" not found"
Oct 11 10:26:18.866763 master-2 kubenswrapper[4776]: E1011 10:26:18.866662 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:18.882257 master-2 kubenswrapper[4776]: I1011 10:26:18.882152 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Oct 11 10:26:18.895069 master-2 kubenswrapper[4776]: I1011 10:26:18.895004 4776 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Oct 11 10:26:18.938496 master-1 kubenswrapper[4771]: E1011 10:26:18.938274 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:18.968176 master-2 kubenswrapper[4776]: E1011 10:26:18.968051 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:19.039014 master-1 kubenswrapper[4771]: E1011 10:26:19.038832 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:19.068537 master-2 kubenswrapper[4776]: E1011 10:26:19.068437 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:19.140204 master-1 kubenswrapper[4771]: E1011 10:26:19.140096 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:19.169692 master-2 kubenswrapper[4776]: E1011 10:26:19.169589 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:19.241212 master-1 kubenswrapper[4771]: E1011 10:26:19.241103 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:19.254842 master-1 kubenswrapper[4771]: I1011 10:26:19.254785 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Oct 11 10:26:19.271305 master-2 kubenswrapper[4776]: E1011 10:26:19.271226 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:19.271392 master-1 kubenswrapper[4771]: I1011 10:26:19.271318 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Oct 11 10:26:19.341728 master-1 kubenswrapper[4771]: E1011 10:26:19.341560 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:19.372306 master-2 kubenswrapper[4776]: E1011 10:26:19.372217 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:19.441979 master-1 kubenswrapper[4771]: E1011 10:26:19.441758 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 11 10:26:19.465748 master-1 kubenswrapper[4771]: I1011 10:26:19.465663 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 11 10:26:19.467357 master-1 kubenswrapper[4771]:
I1011 10:26:19.467293 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:19.468567 master-1 kubenswrapper[4771]: I1011 10:26:19.468499 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 11 10:26:19.468567 master-1 kubenswrapper[4771]: I1011 10:26:19.468547 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 11 10:26:19.468567 master-1 kubenswrapper[4771]: I1011 10:26:19.468563 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 11 10:26:19.469002 master-1 kubenswrapper[4771]: I1011 10:26:19.468937 4771 scope.go:117] "RemoveContainer" containerID="a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824" Oct 11 10:26:19.469189 master-1 kubenswrapper[4771]: E1011 10:26:19.469152 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50" Oct 11 10:26:19.474095 master-2 kubenswrapper[4776]: E1011 10:26:19.473843 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.542142 master-1 kubenswrapper[4771]: E1011 10:26:19.541971 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:19.575281 master-2 kubenswrapper[4776]: E1011 10:26:19.575161 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.643399 master-1 kubenswrapper[4771]: E1011 
10:26:19.643260 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:19.677191 master-2 kubenswrapper[4776]: E1011 10:26:19.676985 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.744730 master-1 kubenswrapper[4771]: E1011 10:26:19.744430 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:19.778442 master-2 kubenswrapper[4776]: E1011 10:26:19.778265 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.844923 master-1 kubenswrapper[4771]: E1011 10:26:19.844804 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:19.878818 master-2 kubenswrapper[4776]: E1011 10:26:19.878729 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.945968 master-1 kubenswrapper[4771]: E1011 10:26:19.945815 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:19.979559 master-2 kubenswrapper[4776]: E1011 10:26:19.979434 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.046606 master-1 kubenswrapper[4771]: E1011 10:26:20.046308 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.079826 master-2 kubenswrapper[4776]: E1011 10:26:20.079605 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.147722 master-1 kubenswrapper[4771]: E1011 10:26:20.147588 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" 
Oct 11 10:26:20.180385 master-2 kubenswrapper[4776]: E1011 10:26:20.180250 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.248669 master-1 kubenswrapper[4771]: E1011 10:26:20.248548 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.281431 master-2 kubenswrapper[4776]: E1011 10:26:20.281271 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.349215 master-1 kubenswrapper[4771]: E1011 10:26:20.349048 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.382022 master-2 kubenswrapper[4776]: E1011 10:26:20.381794 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.450176 master-1 kubenswrapper[4771]: E1011 10:26:20.450090 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.482889 master-2 kubenswrapper[4776]: E1011 10:26:20.482788 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.551499 master-1 kubenswrapper[4771]: E1011 10:26:20.551305 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.583212 master-2 kubenswrapper[4776]: E1011 10:26:20.583108 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.597405 master-2 kubenswrapper[4776]: I1011 10:26:20.597246 4776 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:20.651971 master-1 kubenswrapper[4771]: E1011 10:26:20.651753 4771 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.684335 master-2 kubenswrapper[4776]: E1011 10:26:20.684229 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.753020 master-1 kubenswrapper[4771]: E1011 10:26:20.752871 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.784582 master-2 kubenswrapper[4776]: E1011 10:26:20.784517 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.853738 master-1 kubenswrapper[4771]: E1011 10:26:20.853656 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.870411 master-1 kubenswrapper[4771]: I1011 10:26:20.870321 4771 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:20.885250 master-2 kubenswrapper[4776]: E1011 10:26:20.885192 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.954140 master-1 kubenswrapper[4771]: E1011 10:26:20.953844 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:20.985980 master-2 kubenswrapper[4776]: E1011 10:26:20.985784 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.054686 master-1 kubenswrapper[4771]: E1011 10:26:21.054567 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.086891 master-2 kubenswrapper[4776]: E1011 10:26:21.086814 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.154961 master-1 kubenswrapper[4771]: E1011 10:26:21.154838 4771 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.187773 master-2 kubenswrapper[4776]: E1011 10:26:21.187712 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.256178 master-1 kubenswrapper[4771]: E1011 10:26:21.255944 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.288407 master-2 kubenswrapper[4776]: E1011 10:26:21.288178 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.357315 master-1 kubenswrapper[4771]: E1011 10:26:21.357168 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.388779 master-2 kubenswrapper[4776]: E1011 10:26:21.388658 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.457890 master-1 kubenswrapper[4771]: E1011 10:26:21.457738 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.488975 master-2 kubenswrapper[4776]: E1011 10:26:21.488865 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.535596 master-1 kubenswrapper[4771]: I1011 10:26:21.535403 4771 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:21.558388 master-1 kubenswrapper[4771]: E1011 10:26:21.558295 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.590194 master-2 kubenswrapper[4776]: E1011 10:26:21.589983 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.659317 
master-1 kubenswrapper[4771]: E1011 10:26:21.659199 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.690719 master-2 kubenswrapper[4776]: E1011 10:26:21.690556 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.759537 master-1 kubenswrapper[4771]: E1011 10:26:21.759405 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.791370 master-2 kubenswrapper[4776]: E1011 10:26:21.791116 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.859815 master-1 kubenswrapper[4771]: E1011 10:26:21.859678 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.892606 master-2 kubenswrapper[4776]: E1011 10:26:21.892370 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.950074 master-1 kubenswrapper[4771]: I1011 10:26:21.949961 4771 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:21.960951 master-1 kubenswrapper[4771]: E1011 10:26:21.960872 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 11 10:26:21.993538 master-2 kubenswrapper[4776]: E1011 10:26:21.993446 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.997817 master-1 kubenswrapper[4771]: I1011 10:26:21.997697 4771 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:22.094032 master-2 kubenswrapper[4776]: E1011 10:26:22.093923 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-2\" not found" Oct 11 10:26:22.194731 master-2 kubenswrapper[4776]: E1011 10:26:22.194600 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.236943 master-1 kubenswrapper[4771]: I1011 10:26:22.236779 4771 apiserver.go:52] "Watching apiserver" Oct 11 10:26:22.242485 master-1 kubenswrapper[4771]: I1011 10:26:22.242442 4771 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 11 10:26:22.242635 master-1 kubenswrapper[4771]: I1011 10:26:22.242600 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"] Oct 11 10:26:22.242851 master-1 kubenswrapper[4771]: I1011 10:26:22.242826 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.246391 master-1 kubenswrapper[4771]: I1011 10:26:22.246337 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 11 10:26:22.246640 master-1 kubenswrapper[4771]: I1011 10:26:22.246575 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 11 10:26:22.246820 master-1 kubenswrapper[4771]: I1011 10:26:22.246772 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 11 10:26:22.250513 master-1 kubenswrapper[4771]: I1011 10:26:22.250477 4771 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Oct 11 10:26:22.294873 master-2 kubenswrapper[4776]: E1011 10:26:22.294751 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.318018 master-1 kubenswrapper[4771]: I1011 10:26:22.317895 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.318018 master-1 kubenswrapper[4771]: I1011 10:26:22.318003 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.318579 master-1 kubenswrapper[4771]: I1011 10:26:22.318154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.318579 master-1 kubenswrapper[4771]: I1011 10:26:22.318248 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.318579 master-1 kubenswrapper[4771]: I1011 10:26:22.318283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.395037 master-2 kubenswrapper[4776]: E1011 10:26:22.394897 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.419542 master-1 kubenswrapper[4771]: I1011 10:26:22.419439 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419542 master-1 kubenswrapper[4771]: I1011 10:26:22.419531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419948 master-1 kubenswrapper[4771]: I1011 10:26:22.419565 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419948 master-1 kubenswrapper[4771]: I1011 10:26:22.419598 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419948 master-1 kubenswrapper[4771]: I1011 10:26:22.419630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419948 master-1 kubenswrapper[4771]: I1011 10:26:22.419768 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.419948 master-1 kubenswrapper[4771]: I1011 10:26:22.419802 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.420258 master-1 kubenswrapper[4771]: E1011 10:26:22.420050 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:22.420258 master-1 kubenswrapper[4771]: E1011 10:26:22.420195 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:22.920157856 +0000 UTC m=+14.894384327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:22.421192 master-1 kubenswrapper[4771]: I1011 10:26:22.421138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.446614 master-1 kubenswrapper[4771]: I1011 10:26:22.446497 4771 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 11 10:26:22.453873 master-1 kubenswrapper[4771]: I1011 10:26:22.453829 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.496004 master-2 kubenswrapper[4776]: E1011 10:26:22.495776 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.596934 master-2 kubenswrapper[4776]: E1011 10:26:22.596819 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.698084 master-2 kubenswrapper[4776]: E1011 10:26:22.697918 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.799360 master-2 kubenswrapper[4776]: E1011 10:26:22.799180 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.899972 master-2 kubenswrapper[4776]: E1011 10:26:22.899821 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.935729 master-1 kubenswrapper[4771]: I1011 10:26:22.935614 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:22.936939 master-1 kubenswrapper[4771]: E1011 
10:26:22.935847 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:22.936939 master-1 kubenswrapper[4771]: E1011 10:26:22.936007 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:23.935975419 +0000 UTC m=+15.910201890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:23.001056 master-2 kubenswrapper[4776]: E1011 10:26:23.000905 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.101240 master-2 kubenswrapper[4776]: E1011 10:26:23.101053 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.202021 master-2 kubenswrapper[4776]: E1011 10:26:23.201943 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.302635 master-2 kubenswrapper[4776]: E1011 10:26:23.302557 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.403001 master-2 kubenswrapper[4776]: E1011 10:26:23.402863 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.503306 master-2 kubenswrapper[4776]: E1011 10:26:23.503222 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 
10:26:23.603464 master-2 kubenswrapper[4776]: E1011 10:26:23.603338 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.703843 master-2 kubenswrapper[4776]: E1011 10:26:23.703748 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.804846 master-2 kubenswrapper[4776]: E1011 10:26:23.804771 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.905510 master-2 kubenswrapper[4776]: E1011 10:26:23.905426 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.941504 master-1 kubenswrapper[4771]: I1011 10:26:23.941396 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" Oct 11 10:26:23.941504 master-1 kubenswrapper[4771]: E1011 10:26:23.941464 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:23.942220 master-1 kubenswrapper[4771]: E1011 10:26:23.941555 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:25.941530878 +0000 UTC m=+17.915757349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:24.006298 master-2 kubenswrapper[4776]: E1011 10:26:24.006130 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.107105 master-2 kubenswrapper[4776]: E1011 10:26:24.107016 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.208236 master-2 kubenswrapper[4776]: E1011 10:26:24.208163 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.309139 master-2 kubenswrapper[4776]: E1011 10:26:24.309036 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.410158 master-2 kubenswrapper[4776]: E1011 10:26:24.410094 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.510626 master-2 kubenswrapper[4776]: E1011 10:26:24.510522 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.611748 master-2 kubenswrapper[4776]: E1011 10:26:24.611477 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.712341 master-2 kubenswrapper[4776]: E1011 10:26:24.712247 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.813306 master-2 kubenswrapper[4776]: E1011 10:26:24.813221 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:24.914452 master-2 kubenswrapper[4776]: E1011 10:26:24.914364 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.014660 master-2 kubenswrapper[4776]: E1011 10:26:25.014505 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.063976 master-2 kubenswrapper[4776]: I1011 10:26:25.063910 4776 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 11 10:26:25.115823 master-2 kubenswrapper[4776]: E1011 10:26:25.115738 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.216544 master-2 kubenswrapper[4776]: E1011 10:26:25.216403 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.317565 master-2 kubenswrapper[4776]: E1011 10:26:25.317494 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.418143 master-2 kubenswrapper[4776]: E1011 10:26:25.418043 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.452599 master-2 kubenswrapper[4776]: I1011 10:26:25.452537 4776 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 11 10:26:25.518570 master-2 kubenswrapper[4776]: E1011 10:26:25.518393 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.619449 master-2 kubenswrapper[4776]: E1011 10:26:25.619327 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.720439 master-2 kubenswrapper[4776]: E1011 10:26:25.720317 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.821031 master-2 kubenswrapper[4776]: E1011 10:26:25.820854 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.921420 master-2 kubenswrapper[4776]: E1011 10:26:25.921312 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:25.955046 master-1 kubenswrapper[4771]: I1011 10:26:25.954826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:25.955867 master-1 kubenswrapper[4771]: E1011 10:26:25.955073 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:25.955867 master-1 kubenswrapper[4771]: E1011 10:26:25.955190 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:29.955169872 +0000 UTC m=+21.929396313 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:26.020815 master-2 kubenswrapper[4776]: E1011 10:26:26.020728 4776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-2\" not found"
Oct 11 10:26:26.021494 master-2 kubenswrapper[4776]: E1011 10:26:26.021448 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.058302 master-2 kubenswrapper[4776]: I1011 10:26:26.058222 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:26:26.059278 master-2 kubenswrapper[4776]: I1011 10:26:26.059248 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory"
Oct 11 10:26:26.059332 master-2 kubenswrapper[4776]: I1011 10:26:26.059281 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure"
Oct 11 10:26:26.059332 master-2 kubenswrapper[4776]: I1011 10:26:26.059290 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID"
Oct 11 10:26:26.059584 master-2 kubenswrapper[4776]: I1011 10:26:26.059557 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c"
Oct 11 10:26:26.122607 master-2 kubenswrapper[4776]: E1011 10:26:26.122517 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.223950 master-2 kubenswrapper[4776]: E1011 10:26:26.223588 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.324948 master-2 kubenswrapper[4776]: E1011 10:26:26.324854 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.425259 master-2 kubenswrapper[4776]: E1011 10:26:26.425156 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.525422 master-2 kubenswrapper[4776]: E1011 10:26:26.525339 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.626707 master-2 kubenswrapper[4776]: E1011 10:26:26.626504 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.726956 master-2 kubenswrapper[4776]: E1011 10:26:26.726840 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found"
Oct 11 10:26:26.758856 master-2 kubenswrapper[4776]: I1011 10:26:26.758769 4776 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 11 10:26:26.879799 master-2 kubenswrapper[4776]: I1011 10:26:26.879642 4776 apiserver.go:52] "Watching apiserver"
Oct 11 10:26:26.883363 master-2 kubenswrapper[4776]: I1011 10:26:26.883306 4776 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 11 10:26:26.883507 master-2 kubenswrapper[4776]: I1011 10:26:26.883472 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Oct 11 10:26:26.979631 master-2 kubenswrapper[4776]: I1011 10:26:26.979460 4776 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Oct 11 10:26:27.118397 master-2 kubenswrapper[4776]: I1011 10:26:27.118338 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log"
Oct 11 10:26:27.118997 master-2 kubenswrapper[4776]: I1011 10:26:27.118966 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log"
Oct 11 10:26:27.119447 master-2 kubenswrapper[4776]: I1011 10:26:27.119411 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" exitCode=1
Oct 11 10:26:27.119506 master-2 kubenswrapper[4776]: I1011 10:26:27.119464 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81"}
Oct 11 10:26:27.119577 master-2 kubenswrapper[4776]: I1011 10:26:27.119548 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c"
Oct 11 10:26:27.146538 master-2 kubenswrapper[4776]: I1011 10:26:27.146489 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81"
Oct 11 10:26:27.146850 master-2 kubenswrapper[4776]: E1011 10:26:27.146806 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c"
Oct 11 10:26:27.148820 master-2 kubenswrapper[4776]: I1011 10:26:27.148788 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-2"]
Oct 11 10:26:28.125003 master-2 kubenswrapper[4776]: I1011 10:26:28.124894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log"
Oct 11 10:26:28.126118 master-2 kubenswrapper[4776]: I1011 10:26:28.125965 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81"
Oct 11 10:26:28.126263 master-2 kubenswrapper[4776]: E1011 10:26:28.126201 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c"
Oct 11 10:26:29.989610 master-1 kubenswrapper[4771]: I1011 10:26:29.989477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:29.989610 master-1 kubenswrapper[4771]: E1011 10:26:29.989622 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:29.990610 master-1 kubenswrapper[4771]: E1011 10:26:29.989675 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:37.989657863 +0000 UTC m=+29.963884304 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:31.447666 master-1 kubenswrapper[4771]: I1011 10:26:31.447561 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-1"]
Oct 11 10:26:31.448267 master-1 kubenswrapper[4771]: I1011 10:26:31.447792 4771 scope.go:117] "RemoveContainer" containerID="a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824"
Oct 11 10:26:32.497911 master-1 kubenswrapper[4771]: I1011 10:26:32.497832 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log"
Oct 11 10:26:32.498855 master-1 kubenswrapper[4771]: I1011 10:26:32.498777 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 11 10:26:32.499291 master-1 kubenswrapper[4771]: I1011 10:26:32.499237 4771 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c" exitCode=1
Oct 11 10:26:32.499291 master-1 kubenswrapper[4771]: I1011 10:26:32.499285 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c"}
Oct 11 10:26:32.499456 master-1 kubenswrapper[4771]: I1011 10:26:32.499333 4771 scope.go:117] "RemoveContainer" containerID="a8e5e0132e090b8e8d3d5ec1f39e48dd2fe3f756d595c30bb279ad118a40f824"
Oct 11 10:26:32.510519 master-1 kubenswrapper[4771]: I1011 10:26:32.510476 4771 scope.go:117] "RemoveContainer" containerID="a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c"
Oct 11 10:26:32.510974 master-1 kubenswrapper[4771]: E1011 10:26:32.510938 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 11 10:26:33.504615 master-1 kubenswrapper[4771]: I1011 10:26:33.504498 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log"
Oct 11 10:26:33.505939 master-1 kubenswrapper[4771]: I1011 10:26:33.505877 4771 scope.go:117] "RemoveContainer" containerID="a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c"
Oct 11 10:26:33.506201 master-1 kubenswrapper[4771]: E1011 10:26:33.506150 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 11 10:26:37.294950 master-2 kubenswrapper[4776]: I1011 10:26:37.294849 4776 csr.go:261] certificate signing request csr-l2g8v is approved, waiting to be issued
Oct 11 10:26:37.304004 master-1 kubenswrapper[4771]: I1011 10:26:37.303867 4771 csr.go:261] certificate signing request csr-sdd7k is approved, waiting to be issued
Oct 11 10:26:37.306558 master-2 kubenswrapper[4776]: I1011 10:26:37.306482 4776 csr.go:257] certificate signing request csr-l2g8v is issued
Oct 11 10:26:37.313223 master-1 kubenswrapper[4771]: I1011 10:26:37.313176 4771 csr.go:257] certificate signing request csr-sdd7k is issued
Oct 11 10:26:38.042848 master-1 kubenswrapper[4771]: I1011 10:26:38.042740 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") pod \"cluster-version-operator-55ccd5d5cf-mqqvx\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") " pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:38.043238 master-1 kubenswrapper[4771]: E1011 10:26:38.042935 4771 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:38.043238 master-1 kubenswrapper[4771]: E1011 10:26:38.043051 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert podName:68d07184-647e-4aaa-a3e6-85e99ae0abd2 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:54.043016511 +0000 UTC m=+46.017242992 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert") pod "cluster-version-operator-55ccd5d5cf-mqqvx" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:38.308328 master-2 kubenswrapper[4776]: I1011 10:26:38.308221 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:47:49.858757141 +0000 UTC
Oct 11 10:26:38.309258 master-2 kubenswrapper[4776]: I1011 10:26:38.308897 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h21m11.549871315s for next certificate rotation
Oct 11 10:26:38.315078 master-1 kubenswrapper[4771]: I1011 10:26:38.314972 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 06:07:40.78853197 +0000 UTC
Oct 11 10:26:38.315078 master-1 kubenswrapper[4771]: I1011 10:26:38.315018 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h41m2.473517472s for next certificate rotation
Oct 11 10:26:39.310154 master-2 kubenswrapper[4776]: I1011 10:26:39.310012 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:14:41.747694024 +0000 UTC
Oct 11 10:26:39.310154 master-2 kubenswrapper[4776]: I1011 10:26:39.310093 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h48m2.437607621s for next certificate rotation
Oct 11 10:26:39.315969 master-1 kubenswrapper[4771]: I1011 10:26:39.315805 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 06:51:24.981270668 +0000 UTC
Oct 11 10:26:39.315969 master-1 kubenswrapper[4771]: I1011 10:26:39.315875 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h24m45.66539847s for next certificate rotation
Oct 11 10:26:43.058969 master-2 kubenswrapper[4776]: I1011 10:26:43.058831 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81"
Oct 11 10:26:43.059958 master-2 kubenswrapper[4776]: E1011 10:26:43.059205 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c"
Oct 11 10:26:44.967446 master-1 kubenswrapper[4771]: I1011 10:26:44.967382 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"]
Oct 11 10:26:44.968577 master-1 kubenswrapper[4771]: E1011 10:26:44.967538 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx" podUID="68d07184-647e-4aaa-a3e6-85e99ae0abd2"
Oct 11 10:26:45.527550 master-1 kubenswrapper[4771]: I1011 10:26:45.527490 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:45.548239 master-1 kubenswrapper[4771]: I1011 10:26:45.548160 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:45.692424 master-1 kubenswrapper[4771]: I1011 10:26:45.692290 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs\") pod \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") "
Oct 11 10:26:45.692424 master-1 kubenswrapper[4771]: I1011 10:26:45.692387 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads\") pod \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") "
Oct 11 10:26:45.692424 master-1 kubenswrapper[4771]: I1011 10:26:45.692432 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca\") pod \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") "
Oct 11 10:26:45.692888 master-1 kubenswrapper[4771]: I1011 10:26:45.692470 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access\") pod \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\" (UID: \"68d07184-647e-4aaa-a3e6-85e99ae0abd2\") "
Oct 11 10:26:45.692888 master-1 kubenswrapper[4771]: I1011 10:26:45.692565 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "68d07184-647e-4aaa-a3e6-85e99ae0abd2" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:26:45.692888 master-1 kubenswrapper[4771]: I1011 10:26:45.692583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "68d07184-647e-4aaa-a3e6-85e99ae0abd2" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:26:45.693344 master-1 kubenswrapper[4771]: I1011 10:26:45.693276 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca" (OuterVolumeSpecName: "service-ca") pod "68d07184-647e-4aaa-a3e6-85e99ae0abd2" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:26:45.698153 master-1 kubenswrapper[4771]: I1011 10:26:45.698087 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "68d07184-647e-4aaa-a3e6-85e99ae0abd2" (UID: "68d07184-647e-4aaa-a3e6-85e99ae0abd2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:26:45.793757 master-1 kubenswrapper[4771]: I1011 10:26:45.793511 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68d07184-647e-4aaa-a3e6-85e99ae0abd2-service-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:26:45.793757 master-1 kubenswrapper[4771]: I1011 10:26:45.793585 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68d07184-647e-4aaa-a3e6-85e99ae0abd2-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 11 10:26:45.793757 master-1 kubenswrapper[4771]: I1011 10:26:45.793604 4771 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-ssl-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:26:45.793757 master-1 kubenswrapper[4771]: I1011 10:26:45.793621 4771 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/68d07184-647e-4aaa-a3e6-85e99ae0abd2-etc-cvo-updatepayloads\") on node \"master-1\" DevicePath \"\""
Oct 11 10:26:46.529304 master-1 kubenswrapper[4771]: I1011 10:26:46.529200 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"
Oct 11 10:26:46.557433 master-1 kubenswrapper[4771]: I1011 10:26:46.557320 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"]
Oct 11 10:26:46.561558 master-1 kubenswrapper[4771]: I1011 10:26:46.561501 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx"]
Oct 11 10:26:46.594624 master-2 kubenswrapper[4776]: I1011 10:26:46.594483 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"]
Oct 11 10:26:46.595573 master-2 kubenswrapper[4776]: I1011 10:26:46.594900 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.598739 master-2 kubenswrapper[4776]: I1011 10:26:46.598652 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 11 10:26:46.599061 master-2 kubenswrapper[4776]: I1011 10:26:46.598928 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 11 10:26:46.599432 master-2 kubenswrapper[4776]: I1011 10:26:46.599388 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641554 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641586 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.641982 master-2 kubenswrapper[4776]: I1011 10:26:46.641613 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.641982 master-2 kubenswrapper[4776]: I1011 10:26:46.641639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.701208 master-1 kubenswrapper[4771]: I1011 10:26:46.701004 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68d07184-647e-4aaa-a3e6-85e99ae0abd2-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.741925 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.742043 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.742101 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742155 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742204 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742233 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: E1011 10:26:46.742453 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:46.742905 master-2 kubenswrapper[4776]: E1011 10:26:46.742736 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:47.242591835 +0000 UTC m=+42.027018574 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:46.742905 master-2 kubenswrapper[4776]: I1011 10:26:46.742835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.743460 master-2 kubenswrapper[4776]: I1011 10:26:46.743336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:46.763232 master-2 kubenswrapper[4776]: I1011 10:26:46.763129 4776 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Oct 11 10:26:46.773623 master-2 kubenswrapper[4776]: I1011 10:26:46.773506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:47.245725 master-2 kubenswrapper[4776]: I1011 10:26:47.245530 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:47.246026 master-2 kubenswrapper[4776]: E1011 10:26:47.245838 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:47.246026 master-2 kubenswrapper[4776]: E1011 10:26:47.245944 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:48.245909058 +0000 UTC m=+43.030335817 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:47.451189 master-1 kubenswrapper[4771]: I1011 10:26:47.451089 4771 scope.go:117] "RemoveContainer" containerID="a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c"
Oct 11 10:26:47.451451 master-1 kubenswrapper[4771]: E1011 10:26:47.451403 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 11 10:26:48.253083 master-2 kubenswrapper[4776]: I1011 10:26:48.253045 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:48.253812 master-2 kubenswrapper[4776]: E1011 10:26:48.253152 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:48.253812 master-2 kubenswrapper[4776]: E1011 10:26:48.253201 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:50.253186541 +0000 UTC m=+45.037613250 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:48.442026 master-1 kubenswrapper[4771]: I1011 10:26:48.441921 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d07184-647e-4aaa-a3e6-85e99ae0abd2" path="/var/lib/kubelet/pods/68d07184-647e-4aaa-a3e6-85e99ae0abd2/volumes"
Oct 11 10:26:50.268374 master-2 kubenswrapper[4776]: I1011 10:26:50.268215 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:26:50.269474 master-2 kubenswrapper[4776]: E1011 10:26:50.268474 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:26:50.269474 master-2 kubenswrapper[4776]: E1011 10:26:50.268585 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:54.268546097 +0000 UTC m=+49.052972846 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:53.810165 master-1 kubenswrapper[4771]: I1011 10:26:53.810059 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf"] Oct 11 10:26:53.811340 master-1 kubenswrapper[4771]: I1011 10:26:53.810408 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:53.813811 master-1 kubenswrapper[4771]: I1011 10:26:53.813763 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:26:53.814702 master-1 kubenswrapper[4771]: I1011 10:26:53.814638 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Oct 11 10:26:53.815135 master-1 kubenswrapper[4771]: I1011 10:26:53.815074 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 10:26:53.815334 master-1 kubenswrapper[4771]: I1011 10:26:53.815291 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Oct 11 10:26:53.815816 master-1 kubenswrapper[4771]: I1011 10:26:53.815764 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Oct 11 10:26:53.947588 master-1 kubenswrapper[4771]: I1011 10:26:53.947467 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c85t\" (UniqueName: \"kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:53.947588 master-1 kubenswrapper[4771]: I1011 10:26:53.947544 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:53.947588 master-1 kubenswrapper[4771]: I1011 10:26:53.947583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:53.947588 master-1 kubenswrapper[4771]: I1011 10:26:53.947616 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:53.948097 master-1 
kubenswrapper[4771]: I1011 10:26:53.947690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.048900 master-1 kubenswrapper[4771]: I1011 10:26:54.048762 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.048900 master-1 kubenswrapper[4771]: I1011 10:26:54.048860 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.048900 master-1 kubenswrapper[4771]: I1011 10:26:54.048903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.049257 master-1 kubenswrapper[4771]: I1011 10:26:54.048943 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c85t\" (UniqueName: \"kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.049257 master-1 kubenswrapper[4771]: I1011 10:26:54.048979 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.049257 master-1 kubenswrapper[4771]: I1011 10:26:54.049046 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.050278 master-1 kubenswrapper[4771]: I1011 10:26:54.050223 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.050385 master-1 kubenswrapper[4771]: I1011 10:26:54.050288 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.054269 master-1 kubenswrapper[4771]: I1011 10:26:54.054195 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.079612 master-1 kubenswrapper[4771]: I1011 10:26:54.079491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c85t\" (UniqueName: \"kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t\") pod \"cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.138033 master-1 kubenswrapper[4771]: I1011 10:26:54.137899 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:26:54.157007 master-1 kubenswrapper[4771]: W1011 10:26:54.156917 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba9953d_1f54_43be_a3ae_121030f1e07b.slice/crio-45041b48c16ec7268bc8f5e7bf6ad631a6a8600e9455c7fcc91bcbcacf5e65a9 WatchSource:0}: Error finding container 45041b48c16ec7268bc8f5e7bf6ad631a6a8600e9455c7fcc91bcbcacf5e65a9: Status 404 returned error can't find the container with id 45041b48c16ec7268bc8f5e7bf6ad631a6a8600e9455c7fcc91bcbcacf5e65a9 Oct 11 10:26:54.298533 master-2 kubenswrapper[4776]: I1011 10:26:54.298424 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:54.299134 master-2 kubenswrapper[4776]: E1011 10:26:54.298604 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:54.299134 master-2 kubenswrapper[4776]: E1011 10:26:54.298714 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:02.298661363 +0000 UTC m=+57.083088092 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:54.547864 master-1 kubenswrapper[4771]: I1011 10:26:54.547309 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerStarted","Data":"45041b48c16ec7268bc8f5e7bf6ad631a6a8600e9455c7fcc91bcbcacf5e65a9"} Oct 11 10:26:56.191780 master-1 kubenswrapper[4771]: I1011 10:26:56.191679 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-854f54f8c9-hw5fc"] Oct 11 10:26:56.192652 master-1 kubenswrapper[4771]: I1011 10:26:56.191895 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.194693 master-1 kubenswrapper[4771]: I1011 10:26:56.194598 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:26:56.194693 master-1 kubenswrapper[4771]: I1011 10:26:56.194617 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:26:56.195119 master-1 kubenswrapper[4771]: I1011 10:26:56.194821 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 11 10:26:56.367930 master-1 kubenswrapper[4771]: I1011 10:26:56.367812 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/868ea5b9-b62a-4683-82c9-760de94ef155-host-etc-kube\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.367930 master-1 kubenswrapper[4771]: I1011 10:26:56.367932 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/868ea5b9-b62a-4683-82c9-760de94ef155-metrics-tls\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.368270 master-1 kubenswrapper[4771]: I1011 10:26:56.367985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrkd\" (UniqueName: \"kubernetes.io/projected/868ea5b9-b62a-4683-82c9-760de94ef155-kube-api-access-cqrkd\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " 
pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.468800 master-1 kubenswrapper[4771]: I1011 10:26:56.468665 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/868ea5b9-b62a-4683-82c9-760de94ef155-metrics-tls\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.468800 master-1 kubenswrapper[4771]: I1011 10:26:56.468720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrkd\" (UniqueName: \"kubernetes.io/projected/868ea5b9-b62a-4683-82c9-760de94ef155-kube-api-access-cqrkd\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.468800 master-1 kubenswrapper[4771]: I1011 10:26:56.468745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/868ea5b9-b62a-4683-82c9-760de94ef155-host-etc-kube\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.469092 master-1 kubenswrapper[4771]: I1011 10:26:56.468833 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/868ea5b9-b62a-4683-82c9-760de94ef155-host-etc-kube\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.475149 master-1 kubenswrapper[4771]: I1011 10:26:56.475086 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/868ea5b9-b62a-4683-82c9-760de94ef155-metrics-tls\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.498620 master-1 kubenswrapper[4771]: I1011 10:26:56.498551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrkd\" (UniqueName: \"kubernetes.io/projected/868ea5b9-b62a-4683-82c9-760de94ef155-kube-api-access-cqrkd\") pod \"network-operator-854f54f8c9-hw5fc\" (UID: \"868ea5b9-b62a-4683-82c9-760de94ef155\") " pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.502907 master-1 kubenswrapper[4771]: I1011 10:26:56.502874 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" Oct 11 10:26:56.635371 master-1 kubenswrapper[4771]: W1011 10:26:56.635300 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868ea5b9_b62a_4683_82c9_760de94ef155.slice/crio-8ee1d12f9e89c270226493db1b8178e230ee5b9a192f1ec9bdbb810e0d230446 WatchSource:0}: Error finding container 8ee1d12f9e89c270226493db1b8178e230ee5b9a192f1ec9bdbb810e0d230446: Status 404 returned error can't find the container with id 8ee1d12f9e89c270226493db1b8178e230ee5b9a192f1ec9bdbb810e0d230446 Oct 11 10:26:57.558336 master-1 kubenswrapper[4771]: I1011 10:26:57.558248 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/0.log" Oct 11 10:26:57.560317 master-1 kubenswrapper[4771]: I1011 10:26:57.560266 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerID="a1a2be8c23d4db9d020822dbf9ef90de797c8b8f057fc486ea46a8f7c185dd1e" 
exitCode=1 Oct 11 10:26:57.560525 master-1 kubenswrapper[4771]: I1011 10:26:57.560440 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"a1a2be8c23d4db9d020822dbf9ef90de797c8b8f057fc486ea46a8f7c185dd1e"} Oct 11 10:26:57.560697 master-1 kubenswrapper[4771]: I1011 10:26:57.560530 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerStarted","Data":"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62"} Oct 11 10:26:57.560697 master-1 kubenswrapper[4771]: I1011 10:26:57.560565 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerStarted","Data":"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725"} Oct 11 10:26:57.561520 master-1 kubenswrapper[4771]: I1011 10:26:57.561466 4771 scope.go:117] "RemoveContainer" containerID="a1a2be8c23d4db9d020822dbf9ef90de797c8b8f057fc486ea46a8f7c185dd1e" Oct 11 10:26:57.562851 master-1 kubenswrapper[4771]: I1011 10:26:57.562795 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" event={"ID":"868ea5b9-b62a-4683-82c9-760de94ef155","Type":"ContainerStarted","Data":"8ee1d12f9e89c270226493db1b8178e230ee5b9a192f1ec9bdbb810e0d230446"} Oct 11 10:26:58.058993 master-2 kubenswrapper[4776]: I1011 10:26:58.058894 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" Oct 11 10:26:58.567666 master-1 kubenswrapper[4771]: I1011 10:26:58.567596 4771 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/1.log" Oct 11 10:26:58.568487 master-1 kubenswrapper[4771]: I1011 10:26:58.568454 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/0.log" Oct 11 10:26:58.569978 master-1 kubenswrapper[4771]: I1011 10:26:58.569920 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerID="3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1" exitCode=1 Oct 11 10:26:58.570067 master-1 kubenswrapper[4771]: I1011 10:26:58.569978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1"} Oct 11 10:26:58.570067 master-1 kubenswrapper[4771]: I1011 10:26:58.570020 4771 scope.go:117] "RemoveContainer" containerID="a1a2be8c23d4db9d020822dbf9ef90de797c8b8f057fc486ea46a8f7c185dd1e" Oct 11 10:26:58.570605 master-1 kubenswrapper[4771]: I1011 10:26:58.570557 4771 scope.go:117] "RemoveContainer" containerID="3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1" Oct 11 10:26:58.570824 master-1 kubenswrapper[4771]: E1011 10:26:58.570772 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_openshift-cloud-controller-manager-operator(4ba9953d-1f54-43be-a3ae-121030f1e07b)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" Oct 11 10:26:59.197340 master-2 kubenswrapper[4776]: I1011 10:26:59.197272 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log" Oct 11 10:26:59.198083 master-2 kubenswrapper[4776]: I1011 10:26:59.197883 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerStarted","Data":"160ea2b844bd95411f1fc839160ddbfb5ba513bcdf40167a2589c9e26bd964ad"} Oct 11 10:26:59.436579 master-1 kubenswrapper[4771]: I1011 10:26:59.436510 4771 scope.go:117] "RemoveContainer" containerID="a32ecb2841115f637b624a3f8eaa25803703a8d4195c571015a3a44c7767232c" Oct 11 10:26:59.573480 master-1 kubenswrapper[4771]: I1011 10:26:59.573439 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/1.log" Oct 11 10:26:59.575734 master-1 kubenswrapper[4771]: I1011 10:26:59.575655 4771 scope.go:117] "RemoveContainer" containerID="3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1" Oct 11 10:26:59.576026 master-1 kubenswrapper[4771]: E1011 10:26:59.575984 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_openshift-cloud-controller-manager-operator(4ba9953d-1f54-43be-a3ae-121030f1e07b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" 
podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" Oct 11 10:27:00.581095 master-1 kubenswrapper[4771]: I1011 10:27:00.581035 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log" Oct 11 10:27:00.584893 master-1 kubenswrapper[4771]: I1011 10:27:00.584837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerStarted","Data":"c1c03abafd2dbbbe940e04d24ed5f7bce2acd566241e10c2566ec5ceea96e2b5"} Oct 11 10:27:01.592608 master-1 kubenswrapper[4771]: I1011 10:27:01.591553 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" event={"ID":"868ea5b9-b62a-4683-82c9-760de94ef155","Type":"ContainerStarted","Data":"081fde9dac0d8c6f0177a9a06139a4e92fb38ea47b03713cf7f04ea063469f84"} Oct 11 10:27:01.606420 master-1 kubenswrapper[4771]: I1011 10:27:01.606290 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podStartSLOduration=30.606259031 podStartE2EDuration="30.606259031s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:27:00.603989263 +0000 UTC m=+52.578215734" watchObservedRunningTime="2025-10-11 10:27:01.606259031 +0000 UTC m=+53.580485512" Oct 11 10:27:01.606557 master-1 kubenswrapper[4771]: I1011 10:27:01.606479 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" podStartSLOduration=1.896275089 podStartE2EDuration="5.606471987s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:26:56.638287025 
+0000 UTC m=+48.612513496" lastFinishedPulling="2025-10-11 10:27:00.348483953 +0000 UTC m=+52.322710394" observedRunningTime="2025-10-11 10:27:01.605929132 +0000 UTC m=+53.580155603" watchObservedRunningTime="2025-10-11 10:27:01.606471987 +0000 UTC m=+53.580698468"
Oct 11 10:27:02.357854 master-2 kubenswrapper[4776]: I1011 10:27:02.357768 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:27:02.358395 master-2 kubenswrapper[4776]: E1011 10:27:02.357936 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Oct 11 10:27:02.358395 master-2 kubenswrapper[4776]: E1011 10:27:02.358037 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:18.358014424 +0000 UTC m=+73.142441143 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found
Oct 11 10:27:03.110790 master-1 kubenswrapper[4771]: I1011 10:27:03.110661 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-t9bz7"]
Oct 11 10:27:03.111855 master-1 kubenswrapper[4771]: I1011 10:27:03.111029 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:03.215600 master-1 kubenswrapper[4771]: I1011 10:27:03.215511 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw99n\" (UniqueName: \"kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n\") pod \"mtu-prober-t9bz7\" (UID: \"e7260b6d-3070-42b1-93cd-9ec29dfa50c3\") " pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:03.316563 master-1 kubenswrapper[4771]: I1011 10:27:03.316306 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw99n\" (UniqueName: \"kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n\") pod \"mtu-prober-t9bz7\" (UID: \"e7260b6d-3070-42b1-93cd-9ec29dfa50c3\") " pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:03.351238 master-1 kubenswrapper[4771]: I1011 10:27:03.351114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw99n\" (UniqueName: \"kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n\") pod \"mtu-prober-t9bz7\" (UID: \"e7260b6d-3070-42b1-93cd-9ec29dfa50c3\") " pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:03.431319 master-1 kubenswrapper[4771]: I1011 10:27:03.431051 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:03.450595 master-1 kubenswrapper[4771]: W1011 10:27:03.450490 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7260b6d_3070_42b1_93cd_9ec29dfa50c3.slice/crio-a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472 WatchSource:0}: Error finding container a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472: Status 404 returned error can't find the container with id a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472
Oct 11 10:27:03.599101 master-1 kubenswrapper[4771]: I1011 10:27:03.599009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-t9bz7" event={"ID":"e7260b6d-3070-42b1-93cd-9ec29dfa50c3","Type":"ContainerStarted","Data":"a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472"}
Oct 11 10:27:04.603459 master-1 kubenswrapper[4771]: I1011 10:27:04.603336 4771 generic.go:334] "Generic (PLEG): container finished" podID="e7260b6d-3070-42b1-93cd-9ec29dfa50c3" containerID="a31d75d150e0d2dcf8878fd1b60bee95ea19d0157365ef6735168ff809442b4b" exitCode=0
Oct 11 10:27:04.603459 master-1 kubenswrapper[4771]: I1011 10:27:04.603412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-t9bz7" event={"ID":"e7260b6d-3070-42b1-93cd-9ec29dfa50c3","Type":"ContainerDied","Data":"a31d75d150e0d2dcf8878fd1b60bee95ea19d0157365ef6735168ff809442b4b"}
Oct 11 10:27:05.631823 master-1 kubenswrapper[4771]: I1011 10:27:05.631710 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:05.735228 master-1 kubenswrapper[4771]: I1011 10:27:05.735086 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw99n\" (UniqueName: \"kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n\") pod \"e7260b6d-3070-42b1-93cd-9ec29dfa50c3\" (UID: \"e7260b6d-3070-42b1-93cd-9ec29dfa50c3\") "
Oct 11 10:27:05.740678 master-1 kubenswrapper[4771]: I1011 10:27:05.740602 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n" (OuterVolumeSpecName: "kube-api-access-rw99n") pod "e7260b6d-3070-42b1-93cd-9ec29dfa50c3" (UID: "e7260b6d-3070-42b1-93cd-9ec29dfa50c3"). InnerVolumeSpecName "kube-api-access-rw99n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:27:05.835840 master-1 kubenswrapper[4771]: I1011 10:27:05.835714 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw99n\" (UniqueName: \"kubernetes.io/projected/e7260b6d-3070-42b1-93cd-9ec29dfa50c3-kube-api-access-rw99n\") on node \"master-1\" DevicePath \"\""
Oct 11 10:27:06.611654 master-1 kubenswrapper[4771]: I1011 10:27:06.611589 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-t9bz7" event={"ID":"e7260b6d-3070-42b1-93cd-9ec29dfa50c3","Type":"ContainerDied","Data":"a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472"}
Oct 11 10:27:06.611987 master-1 kubenswrapper[4771]: I1011 10:27:06.611960 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6818c7d94ddd80836f979da47140d38cbf5ace31adebfef26025d803442a472"
Oct 11 10:27:06.612113 master-1 kubenswrapper[4771]: I1011 10:27:06.611677 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-t9bz7"
Oct 11 10:27:08.140298 master-1 kubenswrapper[4771]: I1011 10:27:08.140228 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-t9bz7"]
Oct 11 10:27:08.144438 master-1 kubenswrapper[4771]: I1011 10:27:08.144394 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-t9bz7"]
Oct 11 10:27:08.441727 master-1 kubenswrapper[4771]: I1011 10:27:08.441581 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7260b6d-3070-42b1-93cd-9ec29dfa50c3" path="/var/lib/kubelet/pods/e7260b6d-3070-42b1-93cd-9ec29dfa50c3/volumes"
Oct 11 10:27:13.003239 master-1 kubenswrapper[4771]: I1011 10:27:13.002914 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dgt7f"]
Oct 11 10:27:13.003239 master-1 kubenswrapper[4771]: E1011 10:27:13.003073 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7260b6d-3070-42b1-93cd-9ec29dfa50c3" containerName="prober"
Oct 11 10:27:13.003239 master-1 kubenswrapper[4771]: I1011 10:27:13.003101 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7260b6d-3070-42b1-93cd-9ec29dfa50c3" containerName="prober"
Oct 11 10:27:13.003239 master-1 kubenswrapper[4771]: I1011 10:27:13.003138 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7260b6d-3070-42b1-93cd-9ec29dfa50c3" containerName="prober"
Oct 11 10:27:13.004685 master-1 kubenswrapper[4771]: I1011 10:27:13.003589 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.008285 master-1 kubenswrapper[4771]: I1011 10:27:13.008206 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Oct 11 10:27:13.008553 master-1 kubenswrapper[4771]: I1011 10:27:13.008483 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Oct 11 10:27:13.008652 master-1 kubenswrapper[4771]: I1011 10:27:13.008563 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 11 10:27:13.009385 master-1 kubenswrapper[4771]: I1011 10:27:13.009168 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Oct 11 10:27:13.013720 master-2 kubenswrapper[4776]: I1011 10:27:13.013464 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podStartSLOduration=46.013447558 podStartE2EDuration="46.013447558s" podCreationTimestamp="2025-10-11 10:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:26:59.2134733 +0000 UTC m=+53.997900019" watchObservedRunningTime="2025-10-11 10:27:13.013447558 +0000 UTC m=+67.797874267"
Oct 11 10:27:13.013720 master-2 kubenswrapper[4776]: I1011 10:27:13.013585 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xssj7"]
Oct 11 10:27:13.014540 master-2 kubenswrapper[4776]: I1011 10:27:13.013771 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.016955 master-2 kubenswrapper[4776]: I1011 10:27:13.016919 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Oct 11 10:27:13.017294 master-2 kubenswrapper[4776]: I1011 10:27:13.017259 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 11 10:27:13.017485 master-2 kubenswrapper[4776]: I1011 10:27:13.017443 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Oct 11 10:27:13.023424 master-2 kubenswrapper[4776]: I1011 10:27:13.023362 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Oct 11 10:27:13.129423 master-2 kubenswrapper[4776]: I1011 10:27:13.129306 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129423 master-2 kubenswrapper[4776]: I1011 10:27:13.129426 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129461 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129501 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129599 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129741 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129785 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129818 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129850 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129890 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129922 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.129953 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.129984 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.130013 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.130044 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130232 master-2 kubenswrapper[4776]: I1011 10:27:13.130074 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.130232 master-2 kubenswrapper[4776]: I1011 10:27:13.130108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.182259 master-1 kubenswrapper[4771]: I1011 10:27:13.182098 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-multus\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182259 master-1 kubenswrapper[4771]: I1011 10:27:13.182228 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-multus-certs\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182284 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-conf-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182338 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-cni-binary-copy\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182429 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67k7\" (UniqueName: \"kubernetes.io/projected/b771285f-4d3c-4a7a-9b62-eb804911a351-kube-api-access-r67k7\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182560 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182652 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-socket-dir-parent\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-k8s-cni-cncf-io\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.182745 master-1 kubenswrapper[4771]: I1011 10:27:13.182742 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-etc-kubernetes\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.182789 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-cnibin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.182836 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-netns\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.182883 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-system-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.182924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-hostroot\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.182971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-os-release\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.183015 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-bin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.183065 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-kubelet\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.183341 master-1 kubenswrapper[4771]: I1011 10:27:13.183152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-daemon-config\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f"
Oct 11 10:27:13.201186 master-1 kubenswrapper[4771]: I1011 10:27:13.201112 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lvp6f"]
Oct 11 10:27:13.202187 master-1 kubenswrapper[4771]: I1011 10:27:13.202111 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lvp6f"
Oct 11 10:27:13.206608 master-1 kubenswrapper[4771]: W1011 10:27:13.206555 4771 reflector.go:561] object-"openshift-multus"/"whereabouts-config": failed to list *v1.ConfigMap: configmaps "whereabouts-config" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'master-1' and this object
Oct 11 10:27:13.206608 master-1 kubenswrapper[4771]: E1011 10:27:13.206597 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"whereabouts-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whereabouts-config\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 11 10:27:13.206842 master-1 kubenswrapper[4771]: W1011 10:27:13.206603 4771 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'master-1' and this object
Oct 11 10:27:13.206842 master-1 kubenswrapper[4771]: E1011 10:27:13.206657 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 11 10:27:13.214360 master-2 kubenswrapper[4776]: I1011 10:27:13.214254 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tmg2p"]
Oct 11 10:27:13.214848 master-2 kubenswrapper[4776]: I1011 10:27:13.214802 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tmg2p"
Oct 11 10:27:13.218493 master-2 kubenswrapper[4776]: I1011 10:27:13.218414 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Oct 11 10:27:13.218493 master-2 kubenswrapper[4776]: I1011 10:27:13.218427 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Oct 11 10:27:13.231496 master-2 kubenswrapper[4776]: I1011 10:27:13.231416 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231498 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231611 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231646 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231745 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231784 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231824 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231859 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231862 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231862 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231901 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231965 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231992 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231994 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231975 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232028 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232171 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232196 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232261 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232273 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232335 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232379 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232422 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232433 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232466 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7"
Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232575
4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.233075 master-2 kubenswrapper[4776]: I1011 10:27:13.233010 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.233223 master-2 kubenswrapper[4776]: I1011 10:27:13.233178 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.253059 master-2 kubenswrapper[4776]: I1011 10:27:13.253004 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.284624 master-1 kubenswrapper[4771]: I1011 10:27:13.284394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-conf-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.284624 master-1 kubenswrapper[4771]: I1011 10:27:13.284485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-cni-binary-copy\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.284624 master-1 kubenswrapper[4771]: I1011 10:27:13.284517 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r67k7\" (UniqueName: \"kubernetes.io/projected/b771285f-4d3c-4a7a-9b62-eb804911a351-kube-api-access-r67k7\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.284624 master-1 kubenswrapper[4771]: I1011 10:27:13.284541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.284624 master-1 kubenswrapper[4771]: I1011 10:27:13.284547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-conf-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284680 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-socket-dir-parent\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-socket-dir-parent\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-k8s-cni-cncf-io\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-etc-kubernetes\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284956 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-netns\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284975 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-k8s-cni-cncf-io\") pod 
\"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.284987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-cnibin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285054 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-netns\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285081 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-etc-kubernetes\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285113 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-system-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285157 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-cnibin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 
10:27:13.285194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-hostroot\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-system-cni-dir\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.285209 master-1 kubenswrapper[4771]: I1011 10:27:13.285242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-os-release\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-hostroot\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285292 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-bin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285393 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-bin\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285409 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-os-release\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285437 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-kubelet\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285501 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-kubelet\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285508 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-daemon-config\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-multus\") 
pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285616 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-multus-certs\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285661 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-var-lib-cni-multus\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.285714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b771285f-4d3c-4a7a-9b62-eb804911a351-host-run-multus-certs\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.286500 master-1 kubenswrapper[4771]: I1011 10:27:13.286136 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-cni-binary-copy\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.287254 master-1 kubenswrapper[4771]: I1011 10:27:13.286679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b771285f-4d3c-4a7a-9b62-eb804911a351-multus-daemon-config\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " 
pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.317084 master-1 kubenswrapper[4771]: I1011 10:27:13.316977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r67k7\" (UniqueName: \"kubernetes.io/projected/b771285f-4d3c-4a7a-9b62-eb804911a351-kube-api-access-r67k7\") pod \"multus-dgt7f\" (UID: \"b771285f-4d3c-4a7a-9b62-eb804911a351\") " pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.325939 master-1 kubenswrapper[4771]: I1011 10:27:13.325864 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dgt7f" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.332962 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333028 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftzf\" (UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333066 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333096 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333129 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333276 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333392 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.337247 master-2 kubenswrapper[4776]: I1011 10:27:13.337175 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.340816 master-1 kubenswrapper[4771]: W1011 10:27:13.340753 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb771285f_4d3c_4a7a_9b62_eb804911a351.slice/crio-3efe5f729cca6f9537557a566e71fdf1ab0d7f1e220def443492089756528391 WatchSource:0}: Error finding container 3efe5f729cca6f9537557a566e71fdf1ab0d7f1e220def443492089756528391: Status 404 returned error can't find the container with id 3efe5f729cca6f9537557a566e71fdf1ab0d7f1e220def443492089756528391 Oct 11 10:27:13.353472 master-2 kubenswrapper[4776]: W1011 10:27:13.353358 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e810b8c_5973_4846_b19f_cd8aa3c4ba3e.slice/crio-b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08 WatchSource:0}: Error finding container b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08: Status 404 returned error can't find the container with id b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08 Oct 11 10:27:13.386847 master-1 kubenswrapper[4771]: I1011 10:27:13.386754 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-system-cni-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.386847 master-1 
kubenswrapper[4771]: I1011 10:27:13.386832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387050 master-1 kubenswrapper[4771]: I1011 10:27:13.386968 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-binary-copy\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387151 master-1 kubenswrapper[4771]: I1011 10:27:13.387095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387216 master-1 kubenswrapper[4771]: I1011 10:27:13.387162 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5m7z\" (UniqueName: \"kubernetes.io/projected/0b4dff81-4eaa-422f-8de9-d6133a8b2016-kube-api-access-t5m7z\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387216 master-1 kubenswrapper[4771]: I1011 10:27:13.387202 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-os-release\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387333 master-1 kubenswrapper[4771]: I1011 10:27:13.387237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.387333 master-1 kubenswrapper[4771]: I1011 10:27:13.387320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cnibin\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.434272 master-2 kubenswrapper[4776]: I1011 10:27:13.434071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.434272 master-2 kubenswrapper[4776]: I1011 10:27:13.434202 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.434272 master-2 
kubenswrapper[4776]: I1011 10:27:13.434303 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434379 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434481 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " 
pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftzf\" (UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434582 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434759 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434914 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435847 master-2 kubenswrapper[4776]: I1011 10:27:13.435757 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435976 master-2 kubenswrapper[4776]: I1011 10:27:13.435910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.436049 master-2 kubenswrapper[4776]: I1011 10:27:13.435910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.436891 master-1 kubenswrapper[4771]: I1011 10:27:13.436857 4771 scope.go:117] "RemoveContainer" containerID="3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1" Oct 11 10:27:13.466039 master-2 kubenswrapper[4776]: I1011 10:27:13.465941 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dftzf\" (UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod 
\"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.488388 master-1 kubenswrapper[4771]: I1011 10:27:13.488303 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-os-release\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488507 master-1 kubenswrapper[4771]: I1011 10:27:13.488412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488507 master-1 kubenswrapper[4771]: I1011 10:27:13.488461 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cnibin\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488638 master-1 kubenswrapper[4771]: I1011 10:27:13.488514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-system-cni-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488638 master-1 kubenswrapper[4771]: I1011 10:27:13.488560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488638 master-1 kubenswrapper[4771]: I1011 10:27:13.488599 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-binary-copy\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488638 master-1 kubenswrapper[4771]: I1011 10:27:13.488603 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-os-release\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488857 master-1 kubenswrapper[4771]: I1011 10:27:13.488666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cnibin\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488857 master-1 kubenswrapper[4771]: I1011 10:27:13.488636 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488857 master-1 kubenswrapper[4771]: I1011 
10:27:13.488710 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-system-cni-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.488857 master-1 kubenswrapper[4771]: I1011 10:27:13.488758 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5m7z\" (UniqueName: \"kubernetes.io/projected/0b4dff81-4eaa-422f-8de9-d6133a8b2016-kube-api-access-t5m7z\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.489076 master-1 kubenswrapper[4771]: I1011 10:27:13.488906 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b4dff81-4eaa-422f-8de9-d6133a8b2016-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.489975 master-1 kubenswrapper[4771]: I1011 10:27:13.489921 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-binary-copy\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.520224 master-1 kubenswrapper[4771]: I1011 10:27:13.520127 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5m7z\" (UniqueName: \"kubernetes.io/projected/0b4dff81-4eaa-422f-8de9-d6133a8b2016-kube-api-access-t5m7z\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " 
pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:13.529157 master-2 kubenswrapper[4776]: I1011 10:27:13.529051 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.545525 master-2 kubenswrapper[4776]: W1011 10:27:13.545443 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5839b979_8c02_4e0d_9dc1_b1843d8ce872.slice/crio-0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e WatchSource:0}: Error finding container 0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e: Status 404 returned error can't find the container with id 0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e Oct 11 10:27:13.632099 master-1 kubenswrapper[4771]: I1011 10:27:13.632028 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgt7f" event={"ID":"b771285f-4d3c-4a7a-9b62-eb804911a351","Type":"ContainerStarted","Data":"3efe5f729cca6f9537557a566e71fdf1ab0d7f1e220def443492089756528391"} Oct 11 10:27:13.989043 master-1 kubenswrapper[4771]: I1011 10:27:13.988572 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fgjvw"] Oct 11 10:27:13.989390 master-1 kubenswrapper[4771]: I1011 10:27:13.989324 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:13.989543 master-1 kubenswrapper[4771]: E1011 10:27:13.989496 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:13.998813 master-2 kubenswrapper[4776]: I1011 10:27:13.998664 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:27:13.999400 master-2 kubenswrapper[4776]: I1011 10:27:13.999351 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:13.999542 master-2 kubenswrapper[4776]: E1011 10:27:13.999481 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:14.039937 master-2 kubenswrapper[4776]: I1011 10:27:14.039835 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.040837 master-2 kubenswrapper[4776]: I1011 10:27:14.040006 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.094531 master-1 kubenswrapper[4771]: I1011 10:27:14.094423 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.094531 master-1 kubenswrapper[4771]: I1011 10:27:14.094526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twgg\" (UniqueName: \"kubernetes.io/projected/2c084572-a5c9-4787-8a14-b7d6b0810a1b-kube-api-access-9twgg\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.140839 master-2 kubenswrapper[4776]: I1011 10:27:14.140759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.141014 master-2 kubenswrapper[4776]: I1011 10:27:14.140869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.141014 master-2 kubenswrapper[4776]: E1011 10:27:14.140967 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.141301 master-2 kubenswrapper[4776]: E1011 10:27:14.141281 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda 
nodeName:}" failed. No retries permitted until 2025-10-11 10:27:14.641246187 +0000 UTC m=+69.425672936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.171716 master-2 kubenswrapper[4776]: I1011 10:27:14.171624 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.195564 master-1 kubenswrapper[4771]: I1011 10:27:14.195471 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.195799 master-1 kubenswrapper[4771]: I1011 10:27:14.195576 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twgg\" (UniqueName: \"kubernetes.io/projected/2c084572-a5c9-4787-8a14-b7d6b0810a1b-kube-api-access-9twgg\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.195799 master-1 kubenswrapper[4771]: E1011 10:27:14.195738 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.195913 master-1 kubenswrapper[4771]: E1011 10:27:14.195852 4771 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:14.695825369 +0000 UTC m=+66.670051850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.230709 master-2 kubenswrapper[4776]: I1011 10:27:14.230557 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e"} Oct 11 10:27:14.232225 master-2 kubenswrapper[4776]: I1011 10:27:14.232169 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xssj7" event={"ID":"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e","Type":"ContainerStarted","Data":"b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08"} Oct 11 10:27:14.234719 master-1 kubenswrapper[4771]: I1011 10:27:14.234630 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 11 10:27:14.240500 master-1 kubenswrapper[4771]: I1011 10:27:14.240412 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:14.241501 master-1 kubenswrapper[4771]: I1011 10:27:14.241443 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9twgg\" (UniqueName: \"kubernetes.io/projected/2c084572-a5c9-4787-8a14-b7d6b0810a1b-kube-api-access-9twgg\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.295544 master-1 kubenswrapper[4771]: I1011 10:27:14.295478 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Oct 11 10:27:14.300306 master-1 kubenswrapper[4771]: I1011 10:27:14.300245 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/0b4dff81-4eaa-422f-8de9-d6133a8b2016-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lvp6f\" (UID: \"0b4dff81-4eaa-422f-8de9-d6133a8b2016\") " pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:14.422282 master-1 kubenswrapper[4771]: I1011 10:27:14.422165 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" Oct 11 10:27:14.636467 master-1 kubenswrapper[4771]: I1011 10:27:14.636413 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/2.log" Oct 11 10:27:14.637377 master-1 kubenswrapper[4771]: I1011 10:27:14.637317 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/1.log" Oct 11 10:27:14.638328 master-1 kubenswrapper[4771]: I1011 10:27:14.638284 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" exitCode=1 Oct 11 10:27:14.638409 master-1 kubenswrapper[4771]: I1011 10:27:14.638377 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2"} Oct 11 10:27:14.638446 master-1 kubenswrapper[4771]: I1011 10:27:14.638434 4771 scope.go:117] "RemoveContainer" containerID="3fea68ffee0d3f0d9bbafc91305f308f57d148b9eb031b738b0ded06753f61e1" Oct 11 10:27:14.639214 master-1 kubenswrapper[4771]: I1011 10:27:14.639171 4771 scope.go:117] "RemoveContainer" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" Oct 11 10:27:14.639474 master-1 kubenswrapper[4771]: E1011 10:27:14.639431 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_openshift-cloud-controller-manager-operator(4ba9953d-1f54-43be-a3ae-121030f1e07b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" Oct 11 10:27:14.639781 master-1 kubenswrapper[4771]: I1011 10:27:14.639688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerStarted","Data":"4f3a0d0bd0cb0b63fa93b7e91e0a5742f68e970f73492d04d3ca1e5f37e65916"} Oct 11 10:27:14.645222 master-2 kubenswrapper[4776]: I1011 10:27:14.645130 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.645406 master-2 kubenswrapper[4776]: E1011 10:27:14.645336 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.645451 master-2 kubenswrapper[4776]: E1011 10:27:14.645420 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:15.645400412 +0000 UTC m=+70.429827111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.699312 master-1 kubenswrapper[4771]: I1011 10:27:14.699225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:14.699471 master-1 kubenswrapper[4771]: E1011 10:27:14.699412 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.699542 master-1 kubenswrapper[4771]: E1011 10:27:14.699475 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:15.699458675 +0000 UTC m=+67.673685126 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.435998 master-1 kubenswrapper[4771]: I1011 10:27:15.435859 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:15.436946 master-1 kubenswrapper[4771]: E1011 10:27:15.436060 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:15.644883 master-1 kubenswrapper[4771]: I1011 10:27:15.644808 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/2.log" Oct 11 10:27:15.651753 master-2 kubenswrapper[4776]: I1011 10:27:15.651655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:15.652257 master-2 kubenswrapper[4776]: E1011 10:27:15.651798 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.652257 master-2 kubenswrapper[4776]: E1011 10:27:15.651866 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:17.651850268 +0000 UTC m=+72.436276977 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.706916 master-1 kubenswrapper[4771]: I1011 10:27:15.706799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:15.707060 master-1 kubenswrapper[4771]: E1011 10:27:15.707023 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.707213 master-1 kubenswrapper[4771]: E1011 10:27:15.707157 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:17.707122286 +0000 UTC m=+69.681348767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:16.058668 master-2 kubenswrapper[4776]: I1011 10:27:16.058173 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:16.058668 master-2 kubenswrapper[4776]: E1011 10:27:16.058552 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:17.239057 master-2 kubenswrapper[4776]: I1011 10:27:17.238831 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="9262160bc177411fc7cf6da6d14f6188e43faa873f8e3c2271486fbddfecfb2d" exitCode=0 Oct 11 10:27:17.239057 master-2 kubenswrapper[4776]: I1011 10:27:17.238924 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"9262160bc177411fc7cf6da6d14f6188e43faa873f8e3c2271486fbddfecfb2d"} Oct 11 10:27:17.436730 master-1 kubenswrapper[4771]: I1011 10:27:17.436665 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:17.437935 master-1 kubenswrapper[4771]: E1011 10:27:17.436976 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:17.652068 master-1 kubenswrapper[4771]: I1011 10:27:17.651991 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="75355fa584d02989c18f5564dbedfea406415ac8f1958dd2d81970ffd991509e" exitCode=0 Oct 11 10:27:17.652068 master-1 kubenswrapper[4771]: I1011 10:27:17.652046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"75355fa584d02989c18f5564dbedfea406415ac8f1958dd2d81970ffd991509e"} Oct 11 10:27:17.664659 master-2 kubenswrapper[4776]: I1011 10:27:17.664556 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:17.665071 master-2 kubenswrapper[4776]: E1011 10:27:17.664746 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:17.665071 master-2 kubenswrapper[4776]: E1011 10:27:17.664833 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:21.664815902 +0000 UTC m=+76.449242611 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:17.679385 master-1 kubenswrapper[4771]: I1011 10:27:17.679283 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf"] Oct 11 10:27:17.679559 master-1 kubenswrapper[4771]: I1011 10:27:17.679494 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="cluster-cloud-controller-manager" containerID="cri-o://6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" gracePeriod=30 Oct 11 10:27:17.679666 master-1 kubenswrapper[4771]: I1011 10:27:17.679563 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="config-sync-controllers" containerID="cri-o://ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" gracePeriod=30 Oct 11 10:27:17.744771 master-1 kubenswrapper[4771]: I1011 10:27:17.744694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:17.744949 master-1 kubenswrapper[4771]: E1011 10:27:17.744869 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: 
object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:17.745129 master-1 kubenswrapper[4771]: E1011 10:27:17.745074 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:21.745033945 +0000 UTC m=+73.719260426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:17.821282 master-1 kubenswrapper[4771]: I1011 10:27:17.821211 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/2.log" Oct 11 10:27:17.822380 master-1 kubenswrapper[4771]: I1011 10:27:17.822299 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:27:17.946474 master-1 kubenswrapper[4771]: I1011 10:27:17.946332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube\") pod \"4ba9953d-1f54-43be-a3ae-121030f1e07b\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " Oct 11 10:27:17.946474 master-1 kubenswrapper[4771]: I1011 10:27:17.946427 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls\") pod \"4ba9953d-1f54-43be-a3ae-121030f1e07b\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " Oct 11 10:27:17.946474 master-1 kubenswrapper[4771]: I1011 10:27:17.946471 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images\") pod \"4ba9953d-1f54-43be-a3ae-121030f1e07b\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " Oct 11 10:27:17.946734 master-1 kubenswrapper[4771]: I1011 10:27:17.946508 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config\") pod \"4ba9953d-1f54-43be-a3ae-121030f1e07b\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " Oct 11 10:27:17.946734 master-1 kubenswrapper[4771]: I1011 10:27:17.946498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "4ba9953d-1f54-43be-a3ae-121030f1e07b" (UID: "4ba9953d-1f54-43be-a3ae-121030f1e07b"). 
InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:17.946734 master-1 kubenswrapper[4771]: I1011 10:27:17.946550 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c85t\" (UniqueName: \"kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t\") pod \"4ba9953d-1f54-43be-a3ae-121030f1e07b\" (UID: \"4ba9953d-1f54-43be-a3ae-121030f1e07b\") " Oct 11 10:27:17.946734 master-1 kubenswrapper[4771]: I1011 10:27:17.946650 4771 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4ba9953d-1f54-43be-a3ae-121030f1e07b-host-etc-kube\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:17.947107 master-1 kubenswrapper[4771]: I1011 10:27:17.947075 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "4ba9953d-1f54-43be-a3ae-121030f1e07b" (UID: "4ba9953d-1f54-43be-a3ae-121030f1e07b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:17.947655 master-1 kubenswrapper[4771]: I1011 10:27:17.947584 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images" (OuterVolumeSpecName: "images") pod "4ba9953d-1f54-43be-a3ae-121030f1e07b" (UID: "4ba9953d-1f54-43be-a3ae-121030f1e07b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:17.951934 master-1 kubenswrapper[4771]: I1011 10:27:17.951883 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t" (OuterVolumeSpecName: "kube-api-access-4c85t") pod "4ba9953d-1f54-43be-a3ae-121030f1e07b" (UID: "4ba9953d-1f54-43be-a3ae-121030f1e07b"). InnerVolumeSpecName "kube-api-access-4c85t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:27:17.952707 master-1 kubenswrapper[4771]: I1011 10:27:17.952650 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "4ba9953d-1f54-43be-a3ae-121030f1e07b" (UID: "4ba9953d-1f54-43be-a3ae-121030f1e07b"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:27:18.047491 master-1 kubenswrapper[4771]: I1011 10:27:18.047396 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c85t\" (UniqueName: \"kubernetes.io/projected/4ba9953d-1f54-43be-a3ae-121030f1e07b-kube-api-access-4c85t\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:18.047491 master-1 kubenswrapper[4771]: I1011 10:27:18.047468 4771 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ba9953d-1f54-43be-a3ae-121030f1e07b-cloud-controller-manager-operator-tls\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:18.047704 master-1 kubenswrapper[4771]: I1011 10:27:18.047507 4771 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-images\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:18.047704 master-1 kubenswrapper[4771]: 
I1011 10:27:18.047535 4771 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ba9953d-1f54-43be-a3ae-121030f1e07b-auth-proxy-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:18.058645 master-2 kubenswrapper[4776]: I1011 10:27:18.058240 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:18.058645 master-2 kubenswrapper[4776]: E1011 10:27:18.058361 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:18.369604 master-2 kubenswrapper[4776]: I1011 10:27:18.369447 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:27:18.370154 master-2 kubenswrapper[4776]: E1011 10:27:18.369648 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:18.370154 master-2 kubenswrapper[4776]: E1011 10:27:18.369742 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:50.36972206 +0000 UTC m=+105.154148769 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:18.656850 master-1 kubenswrapper[4771]: I1011 10:27:18.656758 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_4ba9953d-1f54-43be-a3ae-121030f1e07b/kube-rbac-proxy/2.log" Oct 11 10:27:18.657946 master-1 kubenswrapper[4771]: I1011 10:27:18.657866 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerID="ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" exitCode=0 Oct 11 10:27:18.657946 master-1 kubenswrapper[4771]: I1011 10:27:18.657899 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerID="6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" exitCode=0 Oct 11 10:27:18.657946 master-1 kubenswrapper[4771]: I1011 10:27:18.657925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62"} Oct 11 10:27:18.658153 master-1 kubenswrapper[4771]: I1011 10:27:18.657956 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725"} Oct 11 10:27:18.658153 master-1 kubenswrapper[4771]: I1011 10:27:18.657974 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" event={"ID":"4ba9953d-1f54-43be-a3ae-121030f1e07b","Type":"ContainerDied","Data":"45041b48c16ec7268bc8f5e7bf6ad631a6a8600e9455c7fcc91bcbcacf5e65a9"} Oct 11 10:27:18.658153 master-1 kubenswrapper[4771]: I1011 10:27:18.657974 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf" Oct 11 10:27:18.658153 master-1 kubenswrapper[4771]: I1011 10:27:18.658023 4771 scope.go:117] "RemoveContainer" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" Oct 11 10:27:18.676732 master-1 kubenswrapper[4771]: I1011 10:27:18.676691 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf"] Oct 11 10:27:18.680087 master-1 kubenswrapper[4771]: I1011 10:27:18.680042 4771 scope.go:117] "RemoveContainer" containerID="ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" Oct 11 10:27:18.680925 master-1 kubenswrapper[4771]: I1011 10:27:18.680883 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf"] Oct 11 10:27:18.694020 master-1 kubenswrapper[4771]: I1011 10:27:18.693975 4771 scope.go:117] "RemoveContainer" containerID="6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" Oct 11 10:27:18.706855 master-1 kubenswrapper[4771]: I1011 10:27:18.706823 4771 scope.go:117] "RemoveContainer" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" Oct 11 10:27:18.707932 master-1 kubenswrapper[4771]: E1011 10:27:18.707881 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2\": container with ID starting with 8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2 not found: ID does not exist" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" Oct 11 10:27:18.708058 master-1 kubenswrapper[4771]: I1011 10:27:18.707925 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2"} err="failed to get container status \"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2\": rpc error: code = NotFound desc = could not find container \"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2\": container with ID starting with 8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2 not found: ID does not exist" Oct 11 10:27:18.708058 master-1 kubenswrapper[4771]: I1011 10:27:18.707975 4771 scope.go:117] "RemoveContainer" containerID="ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" Oct 11 10:27:18.708475 master-1 kubenswrapper[4771]: E1011 10:27:18.708438 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62\": container with ID starting with ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62 not found: ID does not exist" containerID="ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" Oct 11 10:27:18.708595 master-1 kubenswrapper[4771]: I1011 10:27:18.708474 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62"} err="failed to get container status \"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62\": rpc error: code = NotFound desc = could not find container 
\"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62\": container with ID starting with ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62 not found: ID does not exist" Oct 11 10:27:18.708595 master-1 kubenswrapper[4771]: I1011 10:27:18.708497 4771 scope.go:117] "RemoveContainer" containerID="6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" Oct 11 10:27:18.709005 master-1 kubenswrapper[4771]: E1011 10:27:18.708970 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725\": container with ID starting with 6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725 not found: ID does not exist" containerID="6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" Oct 11 10:27:18.709096 master-1 kubenswrapper[4771]: I1011 10:27:18.709004 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725"} err="failed to get container status \"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725\": rpc error: code = NotFound desc = could not find container \"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725\": container with ID starting with 6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725 not found: ID does not exist" Oct 11 10:27:18.709096 master-1 kubenswrapper[4771]: I1011 10:27:18.709027 4771 scope.go:117] "RemoveContainer" containerID="8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2" Oct 11 10:27:18.709660 master-1 kubenswrapper[4771]: I1011 10:27:18.709477 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2"} err="failed to get container status 
\"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2\": rpc error: code = NotFound desc = could not find container \"8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2\": container with ID starting with 8520322e3dea9c9b65488967ddc22fb2da41fdf46714aac23d0adc0f1ca902c2 not found: ID does not exist" Oct 11 10:27:18.709660 master-1 kubenswrapper[4771]: I1011 10:27:18.709525 4771 scope.go:117] "RemoveContainer" containerID="ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62" Oct 11 10:27:18.710049 master-1 kubenswrapper[4771]: I1011 10:27:18.709990 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62"} err="failed to get container status \"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62\": rpc error: code = NotFound desc = could not find container \"ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62\": container with ID starting with ab37ae9b4c5a5566e7240156eb5f4a7f5948a585bd81d70fb5ce16c5156bba62 not found: ID does not exist" Oct 11 10:27:18.710049 master-1 kubenswrapper[4771]: I1011 10:27:18.710028 4771 scope.go:117] "RemoveContainer" containerID="6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725" Oct 11 10:27:18.710464 master-1 kubenswrapper[4771]: I1011 10:27:18.710432 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725"} err="failed to get container status \"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725\": rpc error: code = NotFound desc = could not find container \"6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725\": container with ID starting with 6ea8c6bb10eb614aa2e5d02bf9e477a7b7013b28a2734df5ba12d856f6095725 not found: ID does not exist" Oct 11 10:27:18.715046 master-1 kubenswrapper[4771]: I1011 10:27:18.714995 4771 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp"] Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: E1011 10:27:18.715076 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: I1011 10:27:18.715091 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: E1011 10:27:18.715100 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: I1011 10:27:18.715108 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: E1011 10:27:18.715116 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="cluster-cloud-controller-manager" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: I1011 10:27:18.715124 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="cluster-cloud-controller-manager" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: E1011 10:27:18.715133 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="config-sync-controllers" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: I1011 10:27:18.715141 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="config-sync-controllers" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: E1011 10:27:18.715148 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715153 master-1 kubenswrapper[4771]: I1011 10:27:18.715157 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715185 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="cluster-cloud-controller-manager" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715195 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="config-sync-controllers" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715204 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715212 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715247 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" containerName="kube-rbac-proxy" Oct 11 10:27:18.715835 master-1 kubenswrapper[4771]: I1011 10:27:18.715497 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.719321 master-1 kubenswrapper[4771]: I1011 10:27:18.719260 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Oct 11 10:27:18.719842 master-1 kubenswrapper[4771]: I1011 10:27:18.719787 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:27:18.720970 master-1 kubenswrapper[4771]: I1011 10:27:18.720490 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 10:27:18.720970 master-1 kubenswrapper[4771]: I1011 10:27:18.720088 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Oct 11 10:27:18.720970 master-1 kubenswrapper[4771]: I1011 10:27:18.720688 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Oct 11 10:27:18.853301 master-1 kubenswrapper[4771]: I1011 10:27:18.853217 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e115f8be-9e65-4407-8111-568e5ea8ac1b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.853479 master-1 kubenswrapper[4771]: I1011 10:27:18.853313 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-images\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.853479 master-1 kubenswrapper[4771]: I1011 10:27:18.853422 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e115f8be-9e65-4407-8111-568e5ea8ac1b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.853479 master-1 kubenswrapper[4771]: I1011 10:27:18.853463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cn6\" (UniqueName: \"kubernetes.io/projected/e115f8be-9e65-4407-8111-568e5ea8ac1b-kube-api-access-h5cn6\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.853613 master-1 kubenswrapper[4771]: I1011 10:27:18.853512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954548 master-1 kubenswrapper[4771]: I1011 10:27:18.954465 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e115f8be-9e65-4407-8111-568e5ea8ac1b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954548 master-1 kubenswrapper[4771]: I1011 10:27:18.954537 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-images\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954894 master-1 kubenswrapper[4771]: I1011 10:27:18.954567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e115f8be-9e65-4407-8111-568e5ea8ac1b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954894 master-1 kubenswrapper[4771]: I1011 10:27:18.954591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cn6\" (UniqueName: \"kubernetes.io/projected/e115f8be-9e65-4407-8111-568e5ea8ac1b-kube-api-access-h5cn6\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954894 master-1 kubenswrapper[4771]: I1011 
10:27:18.954618 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.954894 master-1 kubenswrapper[4771]: I1011 10:27:18.954699 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e115f8be-9e65-4407-8111-568e5ea8ac1b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.955659 master-1 kubenswrapper[4771]: I1011 10:27:18.955606 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.956100 master-1 kubenswrapper[4771]: I1011 10:27:18.956006 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e115f8be-9e65-4407-8111-568e5ea8ac1b-images\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.959536 master-1 kubenswrapper[4771]: I1011 10:27:18.959390 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e115f8be-9e65-4407-8111-568e5ea8ac1b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:18.986925 master-1 kubenswrapper[4771]: I1011 10:27:18.986813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cn6\" (UniqueName: \"kubernetes.io/projected/e115f8be-9e65-4407-8111-568e5ea8ac1b-kube-api-access-h5cn6\") pod \"cluster-cloud-controller-manager-operator-779749f859-5xxzp\" (UID: \"e115f8be-9e65-4407-8111-568e5ea8ac1b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:19.037699 master-1 kubenswrapper[4771]: I1011 10:27:19.037623 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" Oct 11 10:27:19.051644 master-1 kubenswrapper[4771]: W1011 10:27:19.051573 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode115f8be_9e65_4407_8111_568e5ea8ac1b.slice/crio-3ad4baabaec7ad253d370491f06ee20607d577a8f41656c712791c090e5f1999 WatchSource:0}: Error finding container 3ad4baabaec7ad253d370491f06ee20607d577a8f41656c712791c090e5f1999: Status 404 returned error can't find the container with id 3ad4baabaec7ad253d370491f06ee20607d577a8f41656c712791c090e5f1999 Oct 11 10:27:19.436150 master-1 kubenswrapper[4771]: I1011 10:27:19.436080 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:19.436309 master-1 kubenswrapper[4771]: E1011 10:27:19.436268 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:19.662722 master-1 kubenswrapper[4771]: I1011 10:27:19.662661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerStarted","Data":"57a847f740989cb73678819627757d06b52fb1e138c2a8189da83142b3abbcbc"} Oct 11 10:27:19.662722 master-1 kubenswrapper[4771]: I1011 10:27:19.662710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerStarted","Data":"42d0299cfe1d4477be84432556c151ce87928a6796608302b2a993479ff1ae79"} Oct 11 10:27:19.662722 master-1 kubenswrapper[4771]: I1011 10:27:19.662721 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerStarted","Data":"3ad4baabaec7ad253d370491f06ee20607d577a8f41656c712791c090e5f1999"} Oct 11 10:27:20.057937 master-2 kubenswrapper[4776]: I1011 10:27:20.057897 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:20.058500 master-2 kubenswrapper[4776]: E1011 10:27:20.058030 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:20.440214 master-1 kubenswrapper[4771]: I1011 10:27:20.440121 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba9953d-1f54-43be-a3ae-121030f1e07b" path="/var/lib/kubelet/pods/4ba9953d-1f54-43be-a3ae-121030f1e07b/volumes" Oct 11 10:27:20.667969 master-1 kubenswrapper[4771]: I1011 10:27:20.667923 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/0.log" Oct 11 10:27:20.668882 master-1 kubenswrapper[4771]: I1011 10:27:20.668833 4771 generic.go:334] "Generic (PLEG): container finished" podID="e115f8be-9e65-4407-8111-568e5ea8ac1b" containerID="5585d78883912bb8eeedc837fe074ce0bf4bdc8294ad85bf3cadcef69368c941" exitCode=1 Oct 11 10:27:20.668924 master-1 kubenswrapper[4771]: I1011 10:27:20.668887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerDied","Data":"5585d78883912bb8eeedc837fe074ce0bf4bdc8294ad85bf3cadcef69368c941"} Oct 11 10:27:20.669320 master-1 kubenswrapper[4771]: I1011 10:27:20.669288 4771 scope.go:117] "RemoveContainer" containerID="5585d78883912bb8eeedc837fe074ce0bf4bdc8294ad85bf3cadcef69368c941" Oct 11 10:27:21.437053 master-1 
kubenswrapper[4771]: I1011 10:27:21.436954 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:21.437467 master-1 kubenswrapper[4771]: E1011 10:27:21.437255 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:21.693942 master-2 kubenswrapper[4776]: I1011 10:27:21.693873 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:21.694501 master-2 kubenswrapper[4776]: E1011 10:27:21.694005 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:21.694501 master-2 kubenswrapper[4776]: E1011 10:27:21.694056 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.694043026 +0000 UTC m=+84.478469725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:21.800151 master-1 kubenswrapper[4771]: I1011 10:27:21.799981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:21.800832 master-1 kubenswrapper[4771]: E1011 10:27:21.800229 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:21.800832 master-1 kubenswrapper[4771]: E1011 10:27:21.800336 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.80030183 +0000 UTC m=+81.774528311 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:22.058396 master-2 kubenswrapper[4776]: I1011 10:27:22.058268 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:22.058549 master-2 kubenswrapper[4776]: E1011 10:27:22.058423 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:23.435930 master-1 kubenswrapper[4771]: I1011 10:27:23.435865 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:23.436586 master-1 kubenswrapper[4771]: E1011 10:27:23.436101 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:24.058763 master-2 kubenswrapper[4776]: I1011 10:27:24.058024 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:24.058763 master-2 kubenswrapper[4776]: E1011 10:27:24.058159 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:24.254596 master-2 kubenswrapper[4776]: I1011 10:27:24.254510 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="41121d6fe516e7df58567d18539545a3bcb2156ba2155868d301fff06925c843" exitCode=0 Oct 11 10:27:24.254596 master-2 kubenswrapper[4776]: I1011 10:27:24.254573 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"41121d6fe516e7df58567d18539545a3bcb2156ba2155868d301fff06925c843"} Oct 11 10:27:24.256581 master-2 kubenswrapper[4776]: I1011 10:27:24.256531 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xssj7" event={"ID":"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e","Type":"ContainerStarted","Data":"0413888b074dd9127214d0d2728c150c3bc3de7dddcf161739d4e47972fedb12"} Oct 11 10:27:24.310334 master-2 kubenswrapper[4776]: I1011 10:27:24.310126 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xssj7" podStartSLOduration=0.879619747 podStartE2EDuration="11.310098816s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:27:13.356219236 +0000 UTC m=+68.140645985" lastFinishedPulling="2025-10-11 10:27:23.786698345 +0000 UTC m=+78.571125054" observedRunningTime="2025-10-11 10:27:24.309941051 +0000 UTC m=+79.094367770" watchObservedRunningTime="2025-10-11 10:27:24.310098816 +0000 UTC m=+79.094525565" Oct 11 10:27:25.388993 master-1 kubenswrapper[4771]: I1011 10:27:25.388874 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb"] Oct 11 10:27:25.389905 master-1 kubenswrapper[4771]: I1011 10:27:25.389250 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.391985 master-1 kubenswrapper[4771]: I1011 10:27:25.391906 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 11 10:27:25.392142 master-1 kubenswrapper[4771]: I1011 10:27:25.392097 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 10:27:25.392640 master-1 kubenswrapper[4771]: I1011 10:27:25.392522 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 10:27:25.392640 master-1 kubenswrapper[4771]: I1011 10:27:25.392524 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 11 10:27:25.393352 master-1 kubenswrapper[4771]: I1011 10:27:25.393288 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 10:27:25.397810 master-2 kubenswrapper[4776]: I1011 10:27:25.397601 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k"] Oct 11 10:27:25.398562 master-2 kubenswrapper[4776]: I1011 10:27:25.397865 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.400482 master-2 kubenswrapper[4776]: I1011 10:27:25.400424 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 10:27:25.400482 master-2 kubenswrapper[4776]: I1011 10:27:25.400461 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 11 10:27:25.401728 master-2 kubenswrapper[4776]: I1011 10:27:25.401605 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 10:27:25.401847 master-2 kubenswrapper[4776]: I1011 10:27:25.401764 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 10:27:25.401847 master-2 kubenswrapper[4776]: I1011 10:27:25.401832 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 11 10:27:25.426483 master-1 kubenswrapper[4771]: I1011 10:27:25.426395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-env-overrides\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.426483 master-1 kubenswrapper[4771]: I1011 10:27:25.426474 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snph7\" (UniqueName: \"kubernetes.io/projected/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-kube-api-access-snph7\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 
10:27:25.426716 master-1 kubenswrapper[4771]: I1011 10:27:25.426519 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.426716 master-1 kubenswrapper[4771]: I1011 10:27:25.426555 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.436874 master-1 kubenswrapper[4771]: I1011 10:27:25.436756 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:25.437085 master-1 kubenswrapper[4771]: E1011 10:27:25.437018 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521483 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521520 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521800 master-2 kubenswrapper[4776]: I1011 10:27:25.521597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.527262 master-1 
kubenswrapper[4771]: I1011 10:27:25.527179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-env-overrides\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.527262 master-1 kubenswrapper[4771]: I1011 10:27:25.527242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snph7\" (UniqueName: \"kubernetes.io/projected/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-kube-api-access-snph7\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.527556 master-1 kubenswrapper[4771]: I1011 10:27:25.527282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.527556 master-1 kubenswrapper[4771]: I1011 10:27:25.527319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.528321 master-1 kubenswrapper[4771]: I1011 10:27:25.528250 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-env-overrides\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.528787 master-1 kubenswrapper[4771]: I1011 10:27:25.528726 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.532869 master-1 kubenswrapper[4771]: I1011 10:27:25.532819 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.546934 master-1 kubenswrapper[4771]: I1011 10:27:25.546820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snph7\" (UniqueName: \"kubernetes.io/projected/a65a56b0-5ee8-4429-8fe5-b33a6f29bc79-kube-api-access-snph7\") pod \"ovnkube-control-plane-864d695c77-5mflb\" (UID: \"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.594378 master-1 kubenswrapper[4771]: I1011 10:27:25.594240 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fl2bs"] Oct 11 10:27:25.595719 master-1 kubenswrapper[4771]: I1011 10:27:25.595668 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.598580 master-1 kubenswrapper[4771]: I1011 10:27:25.598530 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 10:27:25.598733 master-1 kubenswrapper[4771]: I1011 10:27:25.598681 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 10:27:25.599843 master-2 kubenswrapper[4776]: I1011 10:27:25.599739 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:25.600297 master-2 kubenswrapper[4776]: I1011 10:27:25.600262 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.602737 master-2 kubenswrapper[4776]: I1011 10:27:25.602692 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 10:27:25.603865 master-2 kubenswrapper[4776]: I1011 10:27:25.603821 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 10:27:25.622309 master-2 kubenswrapper[4776]: I1011 10:27:25.622255 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622328 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622355 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622378 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.623075 master-2 kubenswrapper[4776]: I1011 10:27:25.623050 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.623265 master-2 kubenswrapper[4776]: I1011 10:27:25.623243 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.625568 master-2 kubenswrapper[4776]: I1011 10:27:25.625537 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.628383 master-1 kubenswrapper[4771]: I1011 10:27:25.628290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628504 master-1 kubenswrapper[4771]: I1011 10:27:25.628445 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628572 master-1 kubenswrapper[4771]: I1011 10:27:25.628502 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628572 master-1 kubenswrapper[4771]: I1011 10:27:25.628553 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd\") pod \"ovnkube-node-fl2bs\" (UID: 
\"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628752 master-1 kubenswrapper[4771]: I1011 10:27:25.628651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628752 master-1 kubenswrapper[4771]: I1011 10:27:25.628707 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628874 master-1 kubenswrapper[4771]: I1011 10:27:25.628781 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628931 master-1 kubenswrapper[4771]: I1011 10:27:25.628867 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.628931 master-1 kubenswrapper[4771]: I1011 10:27:25.628913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629043 master-1 kubenswrapper[4771]: I1011 10:27:25.628961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629043 master-1 kubenswrapper[4771]: I1011 10:27:25.629003 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629161 master-1 kubenswrapper[4771]: I1011 10:27:25.629101 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629224 master-1 kubenswrapper[4771]: I1011 10:27:25.629170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629287 master-1 kubenswrapper[4771]: I1011 10:27:25.629202 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42mmb\" (UniqueName: \"kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629287 master-1 kubenswrapper[4771]: I1011 10:27:25.629254 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629445 master-1 kubenswrapper[4771]: I1011 10:27:25.629278 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629445 master-1 kubenswrapper[4771]: I1011 10:27:25.629320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629445 master-1 kubenswrapper[4771]: I1011 10:27:25.629341 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629445 
master-1 kubenswrapper[4771]: I1011 10:27:25.629400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.629445 master-1 kubenswrapper[4771]: I1011 10:27:25.629426 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.637528 master-2 kubenswrapper[4776]: I1011 10:27:25.637489 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.685732 master-1 kubenswrapper[4771]: I1011 10:27:25.685574 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgt7f" event={"ID":"b771285f-4d3c-4a7a-9b62-eb804911a351","Type":"ContainerStarted","Data":"2ae231d265f99ca0d5be6ba2301ffc9f7494c6d013c892b6b113832beeac7c6a"} Oct 11 10:27:25.688422 master-1 kubenswrapper[4771]: I1011 10:27:25.688328 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="e43fead2c3261f1d78ff1e7b98f5daaa5f216da5d07993d10eb96f3476c0730b" exitCode=0 Oct 11 10:27:25.688502 master-1 kubenswrapper[4771]: I1011 10:27:25.688437 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"e43fead2c3261f1d78ff1e7b98f5daaa5f216da5d07993d10eb96f3476c0730b"} Oct 11 10:27:25.690775 master-1 kubenswrapper[4771]: I1011 10:27:25.690195 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/1.log" Oct 11 10:27:25.691011 master-1 kubenswrapper[4771]: I1011 10:27:25.690951 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/0.log" Oct 11 10:27:25.692065 master-1 kubenswrapper[4771]: I1011 10:27:25.691957 4771 generic.go:334] "Generic (PLEG): container finished" podID="e115f8be-9e65-4407-8111-568e5ea8ac1b" containerID="60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4" exitCode=1 Oct 11 10:27:25.692065 master-1 kubenswrapper[4771]: I1011 10:27:25.692013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerDied","Data":"60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4"} Oct 11 10:27:25.692065 master-1 kubenswrapper[4771]: I1011 10:27:25.692070 4771 scope.go:117] "RemoveContainer" containerID="5585d78883912bb8eeedc837fe074ce0bf4bdc8294ad85bf3cadcef69368c941" Oct 11 10:27:25.692935 master-1 kubenswrapper[4771]: I1011 10:27:25.692653 4771 scope.go:117] "RemoveContainer" containerID="60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4" Oct 11 10:27:25.692935 master-1 kubenswrapper[4771]: E1011 10:27:25.692898 4771 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:27:25.708229 master-1 kubenswrapper[4771]: I1011 10:27:25.708162 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" Oct 11 10:27:25.710465 master-2 kubenswrapper[4776]: I1011 10:27:25.710407 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.719113 master-2 kubenswrapper[4776]: W1011 10:27:25.719059 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9727aec8_dcb9_40a6_9d8d_2a61f37b6503.slice/crio-03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7 WatchSource:0}: Error finding container 03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7: Status 404 returned error can't find the container with id 03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7 Oct 11 10:27:25.722856 master-1 kubenswrapper[4771]: I1011 10:27:25.722765 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dgt7f" podStartSLOduration=2.33034688 podStartE2EDuration="13.72274408s" podCreationTimestamp="2025-10-11 10:27:12 +0000 UTC" firstStartedPulling="2025-10-11 10:27:13.344476928 +0000 UTC m=+65.318703399" lastFinishedPulling="2025-10-11 10:27:24.736874158 +0000 UTC m=+76.711100599" observedRunningTime="2025-10-11 10:27:25.706126749 +0000 UTC m=+77.680353270" 
watchObservedRunningTime="2025-10-11 10:27:25.72274408 +0000 UTC m=+77.696970531" Oct 11 10:27:25.723088 master-2 kubenswrapper[4776]: I1011 10:27:25.723067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723094 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723113 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723127 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723166 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723180 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723194 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723221 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723235 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723248 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723261 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723275 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: 
I1011 10:27:25.723290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723314 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723329 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723360 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: 
\"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723376 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.724343 master-1 kubenswrapper[4771]: W1011 10:27:25.724235 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65a56b0_5ee8_4429_8fe5_b33a6f29bc79.slice/crio-594577b822dd9370218d1cfefe2fcdad44b1be28decab023f29fc49ccb33659e WatchSource:0}: Error finding container 594577b822dd9370218d1cfefe2fcdad44b1be28decab023f29fc49ccb33659e: Status 404 returned error can't find the container with id 594577b822dd9370218d1cfefe2fcdad44b1be28decab023f29fc49ccb33659e Oct 11 10:27:25.729897 master-1 kubenswrapper[4771]: I1011 10:27:25.729823 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.729986 master-1 kubenswrapper[4771]: I1011 10:27:25.729920 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.729986 master-1 kubenswrapper[4771]: I1011 10:27:25.729929 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.729986 master-1 kubenswrapper[4771]: I1011 10:27:25.729973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42mmb\" (UniqueName: \"kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730236 master-1 kubenswrapper[4771]: I1011 10:27:25.730010 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730236 master-1 kubenswrapper[4771]: I1011 10:27:25.730080 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730236 master-1 kubenswrapper[4771]: I1011 10:27:25.730129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730236 master-1 kubenswrapper[4771]: I1011 10:27:25.730176 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730236 master-1 kubenswrapper[4771]: I1011 10:27:25.730227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730407 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730285 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730467 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730629 master-1 kubenswrapper[4771]: I1011 10:27:25.730533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730698 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730928 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.730984 master-1 kubenswrapper[4771]: I1011 10:27:25.730968 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.730986 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731205 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731304 master-1 kubenswrapper[4771]: I1011 10:27:25.731268 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731730 master-1 kubenswrapper[4771]: I1011 10:27:25.731451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731730 master-1 kubenswrapper[4771]: I1011 10:27:25.731570 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731730 master-1 kubenswrapper[4771]: I1011 10:27:25.731700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731900 master-1 kubenswrapper[4771]: I1011 10:27:25.730930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731900 master-1 kubenswrapper[4771]: I1011 10:27:25.731835 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.731900 master-1 kubenswrapper[4771]: I1011 10:27:25.731896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.732426 master-1 kubenswrapper[4771]: I1011 10:27:25.732175 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.732426 master-1 kubenswrapper[4771]: I1011 10:27:25.732221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.732426 master-1 kubenswrapper[4771]: I1011 10:27:25.732308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.732847 master-1 kubenswrapper[4771]: I1011 10:27:25.732782 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.734333 master-1 kubenswrapper[4771]: I1011 10:27:25.734277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.750140 master-1 kubenswrapper[4771]: I1011 10:27:25.750069 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42mmb\" (UniqueName: \"kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb\") pod \"ovnkube-node-fl2bs\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.823716 master-2 kubenswrapper[4776]: I1011 10:27:25.823659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823788 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823800 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823822 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823834 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823807 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823848 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823861 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823865 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823881 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823905 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823924 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823941 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823967 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823997 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824024 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824031 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824038 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824056 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824069 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824083 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824505 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824512 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824549 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824056 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824567 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.823940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824581 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.827526 master-2 kubenswrapper[4776]: I1011 10:27:25.827486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.842268 master-2 kubenswrapper[4776]: I1011 10:27:25.842203 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.912844 master-2 kubenswrapper[4776]: I1011 10:27:25.912774 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.915904 master-1 kubenswrapper[4771]: I1011 10:27:25.915780 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:25.922030 master-2 kubenswrapper[4776]: W1011 10:27:25.921974 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc908109b_a45d_464d_9ea0_f0823d2cc341.slice/crio-8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d WatchSource:0}: Error finding container 8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d: Status 404 returned error can't find the container with id 8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d Oct 11 10:27:26.058106 master-2 kubenswrapper[4776]: I1011 10:27:26.057987 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:26.058726 master-2 kubenswrapper[4776]: E1011 10:27:26.058640 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:26.264654 master-2 kubenswrapper[4776]: I1011 10:27:26.264565 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="1ec550f2e5d0a274b3db5f617c5df7975cae753de1a01006d83a446ac870ae10" exitCode=0 Oct 11 10:27:26.264654 master-2 kubenswrapper[4776]: I1011 10:27:26.264631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"1ec550f2e5d0a274b3db5f617c5df7975cae753de1a01006d83a446ac870ae10"} Oct 11 10:27:26.266520 master-2 kubenswrapper[4776]: I1011 10:27:26.265559 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d"} Oct 11 10:27:26.267395 master-2 kubenswrapper[4776]: I1011 10:27:26.267329 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"dc031f4dd9db1fa90da21ee773e117ca3278e0a2094f12e77f4c3fd673ee09ad"} Oct 11 10:27:26.267395 master-2 kubenswrapper[4776]: I1011 10:27:26.267389 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7"} Oct 11 10:27:26.699042 master-1 kubenswrapper[4771]: I1011 10:27:26.698510 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/1.log" Oct 11 10:27:26.711057 master-1 kubenswrapper[4771]: I1011 10:27:26.701282 4771 scope.go:117] "RemoveContainer" containerID="60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4" Oct 11 10:27:26.711057 master-1 kubenswrapper[4771]: E1011 10:27:26.701564 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:27:26.711057 master-1 kubenswrapper[4771]: I1011 10:27:26.702245 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"e331eefac7c84cea9d904666f2942a1490b780d5057ea46b8fe92374c1ddb75a"} Oct 11 10:27:26.711057 master-1 kubenswrapper[4771]: I1011 10:27:26.704063 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" event={"ID":"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79","Type":"ContainerStarted","Data":"6943eb663574984dc2c6e6a328d40af44972ef8e57e6085814bb716f16c316dc"} Oct 11 10:27:26.711057 master-1 kubenswrapper[4771]: I1011 10:27:26.704102 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" event={"ID":"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79","Type":"ContainerStarted","Data":"594577b822dd9370218d1cfefe2fcdad44b1be28decab023f29fc49ccb33659e"} Oct 11 
10:27:27.436656 master-1 kubenswrapper[4771]: I1011 10:27:27.436591 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:27.436872 master-1 kubenswrapper[4771]: E1011 10:27:27.436814 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:27.710449 master-1 kubenswrapper[4771]: I1011 10:27:27.710179 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="a9cdbd63dbf81d2a97c21e49b0386b8b1e8f6d6a57c9051435ab9ea7049d83ea" exitCode=0 Oct 11 10:27:27.710449 master-1 kubenswrapper[4771]: I1011 10:27:27.710255 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"a9cdbd63dbf81d2a97c21e49b0386b8b1e8f6d6a57c9051435ab9ea7049d83ea"} Oct 11 10:27:28.058385 master-2 kubenswrapper[4776]: I1011 10:27:28.058341 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:28.058948 master-2 kubenswrapper[4776]: E1011 10:27:28.058480 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:28.276112 master-2 kubenswrapper[4776]: I1011 10:27:28.276055 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="2c03c56b8e58cc1b664bb13193f85f9add5b62239f04b27072c5b46cf41377b7" exitCode=0 Oct 11 10:27:28.276112 master-2 kubenswrapper[4776]: I1011 10:27:28.276097 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"2c03c56b8e58cc1b664bb13193f85f9add5b62239f04b27072c5b46cf41377b7"} Oct 11 10:27:28.587135 master-1 kubenswrapper[4771]: I1011 10:27:28.587047 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-4pm7x"] Oct 11 10:27:28.587706 master-1 kubenswrapper[4771]: I1011 10:27:28.587670 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:28.587836 master-1 kubenswrapper[4771]: E1011 10:27:28.587797 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:28.593630 master-2 kubenswrapper[4776]: I1011 10:27:28.593590 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"] Oct 11 10:27:28.593931 master-2 kubenswrapper[4776]: I1011 10:27:28.593910 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.594009 master-2 kubenswrapper[4776]: E1011 10:27:28.593976 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:28.656264 master-1 kubenswrapper[4771]: I1011 10:27:28.656208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:28.661866 master-2 kubenswrapper[4776]: I1011 10:27:28.661799 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.757009 master-1 kubenswrapper[4771]: I1011 10:27:28.756964 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:28.763097 master-2 kubenswrapper[4776]: I1011 10:27:28.763042 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.775920 master-1 kubenswrapper[4771]: E1011 10:27:28.775850 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:28.775920 master-1 kubenswrapper[4771]: E1011 10:27:28.775896 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:28.775920 master-1 kubenswrapper[4771]: E1011 10:27:28.775914 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:28.776146 master-1 kubenswrapper[4771]: E1011 10:27:28.775975 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.275955453 +0000 UTC m=+81.250181894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788444 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788493 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788515 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:28.788756 master-2 kubenswrapper[4776]: E1011 10:27:28.788603 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.288580426 +0000 UTC m=+84.073007165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.361228 master-1 kubenswrapper[4771]: I1011 10:27:29.361131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:29.361533 master-1 kubenswrapper[4771]: E1011 10:27:29.361379 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:29.361533 master-1 kubenswrapper[4771]: E1011 10:27:29.361407 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:29.361533 master-1 kubenswrapper[4771]: E1011 10:27:29.361424 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.361533 master-1 kubenswrapper[4771]: E1011 10:27:29.361520 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" 
failed. No retries permitted until 2025-10-11 10:27:30.361499312 +0000 UTC m=+82.335725773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.366000 master-2 kubenswrapper[4776]: I1011 10:27:29.365890 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366253 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366290 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366308 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366369 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:30.366353874 +0000 UTC m=+85.150780583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.436963 master-1 kubenswrapper[4771]: I1011 10:27:29.436849 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:29.437128 master-1 kubenswrapper[4771]: E1011 10:27:29.437045 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:29.719258 master-1 kubenswrapper[4771]: I1011 10:27:29.719121 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="38c544c0979325797108d1727473710ae3aa96ae14dc1bedbecd3ddf365a5f0e" exitCode=0 Oct 11 10:27:29.719258 master-1 kubenswrapper[4771]: I1011 10:27:29.719178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"38c544c0979325797108d1727473710ae3aa96ae14dc1bedbecd3ddf365a5f0e"} Oct 11 10:27:29.769730 master-2 kubenswrapper[4776]: I1011 10:27:29.769648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:29.769899 master-2 kubenswrapper[4776]: E1011 10:27:29.769784 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:29.769899 master-2 kubenswrapper[4776]: E1011 10:27:29.769848 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:45.769827152 +0000 UTC m=+100.554253861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:29.865582 master-1 kubenswrapper[4771]: I1011 10:27:29.865536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:29.866109 master-1 kubenswrapper[4771]: E1011 10:27:29.865753 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:29.866109 master-1 kubenswrapper[4771]: E1011 10:27:29.865867 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:27:45.865838056 +0000 UTC m=+97.840064527 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:30.057642 master-2 kubenswrapper[4776]: I1011 10:27:30.057510 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:30.057642 master-2 kubenswrapper[4776]: I1011 10:27:30.057619 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:30.057959 master-2 kubenswrapper[4776]: E1011 10:27:30.057772 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:30.057959 master-2 kubenswrapper[4776]: E1011 10:27:30.057879 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:30.368787 master-1 kubenswrapper[4771]: I1011 10:27:30.368731 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:30.369068 master-1 kubenswrapper[4771]: E1011 10:27:30.368899 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:30.369068 master-1 kubenswrapper[4771]: E1011 10:27:30.368948 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:30.369068 master-1 
kubenswrapper[4771]: E1011 10:27:30.368964 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:30.369068 master-1 kubenswrapper[4771]: E1011 10:27:30.369032 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:32.369005918 +0000 UTC m=+84.343232359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: I1011 10:27:30.374179 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374330 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374349 4776 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374358 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374405 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:32.37439136 +0000 UTC m=+87.158818069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:30.436927 master-1 kubenswrapper[4771]: I1011 10:27:30.436872 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:30.437041 master-1 kubenswrapper[4771]: E1011 10:27:30.436972 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:31.194376 master-2 kubenswrapper[4776]: I1011 10:27:31.194306 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vx55j"] Oct 11 10:27:31.194719 master-2 kubenswrapper[4776]: I1011 10:27:31.194665 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.197947 master-2 kubenswrapper[4776]: I1011 10:27:31.197891 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 10:27:31.198010 master-2 kubenswrapper[4776]: I1011 10:27:31.197891 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 11 10:27:31.198366 master-2 kubenswrapper[4776]: I1011 10:27:31.198326 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 11 10:27:31.198425 master-2 kubenswrapper[4776]: I1011 10:27:31.198367 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 10:27:31.198875 master-2 kubenswrapper[4776]: I1011 10:27:31.198838 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 10:27:31.211555 master-1 kubenswrapper[4771]: I1011 10:27:31.211499 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-sk5cm"] Oct 11 10:27:31.212062 master-1 kubenswrapper[4771]: I1011 10:27:31.211836 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.215379 master-1 kubenswrapper[4771]: I1011 10:27:31.215329 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 11 10:27:31.215459 master-1 kubenswrapper[4771]: I1011 10:27:31.215419 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 10:27:31.216217 master-1 kubenswrapper[4771]: I1011 10:27:31.216177 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 10:27:31.216277 master-1 kubenswrapper[4771]: I1011 10:27:31.216220 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 10:27:31.216326 master-1 kubenswrapper[4771]: I1011 10:27:31.216300 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 11 10:27:31.283492 master-2 kubenswrapper[4776]: I1011 10:27:31.283444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 
10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283541 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283559 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.285387 master-1 kubenswrapper[4771]: I1011 10:27:31.274949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e03dddd-4197-40ae-91f1-7e83f90dbd58-webhook-cert\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.285387 master-1 kubenswrapper[4771]: I1011 10:27:31.275061 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-ovnkube-identity-cm\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.285387 master-1 kubenswrapper[4771]: I1011 10:27:31.275126 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn9ks\" (UniqueName: 
\"kubernetes.io/projected/0e03dddd-4197-40ae-91f1-7e83f90dbd58-kube-api-access-kn9ks\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.285387 master-1 kubenswrapper[4771]: I1011 10:27:31.275239 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-env-overrides\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.376257 master-1 kubenswrapper[4771]: I1011 10:27:31.376191 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e03dddd-4197-40ae-91f1-7e83f90dbd58-webhook-cert\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.376257 master-1 kubenswrapper[4771]: I1011 10:27:31.376242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-ovnkube-identity-cm\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.376257 master-1 kubenswrapper[4771]: I1011 10:27:31.376275 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn9ks\" (UniqueName: \"kubernetes.io/projected/0e03dddd-4197-40ae-91f1-7e83f90dbd58-kube-api-access-kn9ks\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.376620 master-1 
kubenswrapper[4771]: I1011 10:27:31.376304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-env-overrides\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.377115 master-1 kubenswrapper[4771]: I1011 10:27:31.377079 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-env-overrides\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.377843 master-1 kubenswrapper[4771]: I1011 10:27:31.377791 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e03dddd-4197-40ae-91f1-7e83f90dbd58-ovnkube-identity-cm\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.380270 master-1 kubenswrapper[4771]: I1011 10:27:31.380207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e03dddd-4197-40ae-91f1-7e83f90dbd58-webhook-cert\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " 
pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383868 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.384788 master-2 kubenswrapper[4776]: I1011 10:27:31.384607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.385757 master-2 kubenswrapper[4776]: I1011 10:27:31.385666 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.390505 master-2 kubenswrapper[4776]: I1011 10:27:31.390446 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.394393 master-1 kubenswrapper[4771]: I1011 10:27:31.394340 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn9ks\" (UniqueName: \"kubernetes.io/projected/0e03dddd-4197-40ae-91f1-7e83f90dbd58-kube-api-access-kn9ks\") pod \"network-node-identity-sk5cm\" (UID: \"0e03dddd-4197-40ae-91f1-7e83f90dbd58\") " pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.413382 master-2 kubenswrapper[4776]: I1011 10:27:31.413291 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.436106 master-1 kubenswrapper[4771]: I1011 10:27:31.436044 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:31.436235 master-1 kubenswrapper[4771]: E1011 10:27:31.436192 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:31.507003 master-2 kubenswrapper[4776]: I1011 10:27:31.506844 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.527158 master-1 kubenswrapper[4771]: I1011 10:27:31.526965 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-sk5cm" Oct 11 10:27:31.541396 master-1 kubenswrapper[4771]: W1011 10:27:31.541328 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e03dddd_4197_40ae_91f1_7e83f90dbd58.slice/crio-0acd29c92f7521b4c20ddb03ce7c41f516fe6d5ba831200b335c125fc0e7499f WatchSource:0}: Error finding container 0acd29c92f7521b4c20ddb03ce7c41f516fe6d5ba831200b335c125fc0e7499f: Status 404 returned error can't find the container with id 0acd29c92f7521b4c20ddb03ce7c41f516fe6d5ba831200b335c125fc0e7499f Oct 11 10:27:31.723712 master-1 kubenswrapper[4771]: I1011 10:27:31.723649 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-sk5cm" event={"ID":"0e03dddd-4197-40ae-91f1-7e83f90dbd58","Type":"ContainerStarted","Data":"0acd29c92f7521b4c20ddb03ce7c41f516fe6d5ba831200b335c125fc0e7499f"} Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: I1011 10:27:32.058363 4776 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: E1011 10:27:32.058533 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: I1011 10:27:32.058553 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:32.058992 master-2 kubenswrapper[4776]: E1011 10:27:32.058756 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:32.284465 master-2 kubenswrapper[4776]: I1011 10:27:32.284410 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"fe91f917532a746fbbdfaf481e8f82717686c7ce90037fe05ff4e042c1b0371d"} Oct 11 10:27:32.384851 master-1 kubenswrapper[4771]: I1011 10:27:32.384757 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:32.385673 master-1 kubenswrapper[4771]: E1011 10:27:32.384978 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:32.385673 master-1 kubenswrapper[4771]: E1011 10:27:32.385006 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:32.385673 master-1 kubenswrapper[4771]: E1011 10:27:32.385026 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:32.385673 master-1 kubenswrapper[4771]: E1011 10:27:32.385098 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh 
podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:36.385075078 +0000 UTC m=+88.359301559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:32.391398 master-2 kubenswrapper[4776]: I1011 10:27:32.391274 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391438 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391455 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391466 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391515 4776 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:36.391502234 +0000 UTC m=+91.175928943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:32.437931 master-1 kubenswrapper[4771]: I1011 10:27:32.437858 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:32.438576 master-1 kubenswrapper[4771]: E1011 10:27:32.438520 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:33.436072 master-1 kubenswrapper[4771]: I1011 10:27:33.435987 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:33.436890 master-1 kubenswrapper[4771]: E1011 10:27:33.436201 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:34.060130 master-2 kubenswrapper[4776]: I1011 10:27:34.060089 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: E1011 10:27:34.060188 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: I1011 10:27:34.060546 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: E1011 10:27:34.060610 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:34.436730 master-1 kubenswrapper[4771]: I1011 10:27:34.436664 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:34.437223 master-1 kubenswrapper[4771]: E1011 10:27:34.436820 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:35.436983 master-1 kubenswrapper[4771]: I1011 10:27:35.436906 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:35.437672 master-1 kubenswrapper[4771]: E1011 10:27:35.437084 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:36.058034 master-2 kubenswrapper[4776]: I1011 10:27:36.057988 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:36.058034 master-2 kubenswrapper[4776]: I1011 10:27:36.058032 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:36.058771 master-2 kubenswrapper[4776]: E1011 10:27:36.058597 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:36.058808 master-2 kubenswrapper[4776]: E1011 10:27:36.058765 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:36.417998 master-1 kubenswrapper[4771]: I1011 10:27:36.417942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:36.418220 master-1 kubenswrapper[4771]: E1011 10:27:36.418123 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:27:36.418220 master-1 kubenswrapper[4771]: E1011 10:27:36.418144 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:27:36.418220 master-1 kubenswrapper[4771]: E1011 10:27:36.418158 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:36.418345 master-1 kubenswrapper[4771]: E1011 10:27:36.418245 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:44.418204486 +0000 UTC m=+96.392430937 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:36.420581 master-2 kubenswrapper[4776]: I1011 10:27:36.420522 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:36.420966 master-2 kubenswrapper[4776]: E1011 10:27:36.420899 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:27:36.421012 master-2 kubenswrapper[4776]: E1011 10:27:36.420980 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:27:36.421045 master-2 kubenswrapper[4776]: E1011 10:27:36.421013 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:36.421197 master-2 kubenswrapper[4776]: E1011 10:27:36.421156 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:44.421112039 +0000 UTC m=+99.205538898 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:36.437045 master-1 kubenswrapper[4771]: I1011 10:27:36.437022 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:36.437450 master-1 kubenswrapper[4771]: E1011 10:27:36.437122 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:37.436225 master-1 kubenswrapper[4771]: I1011 10:27:37.436117 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:37.436597 master-1 kubenswrapper[4771]: E1011 10:27:37.436342 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: I1011 10:27:38.058226 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: I1011 10:27:38.058282 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: E1011 10:27:38.058342 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: E1011 10:27:38.058408 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:38.437138 master-1 kubenswrapper[4771]: I1011 10:27:38.436796 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:38.437138 master-1 kubenswrapper[4771]: E1011 10:27:38.437043 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:39.436023 master-1 kubenswrapper[4771]: I1011 10:27:39.435951 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:39.436299 master-1 kubenswrapper[4771]: E1011 10:27:39.436117 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: I1011 10:27:40.057757 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: E1011 10:27:40.057872 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: I1011 10:27:40.057765 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: E1011 10:27:40.058081 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:40.437084 master-1 kubenswrapper[4771]: I1011 10:27:40.436995 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:40.437803 master-1 kubenswrapper[4771]: E1011 10:27:40.437147 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:41.436372 master-1 kubenswrapper[4771]: I1011 10:27:41.436307 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:41.436743 master-1 kubenswrapper[4771]: E1011 10:27:41.436465 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:42.057980 master-2 kubenswrapper[4776]: I1011 10:27:42.057912 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: I1011 10:27:42.057912 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: E1011 10:27:42.058115 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: E1011 10:27:42.058270 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:42.436084 master-1 kubenswrapper[4771]: I1011 10:27:42.435963 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:42.437004 master-1 kubenswrapper[4771]: E1011 10:27:42.436948 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:42.437943 master-1 kubenswrapper[4771]: I1011 10:27:42.437882 4771 scope.go:117] "RemoveContainer" containerID="60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4"
Oct 11 10:27:42.756833 master-1 kubenswrapper[4771]: I1011 10:27:42.756766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-sk5cm" event={"ID":"0e03dddd-4197-40ae-91f1-7e83f90dbd58","Type":"ContainerStarted","Data":"c93dfaf9a8b9fa7850e31e158a74ae1fbf85ec41153c0883cb5064b10872afdb"}
Oct 11 10:27:42.756833 master-1 kubenswrapper[4771]: I1011 10:27:42.756832 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-sk5cm" event={"ID":"0e03dddd-4197-40ae-91f1-7e83f90dbd58","Type":"ContainerStarted","Data":"022f8f384bb7270fddd7540b511c399935f2c7dabee6416c3281661a660bc23f"}
Oct 11 10:27:42.762620 master-1 kubenswrapper[4771]: I1011 10:27:42.762574 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="e16f8a87364f9124b8fa2ebe666b652656bef4fcb7b7b1b7a185fbeea9bd8939" exitCode=0
Oct 11 10:27:42.762804 master-1 kubenswrapper[4771]: I1011 10:27:42.762754 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"e16f8a87364f9124b8fa2ebe666b652656bef4fcb7b7b1b7a185fbeea9bd8939"}
Oct 11 10:27:42.766915 master-1 kubenswrapper[4771]: I1011 10:27:42.766856 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/1.log"
Oct 11 10:27:42.769133 master-1 kubenswrapper[4771]: I1011 10:27:42.767935 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerStarted","Data":"d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829"}
Oct 11 10:27:42.770532 master-1 kubenswrapper[4771]: I1011 10:27:42.770484 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce" exitCode=0
Oct 11 10:27:42.770652 master-1 kubenswrapper[4771]: I1011 10:27:42.770621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"}
Oct 11 10:27:42.774019 master-1 kubenswrapper[4771]: I1011 10:27:42.773962 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" event={"ID":"a65a56b0-5ee8-4429-8fe5-b33a6f29bc79","Type":"ContainerStarted","Data":"fd429044454b120b7f284bbe76d575ec34a3889f8f0e590f4edb09bf076942ee"}
Oct 11 10:27:42.775421 master-1 kubenswrapper[4771]: I1011 10:27:42.775326 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-sk5cm" podStartSLOduration=1.029117293 podStartE2EDuration="11.775306544s" podCreationTimestamp="2025-10-11 10:27:31 +0000 UTC" firstStartedPulling="2025-10-11 10:27:31.543288911 +0000 UTC m=+83.517515352" lastFinishedPulling="2025-10-11 10:27:42.289478122 +0000 UTC m=+94.263704603" observedRunningTime="2025-10-11 10:27:42.775305684 +0000 UTC m=+94.749532195" watchObservedRunningTime="2025-10-11 10:27:42.775306544 +0000 UTC m=+94.749533045"
Oct 11 10:27:42.795347 master-1 kubenswrapper[4771]: I1011 10:27:42.795273 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podStartSLOduration=24.795252733 podStartE2EDuration="24.795252733s" podCreationTimestamp="2025-10-11 10:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:27:42.794999546 +0000 UTC m=+94.769226057" watchObservedRunningTime="2025-10-11 10:27:42.795252733 +0000 UTC m=+94.769479234"
Oct 11 10:27:42.878822 master-1 kubenswrapper[4771]: I1011 10:27:42.878727 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb" podStartSLOduration=1.6528184700000002 podStartE2EDuration="17.878698077s" podCreationTimestamp="2025-10-11 10:27:25 +0000 UTC" firstStartedPulling="2025-10-11 10:27:26.002937135 +0000 UTC m=+77.977163616" lastFinishedPulling="2025-10-11 10:27:42.228816742 +0000 UTC m=+94.203043223" observedRunningTime="2025-10-11 10:27:42.832908722 +0000 UTC m=+94.807135223" watchObservedRunningTime="2025-10-11 10:27:42.878698077 +0000 UTC m=+94.852924568"
Oct 11 10:27:43.436555 master-1 kubenswrapper[4771]: I1011 10:27:43.436064 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:43.437964 master-1 kubenswrapper[4771]: E1011 10:27:43.436689 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:43.783505 master-1 kubenswrapper[4771]: I1011 10:27:43.783409 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b4dff81-4eaa-422f-8de9-d6133a8b2016" containerID="9bbb4a99213e3882e40e0fb093245b500c5a29883ced07500ad74a177d298364" exitCode=0
Oct 11 10:27:43.783698 master-1 kubenswrapper[4771]: I1011 10:27:43.783543 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerDied","Data":"9bbb4a99213e3882e40e0fb093245b500c5a29883ced07500ad74a177d298364"}
Oct 11 10:27:43.786540 master-1 kubenswrapper[4771]: I1011 10:27:43.786494 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/2.log"
Oct 11 10:27:43.787598 master-1 kubenswrapper[4771]: I1011 10:27:43.787530 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/1.log"
Oct 11 10:27:43.788611 master-1 kubenswrapper[4771]: I1011 10:27:43.788568 4771 generic.go:334] "Generic (PLEG): container finished" podID="e115f8be-9e65-4407-8111-568e5ea8ac1b" containerID="d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829" exitCode=1
Oct 11 10:27:43.788666 master-1 kubenswrapper[4771]: I1011 10:27:43.788632 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerDied","Data":"d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829"}
Oct 11 10:27:43.788724 master-1 kubenswrapper[4771]: I1011 10:27:43.788702 4771 scope.go:117] "RemoveContainer" containerID="60a93fb0643f4c652f352528fa2f68ed301ee9978f9f9a0561174149a6bb77e4"
Oct 11 10:27:43.790216 master-1 kubenswrapper[4771]: I1011 10:27:43.790164 4771 scope.go:117] "RemoveContainer" containerID="d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829"
Oct 11 10:27:43.790601 master-1 kubenswrapper[4771]: E1011 10:27:43.790495 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b"
Oct 11 10:27:43.795099 master-1 kubenswrapper[4771]: I1011 10:27:43.795056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"}
Oct 11 10:27:43.795170 master-1 kubenswrapper[4771]: I1011 10:27:43.795103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"}
Oct 11 10:27:43.795170 master-1 kubenswrapper[4771]: I1011 10:27:43.795127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"}
Oct 11 10:27:43.795170 master-1 kubenswrapper[4771]: I1011 10:27:43.795145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"}
Oct 11 10:27:43.795170 master-1 kubenswrapper[4771]: I1011 10:27:43.795165 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"}
Oct 11 10:27:43.795390 master-1 kubenswrapper[4771]: I1011 10:27:43.795182 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"}
Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: I1011 10:27:44.057658 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: E1011 10:27:44.057787 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: I1011 10:27:44.057658 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: E1011 10:27:44.057950 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:44.313056 master-2 kubenswrapper[4776]: I1011 10:27:44.312945 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d"}
Oct 11 10:27:44.436884 master-1 kubenswrapper[4771]: I1011 10:27:44.436594 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:44.436884 master-1 kubenswrapper[4771]: E1011 10:27:44.436751 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4"
Oct 11 10:27:44.482208 master-2 kubenswrapper[4776]: I1011 10:27:44.482099 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482291 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482321 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482336 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482394 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:00.482378496 +0000 UTC m=+115.266805215 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:44.506672 master-1 kubenswrapper[4771]: I1011 10:27:44.506558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:27:44.506952 master-1 kubenswrapper[4771]: E1011 10:27:44.506752 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:27:44.506952 master-1 kubenswrapper[4771]: E1011 10:27:44.506785 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:27:44.506952 master-1 kubenswrapper[4771]: E1011 10:27:44.506807 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:44.506952 master-1 kubenswrapper[4771]: E1011 10:27:44.506901 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:00.506868585 +0000 UTC m=+112.481095066 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:27:44.799102 master-1 kubenswrapper[4771]: I1011 10:27:44.798919 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/2.log"
Oct 11 10:27:44.803956 master-1 kubenswrapper[4771]: I1011 10:27:44.803891 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" event={"ID":"0b4dff81-4eaa-422f-8de9-d6133a8b2016","Type":"ContainerStarted","Data":"9c7f4a451768bb810cb6fa39c38e6c02760cfe7f21dbe9bd7c2ba963f6a4e616"}
Oct 11 10:27:45.317837 master-2 kubenswrapper[4776]: I1011 10:27:45.317730 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d" exitCode=0
Oct 11 10:27:45.317837 master-2 kubenswrapper[4776]: I1011 10:27:45.317780 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d"}
Oct 11 10:27:45.436530 master-1 kubenswrapper[4771]: I1011 10:27:45.436378 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:45.436831 master-1 kubenswrapper[4771]: E1011 10:27:45.436552 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b"
Oct 11 10:27:45.790579 master-2 kubenswrapper[4776]: I1011 10:27:45.790509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:45.790846 master-2 kubenswrapper[4776]: E1011 10:27:45.790732 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:27:45.790846 master-2 kubenswrapper[4776]: E1011 10:27:45.790794 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:28:17.790775633 +0000 UTC m=+132.575202352 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:27:45.812656 master-1 kubenswrapper[4771]: I1011 10:27:45.812594 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"}
Oct 11 10:27:45.919569 master-1 kubenswrapper[4771]: I1011 10:27:45.919478 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:27:45.919752 master-1 kubenswrapper[4771]: E1011 10:27:45.919676 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:27:45.919845 master-1 kubenswrapper[4771]: E1011 10:27:45.919814 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:17.919778928 +0000 UTC m=+129.894005399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:27:46.058215 master-2 kubenswrapper[4776]: I1011 10:27:46.058165 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:46.058363 master-2 kubenswrapper[4776]: I1011 10:27:46.058175 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:46.059032 master-2 kubenswrapper[4776]: E1011 10:27:46.058979 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:46.059190 master-2 kubenswrapper[4776]: E1011 10:27:46.059164 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:46.322915 master-2 kubenswrapper[4776]: I1011 10:27:46.322795 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"266e9dcc343bc1bf1b6ce1eaa05b6e5c2065e68897c04e6301744ef3e5b512b9"} Oct 11 10:27:46.322915 master-2 kubenswrapper[4776]: I1011 10:27:46.322841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"ae9f21a0d22bdea1c203b25b4cda8a8d2ad2d414f845e4eeb81d1d50f28205a9"} Oct 11 10:27:46.327607 master-2 kubenswrapper[4776]: I1011 10:27:46.327547 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="6cf8c8572314a0e94c28fae95fa2712ce9abd3b4e7ac60073b5b10fdec3a1b47" exitCode=0 Oct 11 10:27:46.327762 master-2 kubenswrapper[4776]: I1011 10:27:46.327713 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"6cf8c8572314a0e94c28fae95fa2712ce9abd3b4e7ac60073b5b10fdec3a1b47"} Oct 11 10:27:46.329902 master-2 kubenswrapper[4776]: I1011 10:27:46.329848 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" exitCode=0 Oct 11 10:27:46.329970 master-2 kubenswrapper[4776]: I1011 10:27:46.329927 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" 
event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf"} Oct 11 10:27:46.331586 master-2 kubenswrapper[4776]: I1011 10:27:46.331548 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"e158d51773ea3d54a9f7af87b30a23cf17dc3b04e477c11473ef17096d07a719"} Oct 11 10:27:46.339298 master-2 kubenswrapper[4776]: I1011 10:27:46.339198 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-vx55j" podStartSLOduration=0.972959232 podStartE2EDuration="15.339173244s" podCreationTimestamp="2025-10-11 10:27:31 +0000 UTC" firstStartedPulling="2025-10-11 10:27:31.519036221 +0000 UTC m=+86.303462930" lastFinishedPulling="2025-10-11 10:27:45.885250223 +0000 UTC m=+100.669676942" observedRunningTime="2025-10-11 10:27:46.338479794 +0000 UTC m=+101.122906553" watchObservedRunningTime="2025-10-11 10:27:46.339173244 +0000 UTC m=+101.123599993" Oct 11 10:27:46.353668 master-2 kubenswrapper[4776]: I1011 10:27:46.353566 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" podStartSLOduration=1.399101989 podStartE2EDuration="21.353539407s" podCreationTimestamp="2025-10-11 10:27:25 +0000 UTC" firstStartedPulling="2025-10-11 10:27:25.853817268 +0000 UTC m=+80.638243977" lastFinishedPulling="2025-10-11 10:27:45.808254686 +0000 UTC m=+100.592681395" observedRunningTime="2025-10-11 10:27:46.35329733 +0000 UTC m=+101.137724039" watchObservedRunningTime="2025-10-11 10:27:46.353539407 +0000 UTC m=+101.137966146" Oct 11 10:27:46.436020 master-1 kubenswrapper[4771]: I1011 10:27:46.435904 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:46.436423 master-1 kubenswrapper[4771]: E1011 10:27:46.436098 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:47.340483 master-2 kubenswrapper[4776]: I1011 10:27:47.339828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"228e86713ba0cd8b98073c6780ce78979773f3d20eace16d0494588e5185833d"} Oct 11 10:27:47.344711 master-2 kubenswrapper[4776]: I1011 10:27:47.344605 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344805 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344832 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} Oct 11 10:27:47.345133 master-2 kubenswrapper[4776]: I1011 10:27:47.344859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} Oct 11 10:27:47.345133 master-2 kubenswrapper[4776]: I1011 10:27:47.344924 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} Oct 11 10:27:47.360815 master-2 kubenswrapper[4776]: I1011 10:27:47.360700 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" podStartSLOduration=4.608614771 podStartE2EDuration="34.360649678s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:27:13.548307962 +0000 UTC m=+68.332734711" lastFinishedPulling="2025-10-11 10:27:43.300342899 +0000 UTC m=+98.084769618" observedRunningTime="2025-10-11 10:27:47.360139453 +0000 UTC m=+102.144566172" watchObservedRunningTime="2025-10-11 10:27:47.360649678 +0000 UTC m=+102.145076427" Oct 11 10:27:47.436779 master-1 kubenswrapper[4771]: I1011 10:27:47.436697 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:47.437479 master-1 kubenswrapper[4771]: E1011 10:27:47.437025 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:48.058002 master-2 kubenswrapper[4776]: I1011 10:27:48.057932 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:48.058002 master-2 kubenswrapper[4776]: I1011 10:27:48.058011 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:48.058398 master-2 kubenswrapper[4776]: E1011 10:27:48.058119 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:48.058398 master-2 kubenswrapper[4776]: E1011 10:27:48.058334 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:48.437091 master-1 kubenswrapper[4771]: I1011 10:27:48.436602 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:48.438321 master-1 kubenswrapper[4771]: E1011 10:27:48.437293 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:48.828466 master-1 kubenswrapper[4771]: I1011 10:27:48.828302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerStarted","Data":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} Oct 11 10:27:48.828810 master-1 kubenswrapper[4771]: I1011 10:27:48.828764 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:48.828902 master-1 kubenswrapper[4771]: I1011 10:27:48.828828 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:48.828902 master-1 kubenswrapper[4771]: I1011 10:27:48.828852 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:48.864279 master-1 kubenswrapper[4771]: I1011 10:27:48.864187 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lvp6f" podStartSLOduration=8.128615693 podStartE2EDuration="35.864166913s" 
podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:27:14.433405404 +0000 UTC m=+66.407631845" lastFinishedPulling="2025-10-11 10:27:42.168956624 +0000 UTC m=+94.143183065" observedRunningTime="2025-10-11 10:27:44.827564135 +0000 UTC m=+96.801790596" watchObservedRunningTime="2025-10-11 10:27:48.864166913 +0000 UTC m=+100.838393394" Oct 11 10:27:49.354821 master-2 kubenswrapper[4776]: I1011 10:27:49.354626 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} Oct 11 10:27:49.436723 master-1 kubenswrapper[4771]: I1011 10:27:49.436530 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:49.436723 master-1 kubenswrapper[4771]: E1011 10:27:49.436736 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:50.057745 master-2 kubenswrapper[4776]: I1011 10:27:50.057633 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:50.057745 master-2 kubenswrapper[4776]: I1011 10:27:50.057706 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:50.058087 master-2 kubenswrapper[4776]: E1011 10:27:50.057816 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:50.058087 master-2 kubenswrapper[4776]: E1011 10:27:50.057906 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:50.403466 master-1 kubenswrapper[4771]: I1011 10:27:50.403374 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podStartSLOduration=9.038853162 podStartE2EDuration="25.403324257s" podCreationTimestamp="2025-10-11 10:27:25 +0000 UTC" firstStartedPulling="2025-10-11 10:27:25.936260376 +0000 UTC m=+77.910486827" lastFinishedPulling="2025-10-11 10:27:42.300731481 +0000 UTC m=+94.274957922" observedRunningTime="2025-10-11 10:27:48.864030629 +0000 UTC m=+100.838257170" watchObservedRunningTime="2025-10-11 10:27:50.403324257 +0000 UTC m=+102.377550718" Oct 11 10:27:50.404422 master-1 kubenswrapper[4771]: I1011 10:27:50.404318 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-4pm7x"] Oct 11 10:27:50.404596 master-1 kubenswrapper[4771]: I1011 10:27:50.404561 4771 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:50.404799 master-1 kubenswrapper[4771]: E1011 10:27:50.404751 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:50.405295 master-1 kubenswrapper[4771]: I1011 10:27:50.405268 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fgjvw"] Oct 11 10:27:50.405563 master-1 kubenswrapper[4771]: I1011 10:27:50.405544 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:50.405780 master-1 kubenswrapper[4771]: E1011 10:27:50.405754 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:50.426853 master-2 kubenswrapper[4776]: I1011 10:27:50.426778 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:27:50.427292 master-2 kubenswrapper[4776]: E1011 10:27:50.426919 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:50.427292 master-2 kubenswrapper[4776]: E1011 10:27:50.427021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:54.426999384 +0000 UTC m=+169.211426093 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:51.031675 master-1 kubenswrapper[4771]: I1011 10:27:51.031591 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fl2bs"] Oct 11 10:27:51.032868 master-2 kubenswrapper[4776]: I1011 10:27:51.032754 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032025 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-controller" containerID="cri-o://2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032052 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="nbdb" containerID="cri-o://676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032098 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="northd" containerID="cri-o://7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032124 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" 
podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032198 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="sbdb" containerID="cri-o://0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032338 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-node" containerID="cri-o://cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0" gracePeriod=30 Oct 11 10:27:51.034053 master-1 kubenswrapper[4771]: I1011 10:27:51.032338 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-acl-logging" containerID="cri-o://fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79" gracePeriod=30 Oct 11 10:27:51.095884 master-1 kubenswrapper[4771]: I1011 10:27:51.095782 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovnkube-controller" containerID="cri-o://2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc" gracePeriod=30 Oct 11 10:27:51.361631 master-2 kubenswrapper[4776]: I1011 10:27:51.361585 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" 
event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} Oct 11 10:27:51.361942 master-2 kubenswrapper[4776]: I1011 10:27:51.361901 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361954 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361949 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" containerID="cri-o://bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" gracePeriod=30 Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361984 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361987 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" containerID="cri-o://f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362005 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" containerID="cri-o://2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362086 4776 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" containerID="cri-o://5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362142 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" containerID="cri-o://8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" gracePeriod=30 Oct 11 10:27:51.362275 master-2 kubenswrapper[4776]: I1011 10:27:51.362109 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" containerID="cri-o://082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" gracePeriod=30 Oct 11 10:27:51.362275 master-2 kubenswrapper[4776]: I1011 10:27:51.362106 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" gracePeriod=30 Oct 11 10:27:51.366673 master-1 kubenswrapper[4771]: I1011 10:27:51.366614 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovnkube-controller/0.log" Oct 11 10:27:51.369306 master-1 kubenswrapper[4771]: I1011 10:27:51.369264 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:51.370408 master-1 kubenswrapper[4771]: I1011 10:27:51.370321 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/kube-rbac-proxy-node/0.log" Oct 11 10:27:51.371029 master-1 kubenswrapper[4771]: I1011 10:27:51.370984 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovn-acl-logging/0.log" Oct 11 10:27:51.371935 master-1 kubenswrapper[4771]: I1011 10:27:51.371891 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovn-controller/0.log" Oct 11 10:27:51.372540 master-1 kubenswrapper[4771]: I1011 10:27:51.372497 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:51.382859 master-2 kubenswrapper[4776]: I1011 10:27:51.382789 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" containerID="cri-o://fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" gracePeriod=30 Oct 11 10:27:51.391454 master-1 kubenswrapper[4771]: I1011 10:27:51.391240 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391454 master-1 kubenswrapper[4771]: I1011 10:27:51.391343 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391454 master-1 kubenswrapper[4771]: I1011 
10:27:51.391422 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42mmb\" (UniqueName: \"kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391454 master-1 kubenswrapper[4771]: I1011 10:27:51.391458 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391735 master-1 kubenswrapper[4771]: I1011 10:27:51.391470 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash" (OuterVolumeSpecName: "host-slash") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.391735 master-1 kubenswrapper[4771]: I1011 10:27:51.391492 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391735 master-1 kubenswrapper[4771]: I1011 10:27:51.391523 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391735 master-1 kubenswrapper[4771]: I1011 10:27:51.391566 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391735 master-1 kubenswrapper[4771]: I1011 10:27:51.391606 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.391879 master-1 kubenswrapper[4771]: I1011 10:27:51.391761 4771 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-slash\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.391879 master-1 kubenswrapper[4771]: I1011 10:27:51.391836 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log" 
(OuterVolumeSpecName: "node-log") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.391949 master-1 kubenswrapper[4771]: I1011 10:27:51.391878 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:51.391949 master-1 kubenswrapper[4771]: I1011 10:27:51.391905 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.391949 master-1 kubenswrapper[4771]: I1011 10:27:51.391923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.392377 master-1 kubenswrapper[4771]: I1011 10:27:51.392302 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:51.397403 master-1 kubenswrapper[4771]: I1011 10:27:51.397313 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb" (OuterVolumeSpecName: "kube-api-access-42mmb") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "kube-api-access-42mmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:27:51.400841 master-1 kubenswrapper[4771]: I1011 10:27:51.400785 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427331 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p9l4v"] Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427511 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kubecfg-setup" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427533 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kubecfg-setup" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427548 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="northd" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427561 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="northd" Oct 11 10:27:51.428509 
master-1 kubenswrapper[4771]: E1011 10:27:51.427575 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="nbdb" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427588 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="nbdb" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427602 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-acl-logging" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427614 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-acl-logging" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427626 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="sbdb" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427638 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="sbdb" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427651 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovnkube-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427664 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovnkube-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427677 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-node" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427688 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-node" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427701 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427713 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: E1011 10:27:51.427727 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427739 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427795 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427814 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427831 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="nbdb" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427849 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovnkube-controller" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427863 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" 
containerName="kube-rbac-proxy-node" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427878 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="ovn-acl-logging" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427892 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="northd" Oct 11 10:27:51.428509 master-1 kubenswrapper[4771]: I1011 10:27:51.427907 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerName="sbdb" Oct 11 10:27:51.435587 master-1 kubenswrapper[4771]: I1011 10:27:51.435501 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.492394 master-1 kubenswrapper[4771]: I1011 10:27:51.492206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.492766 master-1 kubenswrapper[4771]: I1011 10:27:51.492487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.492766 master-1 kubenswrapper[4771]: I1011 10:27:51.492611 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.492845 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.492896 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.492930 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.492961 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: 
\"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493039 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.492975 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493043 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493079 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493036 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493105 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493122 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493172 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493194 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket" (OuterVolumeSpecName: "log-socket") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.493348 master-1 kubenswrapper[4771]: I1011 10:27:51.493210 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch\") pod \"96c2d0f1-e436-480c-9e34-9068178f9df4\" (UID: \"96c2d0f1-e436-480c-9e34-9068178f9df4\") " Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493163 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493260 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493239 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493424 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-systemd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493500 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm62v\" (UniqueName: \"kubernetes.io/projected/a199ebda-03a4-4154-902b-28397e4bc616-kube-api-access-tm62v\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493551 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-ovn\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-slash\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493620 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-log-socket\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493653 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-systemd-units\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493688 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493819 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.493953 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-var-lib-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.494055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-env-overrides\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.494109 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-kubelet\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.494155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-bin\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.494322 master-1 kubenswrapper[4771]: I1011 10:27:51.494200 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-config\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 
11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494243 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-node-log\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-netns\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-script-lib\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494410 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a199ebda-03a4-4154-902b-28397e4bc616-ovn-node-metrics-cert\") pod \"ovnkube-node-p9l4v\" 
(UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494529 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-etc-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-netd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494667 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-env-overrides\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494697 4771 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-ovn\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494716 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-bin\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494736 4771 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-node-log\") 
on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494755 4771 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-var-lib-openvswitch\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494771 4771 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-log-socket\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494789 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-cni-netd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494806 4771 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-kubelet\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494823 4771 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-etc-openvswitch\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494840 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-script-lib\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494857 4771 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-openvswitch\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494874 4771 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-systemd-units\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494891 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-netns\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494908 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-run-ovn-kubernetes\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.495170 master-1 kubenswrapper[4771]: I1011 10:27:51.494926 4771 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.496467 master-1 kubenswrapper[4771]: I1011 10:27:51.494945 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96c2d0f1-e436-480c-9e34-9068178f9df4-ovnkube-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.496467 master-1 kubenswrapper[4771]: I1011 10:27:51.494962 4771 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96c2d0f1-e436-480c-9e34-9068178f9df4-run-systemd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.496467 master-1 kubenswrapper[4771]: I1011 10:27:51.494979 4771 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-42mmb\" (UniqueName: \"kubernetes.io/projected/96c2d0f1-e436-480c-9e34-9068178f9df4-kube-api-access-42mmb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.498091 master-1 kubenswrapper[4771]: I1011 10:27:51.498025 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "96c2d0f1-e436-480c-9e34-9068178f9df4" (UID: "96c2d0f1-e436-480c-9e34-9068178f9df4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595404 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-var-lib-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595460 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-env-overrides\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595494 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-kubelet\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-node-log\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595559 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-bin\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.595584 master-1 kubenswrapper[4771]: I1011 10:27:51.595573 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-var-lib-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-config\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595707 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-netns\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-kubelet\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595756 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-script-lib\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595751 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-node-log\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595807 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-netns\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595810 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/a199ebda-03a4-4154-902b-28397e4bc616-ovn-node-metrics-cert\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595880 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-bin\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-etc-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595945 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-netd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.595978 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-systemd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.596010 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm62v\" (UniqueName: 
\"kubernetes.io/projected/a199ebda-03a4-4154-902b-28397e4bc616-kube-api-access-tm62v\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.596041 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-slash\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.596072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-ovn\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.596105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-log-socket\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.596520 master-1 kubenswrapper[4771]: I1011 10:27:51.596118 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-etc-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-systemd-units\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596163 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-systemd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596181 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-systemd-units\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.595863 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596261 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-cni-netd\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-log-socket\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596349 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-slash\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596344 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-ovn\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-env-overrides\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596858 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596910 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96c2d0f1-e436-480c-9e34-9068178f9df4-ovn-node-metrics-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a199ebda-03a4-4154-902b-28397e4bc616-run-openvswitch\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.596935 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-config\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.597530 master-1 kubenswrapper[4771]: I1011 10:27:51.597288 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a199ebda-03a4-4154-902b-28397e4bc616-ovnkube-script-lib\") pod 
\"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.600891 master-1 kubenswrapper[4771]: I1011 10:27:51.600839 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a199ebda-03a4-4154-902b-28397e4bc616-ovn-node-metrics-cert\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.623303 master-1 kubenswrapper[4771]: I1011 10:27:51.623200 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm62v\" (UniqueName: \"kubernetes.io/projected/a199ebda-03a4-4154-902b-28397e4bc616-kube-api-access-tm62v\") pod \"ovnkube-node-p9l4v\" (UID: \"a199ebda-03a4-4154-902b-28397e4bc616\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.756428 master-1 kubenswrapper[4771]: I1011 10:27:51.756334 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:51.776365 master-1 kubenswrapper[4771]: W1011 10:27:51.776285 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda199ebda_03a4_4154_902b_28397e4bc616.slice/crio-c4fa7112a50f60bb3eac55810697b99dd94789b499b8f14995a67cee7dbf8ed0 WatchSource:0}: Error finding container c4fa7112a50f60bb3eac55810697b99dd94789b499b8f14995a67cee7dbf8ed0: Status 404 returned error can't find the container with id c4fa7112a50f60bb3eac55810697b99dd94789b499b8f14995a67cee7dbf8ed0 Oct 11 10:27:51.841896 master-1 kubenswrapper[4771]: I1011 10:27:51.841445 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovnkube-controller/0.log" Oct 11 10:27:51.844989 master-1 kubenswrapper[4771]: I1011 10:27:51.844924 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:51.845929 master-1 kubenswrapper[4771]: I1011 10:27:51.845812 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/kube-rbac-proxy-node/0.log" Oct 11 10:27:51.846577 master-1 kubenswrapper[4771]: I1011 10:27:51.846532 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovn-acl-logging/0.log" Oct 11 10:27:51.847350 master-1 kubenswrapper[4771]: I1011 10:27:51.847294 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fl2bs_96c2d0f1-e436-480c-9e34-9068178f9df4/ovn-controller/0.log" Oct 11 10:27:51.848134 master-1 kubenswrapper[4771]: I1011 10:27:51.848064 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc" exitCode=1 Oct 11 10:27:51.848134 master-1 kubenswrapper[4771]: I1011 10:27:51.848121 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1" exitCode=0 Oct 11 10:27:51.848134 master-1 kubenswrapper[4771]: I1011 10:27:51.848138 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853" exitCode=0 Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848140 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848157 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3" exitCode=0 Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848210 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848230 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe" exitCode=143 Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848236 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848262 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848250 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0" exitCode=143 Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848301 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc" Oct 11 10:27:51.848472 master-1 kubenswrapper[4771]: I1011 10:27:51.848302 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79" exitCode=143 Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848503 4771 generic.go:334] "Generic (PLEG): container finished" podID="96c2d0f1-e436-480c-9e34-9068178f9df4" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88" exitCode=143 Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848636 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848655 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848668 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848696 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848713 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848725 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848737 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848205 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848748 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848794 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848806 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848847 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848860 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848871 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848943 4771 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848968 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848981 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.848992 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849038 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849052 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849062 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849072 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849083 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} Oct 11 10:27:51.849092 master-1 kubenswrapper[4771]: I1011 10:27:51.849094 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849111 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fl2bs" event={"ID":"96c2d0f1-e436-480c-9e34-9068178f9df4","Type":"ContainerDied","Data":"e331eefac7c84cea9d904666f2942a1490b780d5057ea46b8fe92374c1ddb75a"} Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849129 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849144 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849157 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849171 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} Oct 
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849188 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"}
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849200 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"}
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849212 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"}
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849223 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"}
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.849235 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"}
Oct 11 10:27:51.850822 master-1 kubenswrapper[4771]: I1011 10:27:51.850435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"c4fa7112a50f60bb3eac55810697b99dd94789b499b8f14995a67cee7dbf8ed0"}
Oct 11 10:27:51.870552 master-1 kubenswrapper[4771]: I1011 10:27:51.870494 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:51.888563 master-1 kubenswrapper[4771]: I1011 10:27:51.888512 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:51.893719 master-1 kubenswrapper[4771]: I1011 10:27:51.893670 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fl2bs"]
Oct 11 10:27:51.898075 master-1 kubenswrapper[4771]: I1011 10:27:51.898002 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fl2bs"]
Oct 11 10:27:51.906820 master-1 kubenswrapper[4771]: I1011 10:27:51.906761 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:51.967967 master-1 kubenswrapper[4771]: I1011 10:27:51.967913 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:51.984039 master-1 kubenswrapper[4771]: I1011 10:27:51.983995 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:51.998677 master-1 kubenswrapper[4771]: I1011 10:27:51.998630 4771 scope.go:117] "RemoveContainer" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.012420 master-1 kubenswrapper[4771]: I1011 10:27:52.012312 4771 scope.go:117] "RemoveContainer" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.029065 master-1 kubenswrapper[4771]: I1011 10:27:52.029006 4771 scope.go:117] "RemoveContainer" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.045892 master-1 kubenswrapper[4771]: I1011 10:27:52.045497 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: E1011 10:27:52.045949 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: I1011 10:27:52.045996 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} err="failed to get container status \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: I1011 10:27:52.046035 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: E1011 10:27:52.046621 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: I1011 10:27:52.046675 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} err="failed to get container status \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist"
Oct 11 10:27:52.046946 master-1 kubenswrapper[4771]: I1011 10:27:52.046714 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.047296 master-1 kubenswrapper[4771]: E1011 10:27:52.047103 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.047296 master-1 kubenswrapper[4771]: I1011 10:27:52.047140 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} err="failed to get container status \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist"
Oct 11 10:27:52.047296 master-1 kubenswrapper[4771]: I1011 10:27:52.047169 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.048021 master-1 kubenswrapper[4771]: E1011 10:27:52.047961 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.048087 master-1 kubenswrapper[4771]: I1011 10:27:52.048010 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} err="failed to get container status \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist"
Oct 11 10:27:52.048087 master-1 kubenswrapper[4771]: I1011 10:27:52.048040 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:52.048582 master-1 kubenswrapper[4771]: E1011 10:27:52.048531 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:52.048667 master-1 kubenswrapper[4771]: I1011 10:27:52.048573 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} err="failed to get container status \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist"
Oct 11 10:27:52.048667 master-1 kubenswrapper[4771]: I1011 10:27:52.048599 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:52.049018 master-1 kubenswrapper[4771]: E1011 10:27:52.048970 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:52.049113 master-1 kubenswrapper[4771]: I1011 10:27:52.049009 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} err="failed to get container status \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist"
Oct 11 10:27:52.049113 master-1 kubenswrapper[4771]: I1011 10:27:52.049035 4771 scope.go:117] "RemoveContainer" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.049590 master-1 kubenswrapper[4771]: E1011 10:27:52.049542 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": container with ID starting with fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79 not found: ID does not exist" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.049658 master-1 kubenswrapper[4771]: I1011 10:27:52.049581 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} err="failed to get container status \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": rpc error: code = NotFound desc = could not find container \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": container with ID starting with fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79 not found: ID does not exist"
Oct 11 10:27:52.049658 master-1 kubenswrapper[4771]: I1011 10:27:52.049610 4771 scope.go:117] "RemoveContainer" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.050069 master-1 kubenswrapper[4771]: E1011 10:27:52.050014 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": container with ID starting with 2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88 not found: ID does not exist" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.050140 master-1 kubenswrapper[4771]: I1011 10:27:52.050064 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} err="failed to get container status \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": rpc error: code = NotFound desc = could not find container \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": container with ID starting with 2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88 not found: ID does not exist"
Oct 11 10:27:52.050140 master-1 kubenswrapper[4771]: I1011 10:27:52.050093 4771 scope.go:117] "RemoveContainer" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.050634 master-1 kubenswrapper[4771]: E1011 10:27:52.050556 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": container with ID starting with 92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce not found: ID does not exist" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.050722 master-1 kubenswrapper[4771]: I1011 10:27:52.050651 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} err="failed to get container status \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": rpc error: code = NotFound desc = could not find container \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": container with ID starting with 92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce not found: ID does not exist"
Oct 11 10:27:52.050784 master-1 kubenswrapper[4771]: I1011 10:27:52.050730 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.051296 master-1 kubenswrapper[4771]: I1011 10:27:52.051232 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} err="failed to get container status \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist"
Oct 11 10:27:52.051296 master-1 kubenswrapper[4771]: I1011 10:27:52.051270 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.051774 master-1 kubenswrapper[4771]: I1011 10:27:52.051707 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} err="failed to get container status \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist"
Oct 11 10:27:52.051774 master-1 kubenswrapper[4771]: I1011 10:27:52.051763 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.052230 master-1 kubenswrapper[4771]: I1011 10:27:52.052177 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} err="failed to get container status \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist"
Oct 11 10:27:52.052230 master-1 kubenswrapper[4771]: I1011 10:27:52.052220 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.052643 master-1 kubenswrapper[4771]: I1011 10:27:52.052582 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} err="failed to get container status \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist"
Oct 11 10:27:52.052643 master-1 kubenswrapper[4771]: I1011 10:27:52.052624 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:52.053155 master-1 kubenswrapper[4771]: I1011 10:27:52.053054 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} err="failed to get container status \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist"
Oct 11 10:27:52.053155 master-1 kubenswrapper[4771]: I1011 10:27:52.053129 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:52.053585 master-1 kubenswrapper[4771]: I1011 10:27:52.053529 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} err="failed to get container status \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist"
Oct 11 10:27:52.053585 master-1 kubenswrapper[4771]: I1011 10:27:52.053572 4771 scope.go:117] "RemoveContainer" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.054073 master-1 kubenswrapper[4771]: I1011 10:27:52.054002 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} err="failed to get container status \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": rpc error: code = NotFound desc = could not find container \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": container with ID starting with fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79 not found: ID does not exist"
Oct 11 10:27:52.054073 master-1 kubenswrapper[4771]: I1011 10:27:52.054060 4771 scope.go:117] "RemoveContainer" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.054630 master-1 kubenswrapper[4771]: I1011 10:27:52.054574 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} err="failed to get container status \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": rpc error: code = NotFound desc = could not find container \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": container with ID starting with 2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88 not found: ID does not exist"
Oct 11 10:27:52.054630 master-1 kubenswrapper[4771]: I1011 10:27:52.054615 4771 scope.go:117] "RemoveContainer" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.055070 master-1 kubenswrapper[4771]: I1011 10:27:52.055015 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} err="failed to get container status \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": rpc error: code = NotFound desc = could not find container \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": container with ID starting with 92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce not found: ID does not exist"
Oct 11 10:27:52.055070 master-1 kubenswrapper[4771]: I1011 10:27:52.055054 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.055474 master-1 kubenswrapper[4771]: I1011 10:27:52.055417 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} err="failed to get container status \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist"
Oct 11 10:27:52.055474 master-1 kubenswrapper[4771]: I1011 10:27:52.055458 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.055948 master-1 kubenswrapper[4771]: I1011 10:27:52.055895 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} err="failed to get container status \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist"
Oct 11 10:27:52.055948 master-1 kubenswrapper[4771]: I1011 10:27:52.055935 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.056379 master-1 kubenswrapper[4771]: I1011 10:27:52.056306 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} err="failed to get container status \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist"
Oct 11 10:27:52.056379 master-1 kubenswrapper[4771]: I1011 10:27:52.056345 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.056836 master-1 kubenswrapper[4771]: I1011 10:27:52.056785 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} err="failed to get container status \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist"
Oct 11 10:27:52.056836 master-1 kubenswrapper[4771]: I1011 10:27:52.056821 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:52.057267 master-1 kubenswrapper[4771]: I1011 10:27:52.057211 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} err="failed to get container status \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist"
Oct 11 10:27:52.057267 master-1 kubenswrapper[4771]: I1011 10:27:52.057254 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:52.057689 master-1 kubenswrapper[4771]: I1011 10:27:52.057639 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} err="failed to get container status \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist"
Oct 11 10:27:52.057689 master-1 kubenswrapper[4771]: I1011 10:27:52.057674 4771 scope.go:117] "RemoveContainer" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.058109 master-1 kubenswrapper[4771]: I1011 10:27:52.058059 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} err="failed to get container status \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": rpc error: code = NotFound desc = could not find container \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": container with ID starting with fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79 not found: ID does not exist"
Oct 11 10:27:52.058109 master-1 kubenswrapper[4771]: I1011 10:27:52.058094 4771 scope.go:117] "RemoveContainer" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.058503 master-2 kubenswrapper[4776]: I1011 10:27:52.058377 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:27:52.058503 master-2 kubenswrapper[4776]: I1011 10:27:52.058440 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:27:52.058513 master-1 kubenswrapper[4771]: I1011 10:27:52.058464 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} err="failed to get container status \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": rpc error: code = NotFound desc = could not find container \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": container with ID starting with 2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88 not found: ID does not exist"
Oct 11 10:27:52.058513 master-1 kubenswrapper[4771]: I1011 10:27:52.058500 4771 scope.go:117] "RemoveContainer" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.058956 master-1 kubenswrapper[4771]: I1011 10:27:52.058905 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} err="failed to get container status \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": rpc error: code = NotFound desc = could not find container \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": container with ID starting with 92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce not found: ID does not exist"
Oct 11 10:27:52.058956 master-1 kubenswrapper[4771]: I1011 10:27:52.058941 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.059160 master-2 kubenswrapper[4776]: E1011 10:27:52.058603 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda"
Oct 11 10:27:52.059160 master-2 kubenswrapper[4776]: E1011 10:27:52.059086 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e"
Oct 11 10:27:52.060241 master-1 kubenswrapper[4771]: I1011 10:27:52.060173 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} err="failed to get container status \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist"
Oct 11 10:27:52.060241 master-1 kubenswrapper[4771]: I1011 10:27:52.060233 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.060709 master-1 kubenswrapper[4771]: I1011 10:27:52.060660 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} err="failed to get container status \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist"
Oct 11 10:27:52.060709 master-1 kubenswrapper[4771]: I1011 10:27:52.060696 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.061226 master-1 kubenswrapper[4771]: I1011 10:27:52.061149 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} err="failed to get container status \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist"
Oct 11 10:27:52.061226 master-1 kubenswrapper[4771]: I1011 10:27:52.061213 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.061687 master-1 kubenswrapper[4771]: I1011 10:27:52.061621 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} err="failed to get container status \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist"
Oct 11 10:27:52.061687 master-1 kubenswrapper[4771]: I1011 10:27:52.061656 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.063181 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} err="failed to get container status \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.063273 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.063892 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} err="failed to get container status \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.063960 4771 scope.go:117] "RemoveContainer" containerID="fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.064498 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79"} err="failed to get container status \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": rpc error: code = NotFound desc = could not find container \"fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79\": container with ID starting with fd77afbe7b6353ee8dcc04d725d33a007170c03c64f8511748463b5783682a79 not found: ID does not exist"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.064561 4771 scope.go:117] "RemoveContainer" containerID="2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.065012 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88"} err="failed to get container status \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": rpc error: code = NotFound desc = could not find container \"2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88\": container with ID starting with 2708e75524558fa1175bcbd5968377f34b9b9be54108180f4fce32d42c460d88 not found: ID does not exist"
Oct 11 10:27:52.065453 master-1 kubenswrapper[4771]: I1011 10:27:52.065103 4771 scope.go:117] "RemoveContainer" containerID="92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"
Oct 11 10:27:52.065946 master-1 kubenswrapper[4771]: I1011 10:27:52.065582 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce"} err="failed to get container status \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": rpc error: code = NotFound desc = could not find container \"92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce\": container with ID starting with 92aa93b94b03322d20f3f2cf32f51c2dcd28d35b3f0970cbe4a502bda9e364ce not found: ID does not exist"
Oct 11 10:27:52.065946 master-1 kubenswrapper[4771]: I1011 10:27:52.065632 4771 scope.go:117] "RemoveContainer" containerID="2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"
Oct 11 10:27:52.067342 master-1 kubenswrapper[4771]: I1011 10:27:52.066639 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc"} err="failed to get container status \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": rpc error: code = NotFound desc = could not find container \"2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc\": container with ID starting with 2f11f85953c7cb2988ced62308f447053945264a4903a23951393dafad255fbc not found: ID does not exist"
Oct 11 10:27:52.067342 master-1 kubenswrapper[4771]: I1011 10:27:52.067335 4771 scope.go:117] "RemoveContainer" containerID="0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"
Oct 11 10:27:52.067859 master-1 kubenswrapper[4771]: I1011 10:27:52.067801 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1"} err="failed to get container status \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": rpc error: code = NotFound desc = could not find container \"0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1\": container with ID starting with 0f3a16a6b5a76cbd7451a015fb2dcaa5057fa6c4c5460d1d4b0c3ee2a73197e1 not found: ID does not exist"
Oct 11 10:27:52.067859 master-1 kubenswrapper[4771]: I1011 10:27:52.067845 4771 scope.go:117] "RemoveContainer" containerID="676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"
Oct 11 10:27:52.068253 master-1 kubenswrapper[4771]: I1011 10:27:52.068200 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853"} err="failed to get container status \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": rpc error: code = NotFound desc = could not find container \"676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853\": container with ID starting with 676de21f9311cc96340bf12820a36540fd68a571277c4eee1e15c5020bb27853 not found: ID does not exist"
Oct 11 10:27:52.068253 master-1 kubenswrapper[4771]: I1011 10:27:52.068241 4771 scope.go:117] "RemoveContainer" containerID="7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"
Oct 11 10:27:52.068761 master-1 kubenswrapper[4771]: I1011 10:27:52.068697 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3"} err="failed to get container status
\"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": rpc error: code = NotFound desc = could not find container \"7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3\": container with ID starting with 7c23f33a39621d158f13fbb129750170af4f94ff875db3195fd65a2ea659e5e3 not found: ID does not exist" Oct 11 10:27:52.068761 master-1 kubenswrapper[4771]: I1011 10:27:52.068750 4771 scope.go:117] "RemoveContainer" containerID="ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe" Oct 11 10:27:52.069429 master-1 kubenswrapper[4771]: I1011 10:27:52.069333 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe"} err="failed to get container status \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": rpc error: code = NotFound desc = could not find container \"ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe\": container with ID starting with ab7031670dcd4ac8746cd2646283684fcdf9a2065169a55d1845bc4fd97cabfe not found: ID does not exist" Oct 11 10:27:52.069429 master-1 kubenswrapper[4771]: I1011 10:27:52.069418 4771 scope.go:117] "RemoveContainer" containerID="cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0" Oct 11 10:27:52.069974 master-1 kubenswrapper[4771]: I1011 10:27:52.069918 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0"} err="failed to get container status \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": rpc error: code = NotFound desc = could not find container \"cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0\": container with ID starting with cb657d00f3c40cd2473268577084b894fa35c8640de80e8a24d2cc346b3481f0 not found: ID does not exist" Oct 11 10:27:52.369984 master-2 kubenswrapper[4776]: I1011 10:27:52.369866 4776 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:52.370653 master-2 kubenswrapper[4776]: I1011 10:27:52.370613 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log" Oct 11 10:27:52.371333 master-2 kubenswrapper[4776]: I1011 10:27:52.371296 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log" Oct 11 10:27:52.372081 master-2 kubenswrapper[4776]: I1011 10:27:52.372042 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log" Oct 11 10:27:52.372664 master-2 kubenswrapper[4776]: I1011 10:27:52.372627 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" exitCode=0 Oct 11 10:27:52.372711 master-2 kubenswrapper[4776]: I1011 10:27:52.372666 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" exitCode=0 Oct 11 10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372709 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" exitCode=0 Oct 11 10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372724 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" exitCode=143 Oct 11 
10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372727 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} Oct 11 10:27:52.372823 master-2 kubenswrapper[4776]: I1011 10:27:52.372788 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} Oct 11 10:27:52.372823 master-2 kubenswrapper[4776]: I1011 10:27:52.372810 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 10:27:52.372828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 10:27:52.372847 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 10:27:52.372737 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372881 4776 
generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372896 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372929 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} Oct 11 10:27:52.373013 master-2 kubenswrapper[4776]: I1011 10:27:52.372970 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} Oct 11 10:27:52.436484 master-1 kubenswrapper[4771]: I1011 10:27:52.436255 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:52.436806 master-1 kubenswrapper[4771]: I1011 10:27:52.436451 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:52.437899 master-1 kubenswrapper[4771]: E1011 10:27:52.437815 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:52.438019 master-1 kubenswrapper[4771]: E1011 10:27:52.437941 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:52.444163 master-1 kubenswrapper[4771]: I1011 10:27:52.444104 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c2d0f1-e436-480c-9e34-9068178f9df4" path="/var/lib/kubelet/pods/96c2d0f1-e436-480c-9e34-9068178f9df4/volumes" Oct 11 10:27:52.629982 master-2 kubenswrapper[4776]: I1011 10:27:52.629863 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovnkube-controller/0.log" Oct 11 10:27:52.631973 master-2 kubenswrapper[4776]: I1011 10:27:52.631897 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:52.632514 master-2 kubenswrapper[4776]: I1011 10:27:52.632478 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log" Oct 11 10:27:52.633339 master-2 kubenswrapper[4776]: I1011 10:27:52.633255 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log" Oct 11 10:27:52.633790 master-2 kubenswrapper[4776]: I1011 10:27:52.633731 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log" Oct 11 10:27:52.634274 master-2 kubenswrapper[4776]: I1011 10:27:52.634253 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:52.741598 master-2 kubenswrapper[4776]: I1011 10:27:52.741514 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x5wg8"] Oct 11 10:27:52.741598 master-2 kubenswrapper[4776]: E1011 10:27:52.741615 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741629 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741639 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741648 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741657 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741666 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741703 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 
master-2 kubenswrapper[4776]: I1011 10:27:52.741712 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741721 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741729 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741737 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741745 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741753 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741762 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741771 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kubecfg-setup" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741781 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kubecfg-setup" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741790 4776 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741798 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741834 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741843 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741874 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741882 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741890 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741899 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741908 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 
10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742442 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742662 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742871 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742893 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742972 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743093 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743224 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743266 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743464 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743590 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743641 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743698 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 
kubenswrapper[4776]: I1011 10:27:52.743731 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743758 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743785 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743868 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743868 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743903 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743940 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743934 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket" (OuterVolumeSpecName: "log-socket") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743969 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.744008 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744026 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash" (OuterVolumeSpecName: "host-slash") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744098 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744045 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") "
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744085 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744153 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744138 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") "
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744110 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") "
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744166 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744213 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log" (OuterVolumeSpecName: "node-log") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744200 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") "
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744378 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744395 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744390 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744401 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744422 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744454 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744524 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744542 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744565 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744732 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744812 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745004 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745100 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745196 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745223 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745247 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745316 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745779 4776 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745794 4776 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745839 4776 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745882 4776 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745944 4776 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745978 4776 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746000 4776 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746017 4776 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746034 4776 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746050 4776 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746067 4776 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746085 4776 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746103 4776 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746119 4776 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746137 4776 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746153 4776 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746169 4776 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.747721 master-2 kubenswrapper[4776]: I1011 10:27:52.747644 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:27:52.748292 master-2 kubenswrapper[4776]: I1011 10:27:52.748221 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk" (OuterVolumeSpecName: "kube-api-access-dxlsk") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "kube-api-access-dxlsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:27:52.748982 master-2 kubenswrapper[4776]: I1011 10:27:52.748898 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:27:52.847397 master-2 kubenswrapper[4776]: I1011 10:27:52.847320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847391 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847440 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847541 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847573 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847590 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847634 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847667 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847728 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847783 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847820 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847836 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847856 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848045 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848100 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848160 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848205 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848345 master-2 kubenswrapper[4776]: I1011 10:27:52.848238 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848378 master-2 kubenswrapper[4776]: I1011 10:27:52.848341 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848378 master-2 kubenswrapper[4776]: I1011 10:27:52.848373 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848389 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848403 4776 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") on node \"master-2\" DevicePath \"\""
Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848409 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848511 master-2 kubenswrapper[4776]: I1011 10:27:52.848441 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848511 master-2 kubenswrapper[4776]: I1011 10:27:52.848483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848601 master-2 kubenswrapper[4776]: I1011 10:27:52.848569 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.848907 master-2 kubenswrapper[4776]: I1011 10:27:52.848875 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.850946 master-2 kubenswrapper[4776]: I1011 10:27:52.850912 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:52.856810 master-1 kubenswrapper[4771]: I1011 10:27:52.856711 4771 generic.go:334] "Generic (PLEG): container finished" podID="a199ebda-03a4-4154-902b-28397e4bc616" containerID="e286ed9631e3bc792d7041c1bf8fc3c79727dde398696236177ad7a5a407c619" exitCode=0
Oct 11 10:27:52.856810 master-1 kubenswrapper[4771]: I1011 10:27:52.856760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerDied","Data":"e286ed9631e3bc792d7041c1bf8fc3c79727dde398696236177ad7a5a407c619"}
Oct 11 10:27:52.866116 master-2 kubenswrapper[4776]: I1011 10:27:52.866066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:53.056668 master-2 kubenswrapper[4776]: I1011 10:27:53.056406 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8"
Oct 11 10:27:53.379373 master-2 kubenswrapper[4776]: I1011 10:27:53.379087 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovnkube-controller/0.log"
Oct 11 10:27:53.381653 master-2 kubenswrapper[4776]: I1011 10:27:53.381558 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log"
Oct 11 10:27:53.382731 master-2 kubenswrapper[4776]: I1011 10:27:53.382658 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log"
Oct 11 10:27:53.383280 master-2 kubenswrapper[4776]: I1011 10:27:53.383232 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log"
Oct 11 10:27:53.383979 master-2 kubenswrapper[4776]: I1011 10:27:53.383931 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log"
Oct 11 10:27:53.384492 master-2 kubenswrapper[4776]: I1011 10:27:53.384443 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" exitCode=1
Oct 11 10:27:53.384569 master-2 kubenswrapper[4776]: I1011 10:27:53.384508 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82"
event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} Oct 11 10:27:53.384708 master-2 kubenswrapper[4776]: I1011 10:27:53.384639 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d"} Oct 11 10:27:53.384788 master-2 kubenswrapper[4776]: I1011 10:27:53.384712 4776 scope.go:117] "RemoveContainer" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.384842 master-2 kubenswrapper[4776]: I1011 10:27:53.384651 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:53.386777 master-2 kubenswrapper[4776]: I1011 10:27:53.386741 4776 generic.go:334] "Generic (PLEG): container finished" podID="b7bd3364-8f2a-492d-917f-acbbe3267954" containerID="b3aed0e6bbc92472d45e0f8800eaeb8e8e1992c8df1659a9f1421e62f43ff048" exitCode=0 Oct 11 10:27:53.386856 master-2 kubenswrapper[4776]: I1011 10:27:53.386786 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerDied","Data":"b3aed0e6bbc92472d45e0f8800eaeb8e8e1992c8df1659a9f1421e62f43ff048"} Oct 11 10:27:53.386903 master-2 kubenswrapper[4776]: I1011 10:27:53.386855 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"3c84d7f947367b1e6704e4423e2f88f2d94595023c6e897e5666149c687ce07b"} Oct 11 10:27:53.400846 master-2 kubenswrapper[4776]: I1011 10:27:53.400376 4776 scope.go:117] "RemoveContainer" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 
11 10:27:53.418621 master-2 kubenswrapper[4776]: I1011 10:27:53.418152 4776 scope.go:117] "RemoveContainer" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.433622 master-2 kubenswrapper[4776]: I1011 10:27:53.433557 4776 scope.go:117] "RemoveContainer" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.444832 master-2 kubenswrapper[4776]: I1011 10:27:53.444771 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:53.450344 master-2 kubenswrapper[4776]: I1011 10:27:53.450273 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:53.452911 master-2 kubenswrapper[4776]: I1011 10:27:53.452595 4776 scope.go:117] "RemoveContainer" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.468462 master-2 kubenswrapper[4776]: I1011 10:27:53.468223 4776 scope.go:117] "RemoveContainer" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.480535 master-2 kubenswrapper[4776]: I1011 10:27:53.480413 4776 scope.go:117] "RemoveContainer" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.502951 master-2 kubenswrapper[4776]: I1011 10:27:53.502798 4776 scope.go:117] "RemoveContainer" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.519438 master-2 kubenswrapper[4776]: I1011 10:27:53.519354 4776 scope.go:117] "RemoveContainer" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529560 4776 scope.go:117] "RemoveContainer" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: E1011 10:27:53.529843 4776 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": container with ID starting with fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700 not found: ID does not exist" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529871 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} err="failed to get container status \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": rpc error: code = NotFound desc = could not find container \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": container with ID starting with fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700 not found: ID does not exist" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529915 4776 scope.go:117] "RemoveContainer" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: E1011 10:27:53.530188 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": container with ID starting with 082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df not found: ID does not exist" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.530209 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} err="failed to get container status \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": rpc error: 
code = NotFound desc = could not find container \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": container with ID starting with 082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df not found: ID does not exist" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.530224 4776 scope.go:117] "RemoveContainer" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: E1011 10:27:53.530391 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": container with ID starting with 8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7 not found: ID does not exist" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: I1011 10:27:53.530409 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} err="failed to get container status \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": rpc error: code = NotFound desc = could not find container \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": container with ID starting with 8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7 not found: ID does not exist" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: I1011 10:27:53.530421 4776 scope.go:117] "RemoveContainer" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: E1011 10:27:53.530748 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": container with ID 
starting with 2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a not found: ID does not exist" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: I1011 10:27:53.530769 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} err="failed to get container status \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": rpc error: code = NotFound desc = could not find container \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": container with ID starting with 2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a not found: ID does not exist" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: I1011 10:27:53.530783 4776 scope.go:117] "RemoveContainer" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: E1011 10:27:53.530986 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": container with ID starting with 091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88 not found: ID does not exist" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: I1011 10:27:53.531007 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} err="failed to get container status \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": rpc error: code = NotFound desc = could not find container \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": container with ID starting with 
091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88 not found: ID does not exist" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: I1011 10:27:53.531022 4776 scope.go:117] "RemoveContainer" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: E1011 10:27:53.531537 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": container with ID starting with f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f not found: ID does not exist" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: I1011 10:27:53.531561 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} err="failed to get container status \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": rpc error: code = NotFound desc = could not find container \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": container with ID starting with f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f not found: ID does not exist" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: I1011 10:27:53.531576 4776 scope.go:117] "RemoveContainer" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: E1011 10:27:53.532406 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": container with ID starting with 5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552 not found: ID does not exist" 
containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: I1011 10:27:53.532512 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} err="failed to get container status \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": rpc error: code = NotFound desc = could not find container \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": container with ID starting with 5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552 not found: ID does not exist" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: I1011 10:27:53.532589 4776 scope.go:117] "RemoveContainer" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.533283 master-2 kubenswrapper[4776]: E1011 10:27:53.533239 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": container with ID starting with bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81 not found: ID does not exist" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.533322 master-2 kubenswrapper[4776]: I1011 10:27:53.533292 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} err="failed to get container status \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": rpc error: code = NotFound desc = could not find container \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": container with ID starting with bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81 not found: ID does not exist" Oct 11 10:27:53.533352 master-2 
kubenswrapper[4776]: I1011 10:27:53.533330 4776 scope.go:117] "RemoveContainer" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.533638 master-2 kubenswrapper[4776]: E1011 10:27:53.533612 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": container with ID starting with 9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf not found: ID does not exist" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.533638 master-2 kubenswrapper[4776]: I1011 10:27:53.533632 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf"} err="failed to get container status \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": rpc error: code = NotFound desc = could not find container \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": container with ID starting with 9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf not found: ID does not exist" Oct 11 10:27:53.868565 master-1 kubenswrapper[4771]: I1011 10:27:53.868479 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"811acc170d18d1cc1fb2e96244bd657affd5063d11b11fed0eb46a3a7c5a648f"} Oct 11 10:27:53.868565 master-1 kubenswrapper[4771]: I1011 10:27:53.868567 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"3c3b5bb1f5483e2ded64735aa85858cf0858f4f695f8c037443c207c64d0f520"} Oct 11 10:27:53.870335 master-1 kubenswrapper[4771]: I1011 10:27:53.868591 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"c1d8c24dcdadcad5e7a13a9718b6daeeaaa8375978b55bcf3c855d56d5b71f16"} Oct 11 10:27:53.870335 master-1 kubenswrapper[4771]: I1011 10:27:53.868610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"8627a1f9bab17ac392bffb2dceb0bec6de4b75d5ef5d603424b688f3ef7d9b5f"} Oct 11 10:27:53.870335 master-1 kubenswrapper[4771]: I1011 10:27:53.868628 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"dba69f90e730cf13a8b86f0ec5f8fb7feed6a3bdf7e8bd0002e37b08b0d0cab9"} Oct 11 10:27:53.870335 master-1 kubenswrapper[4771]: I1011 10:27:53.868646 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"99315b03b176b9548ff6f442300ed5f7301dbc1d41371db939926c6b88f3bbcb"} Oct 11 10:27:54.058415 master-2 kubenswrapper[4776]: I1011 10:27:54.058336 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:54.058595 master-2 kubenswrapper[4776]: I1011 10:27:54.058336 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:54.058664 master-2 kubenswrapper[4776]: E1011 10:27:54.058605 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:54.058664 master-2 kubenswrapper[4776]: E1011 10:27:54.058466 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:54.064391 master-2 kubenswrapper[4776]: I1011 10:27:54.064349 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" path="/var/lib/kubelet/pods/c908109b-a45d-464d-9ea0-f0823d2cc341/volumes" Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394147 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"cc1598a4280245cab1f7a4fbea20199177a785ee92e9d62194ceca67349d3714"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394196 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"d442efc1b44b6f95f4b75faeec2f7d5b3deac6b7b138cbc3871630d947eabc45"} Oct 11 10:27:54.394222 master-2 
kubenswrapper[4776]: I1011 10:27:54.394209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"728ce00595d9265f53bf5fbf1d588ecd2ed424cf93b146811d6c3f08d82584b6"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394222 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"886ab5820c28e6480d00698580e79e4781c20f7b130fa459da47233902f43417"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394232 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"4d72efb914bdc3ea62ac41cf6038365dd833039cf28930aafc4d0e0130f12055"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394244 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"776a37488dc34a6237bc811855d780600eb2615467f8e88048305ef984cd3514"} Oct 11 10:27:54.436581 master-1 kubenswrapper[4771]: I1011 10:27:54.436518 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:54.436850 master-1 kubenswrapper[4771]: E1011 10:27:54.436673 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:54.436976 master-1 kubenswrapper[4771]: I1011 10:27:54.436534 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:54.437283 master-1 kubenswrapper[4771]: E1011 10:27:54.437246 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:56.058107 master-2 kubenswrapper[4776]: I1011 10:27:56.058060 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: I1011 10:27:56.058107 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: E1011 10:27:56.058236 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: E1011 10:27:56.058333 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:56.408492 master-2 kubenswrapper[4776]: I1011 10:27:56.408423 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"29d5b2f57601ccd97e0b67297507c979bbda2eb904fb57963f2ba752d9aac90a"} Oct 11 10:27:56.436332 master-1 kubenswrapper[4771]: I1011 10:27:56.436165 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:56.437255 master-1 kubenswrapper[4771]: I1011 10:27:56.436335 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:56.437255 master-1 kubenswrapper[4771]: E1011 10:27:56.436471 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:56.437255 master-1 kubenswrapper[4771]: E1011 10:27:56.436591 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:56.883617 master-1 kubenswrapper[4771]: I1011 10:27:56.883545 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"a31f2faa9277289c626596c01158d31130d2fd07a0622a3dc29355c1c98bcbc4"} Oct 11 10:27:58.057896 master-2 kubenswrapper[4776]: I1011 10:27:58.057781 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: I1011 10:27:58.057786 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: E1011 10:27:58.058037 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: E1011 10:27:58.058124 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:58.416995 master-2 kubenswrapper[4776]: I1011 10:27:58.416716 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"179a33dd9b2c47cd10b8c7507158e6874a3b4b5607b9ce19ef0de9c49a47da08"} Oct 11 10:27:58.417119 master-2 kubenswrapper[4776]: I1011 10:27:58.417067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:58.417119 master-2 kubenswrapper[4776]: I1011 10:27:58.417082 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:58.437167 master-1 kubenswrapper[4771]: I1011 10:27:58.436692 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:27:58.437944 master-1 kubenswrapper[4771]: E1011 10:27:58.437169 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:27:58.437944 master-1 kubenswrapper[4771]: I1011 10:27:58.437432 4771 scope.go:117] "RemoveContainer" containerID="d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829" Oct 11 10:27:58.437944 master-1 kubenswrapper[4771]: I1011 10:27:58.437441 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:27:58.437944 master-1 kubenswrapper[4771]: E1011 10:27:58.437703 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:27:58.437944 master-1 kubenswrapper[4771]: E1011 10:27:58.437711 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:27:58.440013 master-2 kubenswrapper[4776]: I1011 10:27:58.439901 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" podStartSLOduration=6.439877899 podStartE2EDuration="6.439877899s" podCreationTimestamp="2025-10-11 10:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:27:58.439222411 +0000 UTC m=+113.223649120" watchObservedRunningTime="2025-10-11 10:27:58.439877899 +0000 UTC m=+113.224304628" Oct 11 10:27:58.894857 master-1 kubenswrapper[4771]: I1011 10:27:58.894799 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" event={"ID":"a199ebda-03a4-4154-902b-28397e4bc616","Type":"ContainerStarted","Data":"ca302a8439820b05b4af2e1ab36feb2534be24776f9c7a40729fdb60938aaa70"} Oct 11 10:27:58.895286 master-1 kubenswrapper[4771]: I1011 10:27:58.895212 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:58.895406 master-1 kubenswrapper[4771]: I1011 10:27:58.895308 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:27:59.419408 master-2 kubenswrapper[4776]: I1011 10:27:59.419359 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:59.615147 master-2 kubenswrapper[4776]: I1011 10:27:59.614912 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:27:59.615147 master-2 kubenswrapper[4776]: I1011 10:27:59.615065 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:59.615451 master-2 kubenswrapper[4776]: E1011 10:27:59.615190 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:59.618356 master-2 kubenswrapper[4776]: I1011 10:27:59.618284 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"] Oct 11 10:27:59.618531 master-2 kubenswrapper[4776]: I1011 10:27:59.618394 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:59.618531 master-2 kubenswrapper[4776]: E1011 10:27:59.618485 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:59.899025 master-1 kubenswrapper[4771]: I1011 10:27:59.898915 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" Oct 11 10:28:00.436822 master-1 kubenswrapper[4771]: I1011 10:28:00.436753 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:28:00.436822 master-1 kubenswrapper[4771]: I1011 10:28:00.436786 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:00.437055 master-1 kubenswrapper[4771]: E1011 10:28:00.436944 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:28:00.437096 master-1 kubenswrapper[4771]: E1011 10:28:00.437041 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:28:00.513463 master-2 kubenswrapper[4776]: I1011 10:28:00.513284 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513479 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513519 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:28:00.514365 
master-2 kubenswrapper[4776]: E1011 10:28:00.513540 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513604 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.513581802 +0000 UTC m=+147.298008551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:00.576586 master-1 kubenswrapper[4771]: I1011 10:28:00.576448 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:00.576771 master-1 kubenswrapper[4771]: E1011 10:28:00.576697 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:28:00.576771 master-1 kubenswrapper[4771]: E1011 10:28:00.576736 4771 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:28:00.576771 master-1 kubenswrapper[4771]: E1011 10:28:00.576757 4771 projected.go:194] Error preparing data for projected volume kube-api-access-hktrh for pod openshift-network-diagnostics/network-check-target-4pm7x: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:00.576886 master-1 kubenswrapper[4771]: E1011 10:28:00.576839 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh podName:0bde275d-f0a5-4bea-93f7-edd2077e46b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.576812511 +0000 UTC m=+144.551038982 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hktrh" (UniqueName: "kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh") pod "network-check-target-4pm7x" (UID: "0bde275d-f0a5-4bea-93f7-edd2077e46b4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:01.058234 master-2 kubenswrapper[4776]: I1011 10:28:01.058116 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:01.058518 master-2 kubenswrapper[4776]: E1011 10:28:01.058302 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:28:01.058518 master-2 kubenswrapper[4776]: I1011 10:28:01.058413 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:01.058919 master-2 kubenswrapper[4776]: E1011 10:28:01.058850 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:28:02.436901 master-1 kubenswrapper[4771]: I1011 10:28:02.436819 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:28:02.437957 master-1 kubenswrapper[4771]: I1011 10:28:02.436914 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:02.437957 master-1 kubenswrapper[4771]: E1011 10:28:02.437920 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:28:02.438502 master-1 kubenswrapper[4771]: E1011 10:28:02.438421 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:28:03.057765 master-2 kubenswrapper[4776]: I1011 10:28:03.057667 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: E1011 10:28:03.057848 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: I1011 10:28:03.057698 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: E1011 10:28:03.057992 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:28:03.092342 master-2 kubenswrapper[4776]: I1011 10:28:03.092264 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:03.111702 master-2 kubenswrapper[4776]: I1011 10:28:03.111625 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:04.436763 master-1 kubenswrapper[4771]: I1011 10:28:04.436270 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:28:04.437543 master-1 kubenswrapper[4771]: I1011 10:28:04.436532 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:04.437543 master-1 kubenswrapper[4771]: E1011 10:28:04.436933 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fgjvw" podUID="2c084572-a5c9-4787-8a14-b7d6b0810a1b" Oct 11 10:28:04.437543 master-1 kubenswrapper[4771]: E1011 10:28:04.437090 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4pm7x" podUID="0bde275d-f0a5-4bea-93f7-edd2077e46b4" Oct 11 10:28:04.490606 master-2 kubenswrapper[4776]: I1011 10:28:04.490189 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeReady" Oct 11 10:28:04.490606 master-2 kubenswrapper[4776]: I1011 10:28:04.490585 4776 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Oct 11 10:28:04.536736 master-2 kubenswrapper[4776]: I1011 10:28:04.536343 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"] Oct 11 10:28:04.537124 master-2 kubenswrapper[4776]: I1011 10:28:04.537046 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"] Oct 11 10:28:04.537305 master-2 kubenswrapper[4776]: I1011 10:28:04.537061 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.537537 master-2 kubenswrapper[4776]: I1011 10:28:04.537488 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542514 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542452 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543188 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543598 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543668 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 11 10:28:04.544492 master-2 kubenswrapper[4776]: I1011 10:28:04.544426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Oct 11 10:28:04.547792 master-2 kubenswrapper[4776]: I1011 10:28:04.547745 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Oct 11 10:28:04.548311 master-2 kubenswrapper[4776]: I1011 10:28:04.548268 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"] Oct 11 10:28:04.548907 master-2 kubenswrapper[4776]: I1011 10:28:04.548833 4776 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.549031 master-2 kubenswrapper[4776]: I1011 10:28:04.548983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"] Oct 11 10:28:04.550459 master-2 kubenswrapper[4776]: I1011 10:28:04.549423 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.550459 master-2 kubenswrapper[4776]: I1011 10:28:04.549934 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"] Oct 11 10:28:04.550736 master-2 kubenswrapper[4776]: I1011 10:28:04.550656 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.551526 master-2 kubenswrapper[4776]: I1011 10:28:04.551181 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"] Oct 11 10:28:04.551526 master-2 kubenswrapper[4776]: I1011 10:28:04.551521 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.552157 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.552214 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.554955 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555128 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555350 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555466 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.558500 master-2 kubenswrapper[4776]: I1011 10:28:04.555753 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.558500 master-2 kubenswrapper[4776]: I1011 10:28:04.558332 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"] Oct 11 10:28:04.558669 master-2 kubenswrapper[4776]: I1011 10:28:04.558581 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.559983 master-2 kubenswrapper[4776]: I1011 10:28:04.559931 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 11 10:28:04.560124 master-2 kubenswrapper[4776]: I1011 10:28:04.560065 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 11 10:28:04.560347 master-2 kubenswrapper[4776]: I1011 10:28:04.560301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Oct 11 10:28:04.560465 master-2 kubenswrapper[4776]: I1011 10:28:04.560395 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"] Oct 11 10:28:04.560554 master-2 kubenswrapper[4776]: I1011 10:28:04.560532 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"] Oct 11 10:28:04.560776 master-2 kubenswrapper[4776]: I1011 10:28:04.560736 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.561000 master-2 kubenswrapper[4776]: I1011 10:28:04.560957 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.561423 master-2 kubenswrapper[4776]: I1011 10:28:04.561343 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"] Oct 11 10:28:04.561978 master-2 kubenswrapper[4776]: I1011 10:28:04.561915 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.563004 master-2 kubenswrapper[4776]: I1011 10:28:04.562935 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"] Oct 11 10:28:04.563303 master-2 kubenswrapper[4776]: I1011 10:28:04.563236 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Oct 11 10:28:04.563429 master-2 kubenswrapper[4776]: I1011 10:28:04.563380 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Oct 11 10:28:04.563548 master-2 kubenswrapper[4776]: I1011 10:28:04.563503 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.563632 master-2 kubenswrapper[4776]: I1011 10:28:04.563400 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"] Oct 11 10:28:04.564421 master-2 kubenswrapper[4776]: I1011 10:28:04.564368 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"] Oct 11 10:28:04.564732 master-2 kubenswrapper[4776]: I1011 10:28:04.564672 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"] Oct 11 10:28:04.565053 master-2 kubenswrapper[4776]: I1011 10:28:04.564998 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 11 10:28:04.565186 master-2 kubenswrapper[4776]: I1011 10:28:04.565111 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"] Oct 11 10:28:04.565643 master-2 kubenswrapper[4776]: I1011 10:28:04.565591 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 11 10:28:04.565938 master-2 kubenswrapper[4776]: I1011 10:28:04.565882 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"] Oct 11 10:28:04.566187 master-2 kubenswrapper[4776]: I1011 10:28:04.566139 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Oct 11 10:28:04.566355 master-2 kubenswrapper[4776]: I1011 10:28:04.566311 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"] Oct 11 10:28:04.566544 master-2 kubenswrapper[4776]: I1011 10:28:04.566490 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 11 10:28:04.566868 master-2 kubenswrapper[4776]: I1011 10:28:04.566811 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"] Oct 11 10:28:04.567207 master-2 kubenswrapper[4776]: I1011 10:28:04.567159 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.607498 master-2 kubenswrapper[4776]: I1011 10:28:04.607410 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"] Oct 11 10:28:04.607931 master-2 kubenswrapper[4776]: I1011 10:28:04.607818 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607932 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607994 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607878 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.608326 master-2 kubenswrapper[4776]: I1011 10:28:04.607940 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.608439 master-2 kubenswrapper[4776]: I1011 10:28:04.608397 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.614669 master-2 kubenswrapper[4776]: I1011 10:28:04.612494 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.615574 master-2 kubenswrapper[4776]: I1011 10:28:04.615491 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"] Oct 11 10:28:04.615856 master-2 kubenswrapper[4776]: I1011 10:28:04.615719 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"
Oct 11 10:28:04.615856 master-2 kubenswrapper[4776]: I1011 10:28:04.615751 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"
Oct 11 10:28:04.616115 master-2 kubenswrapper[4776]: I1011 10:28:04.616033 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"]
Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.616239 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.617461 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"]
Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.617875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.620447 master-2 kubenswrapper[4776]: I1011 10:28:04.620370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Oct 11 10:28:04.621152 master-2 kubenswrapper[4776]: I1011 10:28:04.621095 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.621152 master-2 kubenswrapper[4776]: I1011 10:28:04.621120 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.621445 master-2 kubenswrapper[4776]: I1011 10:28:04.621396 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Oct 11 10:28:04.621557 master-2 kubenswrapper[4776]: I1011 10:28:04.621396 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"]
Oct 11 10:28:04.621772 master-2 kubenswrapper[4776]: I1011 10:28:04.621707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 11 10:28:04.622010 master-2 kubenswrapper[4776]: I1011 10:28:04.621962 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622133 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622394 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622424 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622449 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622455 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622502 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622523 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622541 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622620 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622986 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622715 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622754 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622823 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622819 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622871 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623121 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623600 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623342 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"]
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622901 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623883 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.624420 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.625374 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.627482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.628386 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"]
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.629087 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.629371 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"]
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630011 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630115 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630150 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630584 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630779 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-v6dfc"]
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630804 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.631568 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc"
Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.631582 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.631865 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"]
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632019 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632230 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632236 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632378 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632396 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"]
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632539 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.633005 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637056 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"]
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637231 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637483 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637554 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638017 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638213 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638408 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.638551 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.638983 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-wh775"
Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.639780 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"]
Oct 11 10:28:04.641102 master-2 kubenswrapper[4776]: I1011 10:28:04.641056 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Oct 11 10:28:04.641408 master-2 kubenswrapper[4776]: I1011 10:28:04.641279 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 11 10:28:04.641639 master-2 kubenswrapper[4776]: I1011 10:28:04.641550 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"]
Oct 11 10:28:04.641898 master-2 kubenswrapper[4776]: I1011 10:28:04.641854 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"]
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645200 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645223 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645241 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645266 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646370 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"]
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646507 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646534 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dptq\" (UniqueName: \"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646562 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646586 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646630 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646653 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646536 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646818 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646611 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646706 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.647331 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"]
Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.648590 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"]
Oct 11 10:28:04.650295 master-2 kubenswrapper[4776]: I1011 10:28:04.649500 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"]
Oct 11 10:28:04.651205 master-2 kubenswrapper[4776]: I1011 10:28:04.651161 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Oct 11 10:28:04.651405 master-2 kubenswrapper[4776]: I1011 10:28:04.651367 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 11 10:28:04.651661 master-2 kubenswrapper[4776]: I1011 10:28:04.651626 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Oct 11 10:28:04.652134 master-2 kubenswrapper[4776]: I1011 10:28:04.652096 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Oct 11 10:28:04.652340 master-2 kubenswrapper[4776]: I1011 10:28:04.652301 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Oct 11 10:28:04.652637 master-2 kubenswrapper[4776]: I1011 10:28:04.652595 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.652865 master-2 kubenswrapper[4776]: I1011 10:28:04.652798 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"]
Oct 11 10:28:04.652980 master-2 kubenswrapper[4776]: I1011 10:28:04.652891 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"]
Oct 11 10:28:04.653026 master-2 kubenswrapper[4776]: I1011 10:28:04.652982 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"]
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653207 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653312 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653444 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653539 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653778 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653780 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653791 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653860 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653961 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.654300 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.654459 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Oct 11 10:28:04.655578 master-2 kubenswrapper[4776]: I1011 10:28:04.654862 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"]
Oct 11 10:28:04.655927 master-2 kubenswrapper[4776]: I1011 10:28:04.655846 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"]
Oct 11 10:28:04.656340 master-2 kubenswrapper[4776]: I1011 10:28:04.656283 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Oct 11 10:28:04.656615 master-2 kubenswrapper[4776]: I1011 10:28:04.656569 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.656949 master-2 kubenswrapper[4776]: I1011 10:28:04.656733 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"]
Oct 11 10:28:04.657780 master-2 kubenswrapper[4776]: I1011 10:28:04.657729 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"]
Oct 11 10:28:04.658464 master-2 kubenswrapper[4776]: I1011 10:28:04.658412 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5mn8b"]
Oct 11 10:28:04.658573 master-2 kubenswrapper[4776]: I1011 10:28:04.658465 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Oct 11 10:28:04.658573 master-2 kubenswrapper[4776]: I1011 10:28:04.658496 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.658842 master-2 kubenswrapper[4776]: I1011 10:28:04.658828 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.659118 master-2 kubenswrapper[4776]: I1011 10:28:04.659069 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5mn8b"
Oct 11 10:28:04.659226 master-2 kubenswrapper[4776]: I1011 10:28:04.659176 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Oct 11 10:28:04.659392 master-2 kubenswrapper[4776]: I1011 10:28:04.659348 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"]
Oct 11 10:28:04.659745 master-2 kubenswrapper[4776]: I1011 10:28:04.659642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.659911 master-2 kubenswrapper[4776]: I1011 10:28:04.659877 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Oct 11 10:28:04.660754 master-2 kubenswrapper[4776]: I1011 10:28:04.660656 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Oct 11 10:28:04.660896 master-2 kubenswrapper[4776]: I1011 10:28:04.660796 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Oct 11 10:28:04.660896 master-2 kubenswrapper[4776]: I1011 10:28:04.660825 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"]
Oct 11 10:28:04.661372 master-2 kubenswrapper[4776]: I1011 10:28:04.661312 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"]
Oct 11 10:28:04.662565 master-2 kubenswrapper[4776]: I1011 10:28:04.662519 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 11 10:28:04.662765 master-2 kubenswrapper[4776]: I1011 10:28:04.662596 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 11 10:28:04.663558 master-2 kubenswrapper[4776]: I1011 10:28:04.663436 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"]
Oct 11 10:28:04.665592 master-2 kubenswrapper[4776]: I1011 10:28:04.665411 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"]
Oct 11 10:28:04.666352 master-2 kubenswrapper[4776]: I1011 10:28:04.666283 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Oct 11 10:28:04.667481 master-2 kubenswrapper[4776]: I1011 10:28:04.667409 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Oct 11 10:28:04.670397 master-2 kubenswrapper[4776]: I1011 10:28:04.670328 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"]
Oct 11 10:28:04.673141 master-2 kubenswrapper[4776]: I1011 10:28:04.673090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"]
Oct 11 10:28:04.673625 master-2 kubenswrapper[4776]: I1011 10:28:04.673577 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Oct 11 10:28:04.674303 master-2 kubenswrapper[4776]: I1011 10:28:04.674267 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"]
Oct 11 10:28:04.675163 master-2 kubenswrapper[4776]: I1011 10:28:04.675127 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"]
Oct 11 10:28:04.675989 master-2 kubenswrapper[4776]: I1011 10:28:04.675955 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"]
Oct 11 10:28:04.676917 master-2 kubenswrapper[4776]: I1011 10:28:04.676857 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"]
Oct 11 10:28:04.677777 master-2 kubenswrapper[4776]: I1011 10:28:04.677738 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"]
Oct 11 10:28:04.678933 master-2 kubenswrapper[4776]: I1011 10:28:04.678870 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"]
Oct 11 10:28:04.679314 master-2 kubenswrapper[4776]: I1011 10:28:04.679280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Oct 11 10:28:04.680280 master-2 kubenswrapper[4776]: I1011 10:28:04.680257 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"]
Oct 11 10:28:04.681904 master-2 kubenswrapper[4776]: I1011 10:28:04.681868 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"]
Oct 11 10:28:04.682878 master-2 kubenswrapper[4776]: I1011 10:28:04.682827 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"]
Oct 11 10:28:04.684014 master-2 kubenswrapper[4776]: I1011 10:28:04.683879 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"]
Oct 11 10:28:04.700337 master-2 kubenswrapper[4776]: I1011 10:28:04.700299 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Oct 11 10:28:04.719969 master-2 kubenswrapper[4776]: I1011 10:28:04.719932 4776 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Oct 11 10:28:04.740737 master-2 kubenswrapper[4776]: I1011 10:28:04.740659 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Oct 11 10:28:04.747414 master-2 kubenswrapper[4776]: I1011 10:28:04.747374 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"
Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747449 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747480 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747526 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc"
Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747549 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"
Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747619 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: E1011 10:28:04.747659 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747666 4776 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: E1011 10:28:04.747721 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.247701627 +0000 UTC m=+120.032128336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747745 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 
10:28:04.747802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747836 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747854 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747876 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747895 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747915 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747932 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747950 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod 
\"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.747993 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748033 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748051 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748072 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748093 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlz8s\" (UniqueName: \"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748166 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 
10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748193 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748215 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748245 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748267 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748293 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748314 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748392 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29krx\" (UniqueName: 
\"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748422 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748560 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod 
\"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748645 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748728 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748873 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748912 
4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748949 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.749012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.749307 master-2 kubenswrapper[4776]: I1011 10:28:04.749073 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749307 master-2 kubenswrapper[4776]: I1011 10:28:04.749108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.749612 master-2 kubenswrapper[4776]: I1011 10:28:04.749456 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.749685 master-2 kubenswrapper[4776]: I1011 10:28:04.749604 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749685 master-2 kubenswrapper[4776]: I1011 10:28:04.749643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.749780 master-2 kubenswrapper[4776]: I1011 10:28:04.749750 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: 
\"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.749827 master-2 kubenswrapper[4776]: I1011 10:28:04.749801 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.749861 master-2 kubenswrapper[4776]: I1011 10:28:04.749831 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.749892 master-2 kubenswrapper[4776]: I1011 10:28:04.749881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.749951 master-2 kubenswrapper[4776]: I1011 10:28:04.749928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.749975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750081 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750244 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750287 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750411 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750455 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750626 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750648 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750727 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod 
\"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750766 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750783 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750819 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 
10:28:04.750745 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751150 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751195 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f979h\" (UniqueName: 
\"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751237 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751276 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751297 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") 
pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751322 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751356 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751399 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751437 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751467 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751512 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751538 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751564 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751612 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dptq\" (UniqueName: \"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751665 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751710 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod 
\"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751726 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.751791 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.751831 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.251816956 +0000 UTC m=+120.036243665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751987 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.752006 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") 
pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.752019 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.752089 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.752110 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.252101954 +0000 UTC m=+120.036528663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:04.760183 master-2 kubenswrapper[4776]: I1011 10:28:04.760152 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Oct 11 10:28:04.773351 master-1 kubenswrapper[4771]: I1011 10:28:04.773234 4771 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeReady" Oct 11 10:28:04.773683 master-1 kubenswrapper[4771]: I1011 10:28:04.773503 4771 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Oct 11 10:28:04.780631 master-2 kubenswrapper[4776]: I1011 10:28:04.779967 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 11 10:28:04.800842 master-1 kubenswrapper[4771]: I1011 10:28:04.800684 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v" podStartSLOduration=13.800654259 podStartE2EDuration="13.800654259s" podCreationTimestamp="2025-10-11 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:27:58.930704738 +0000 UTC m=+110.904931239" watchObservedRunningTime="2025-10-11 10:28:04.800654259 +0000 UTC m=+116.774880780" Oct 11 10:28:04.801761 master-2 kubenswrapper[4776]: I1011 10:28:04.801733 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.801809 master-1 kubenswrapper[4771]: I1011 10:28:04.801760 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-t44c5"] Oct 11 10:28:04.802470 
master-1 kubenswrapper[4771]: I1011 10:28:04.802398 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.806032 master-1 kubenswrapper[4771]: I1011 10:28:04.805952 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:28:04.812616 master-1 kubenswrapper[4771]: I1011 10:28:04.812553 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5bt\" (UniqueName: \"kubernetes.io/projected/3346c1b6-593b-4224-802c-25e99e9893a8-kube-api-access-2w5bt\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.812783 master-1 kubenswrapper[4771]: I1011 10:28:04.812647 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3346c1b6-593b-4224-802c-25e99e9893a8-iptables-alerter-script\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.812783 master-1 kubenswrapper[4771]: I1011 10:28:04.812689 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3346c1b6-593b-4224-802c-25e99e9893a8-host-slash\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.819938 master-2 kubenswrapper[4776]: I1011 10:28:04.819895 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 11 10:28:04.839241 master-2 kubenswrapper[4776]: I1011 10:28:04.839207 4776 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 11 10:28:04.853745 master-2 kubenswrapper[4776]: I1011 10:28:04.853711 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853787 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 
10:28:04.853802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853819 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853834 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlz8s\" (UniqueName: \"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: E1011 10:28:04.853887 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853900 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853931 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.853946 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.353927336 +0000 UTC m=+120.138354045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853998 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854005 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854016 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: 
\"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854035 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854066 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354041219 +0000 UTC m=+120.138467928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854093 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854103 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not 
found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854165 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354141352 +0000 UTC m=+120.138568061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854196 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29krx\" (UniqueName: \"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854224 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854279 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854297 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354276777 +0000 UTC m=+120.138703486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854312 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354305958 +0000 UTC m=+120.138732667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854359 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854382 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: E1011 10:28:04.854397 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854422 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: E1011 10:28:04.854441 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354427541 +0000 UTC m=+120.138854250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854472 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854491 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854505 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" 
Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854547 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854609 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854658 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854706 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" 
(UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854807 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854822 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854822 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854788 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.855467 master-2 kubenswrapper[4776]: I1011 10:28:04.855439 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: 
\"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.855514 master-2 kubenswrapper[4776]: I1011 10:28:04.855493 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.855595 master-2 kubenswrapper[4776]: I1011 10:28:04.855575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.855635 master-2 kubenswrapper[4776]: I1011 10:28:04.855606 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.855718 master-2 kubenswrapper[4776]: I1011 10:28:04.855627 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.855765 master-2 
kubenswrapper[4776]: E1011 10:28:04.855753 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:04.855799 master-2 kubenswrapper[4776]: E1011 10:28:04.855784 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.355774299 +0000 UTC m=+120.140201008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:04.856495 master-2 kubenswrapper[4776]: I1011 10:28:04.856448 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.856546 master-2 kubenswrapper[4776]: I1011 10:28:04.856480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.856844 master-2 kubenswrapper[4776]: I1011 10:28:04.856818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.856900 master-2 kubenswrapper[4776]: I1011 10:28:04.856855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.856900 master-2 kubenswrapper[4776]: I1011 10:28:04.856894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: I1011 10:28:04.856920 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: I1011 10:28:04.856935 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " 
pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: E1011 10:28:04.856955 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:04.857075 master-2 kubenswrapper[4776]: I1011 10:28:04.856987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.857075 master-2 kubenswrapper[4776]: E1011 10:28:04.857053 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.357008926 +0000 UTC m=+120.141435695 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:04.857142 master-2 kubenswrapper[4776]: I1011 10:28:04.857120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.857228 master-2 kubenswrapper[4776]: I1011 10:28:04.857187 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.857228 master-2 kubenswrapper[4776]: I1011 10:28:04.857204 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.857373 master-2 kubenswrapper[4776]: I1011 10:28:04.857235 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.857373 master-2 
kubenswrapper[4776]: I1011 10:28:04.857319 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.857465 master-2 kubenswrapper[4776]: I1011 10:28:04.857423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.857579 master-2 kubenswrapper[4776]: I1011 10:28:04.857430 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.857639 master-2 kubenswrapper[4776]: I1011 10:28:04.857292 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.857786 master-2 kubenswrapper[4776]: I1011 10:28:04.857740 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: 
\"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.857933 master-2 kubenswrapper[4776]: E1011 10:28:04.857911 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:04.857981 master-2 kubenswrapper[4776]: E1011 10:28:04.857964 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.357950752 +0000 UTC m=+120.142377471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:04.858062 master-2 kubenswrapper[4776]: I1011 10:28:04.858031 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.858167 master-2 kubenswrapper[4776]: I1011 10:28:04.857852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.858343 master-2 kubenswrapper[4776]: I1011 10:28:04.858269 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.858386 master-2 kubenswrapper[4776]: I1011 10:28:04.858355 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"
Oct 11 10:28:04.858533 master-2 kubenswrapper[4776]: I1011 10:28:04.858501 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.858748 master-2 kubenswrapper[4776]: I1011 10:28:04.858722 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"
Oct 11 10:28:04.859101 master-2 kubenswrapper[4776]: I1011 10:28:04.858687 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.859213 master-2 kubenswrapper[4776]: I1011 10:28:04.859185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.859371 master-2 kubenswrapper[4776]: I1011 10:28:04.859320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"
Oct 11 10:28:04.859555 master-2 kubenswrapper[4776]: E1011 10:28:04.859522 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:04.859744 master-2 kubenswrapper[4776]: E1011 10:28:04.859712 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.359568968 +0000 UTC m=+120.143995677 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:04.859744 master-2 kubenswrapper[4776]: I1011 10:28:04.859520 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f979h\" (UniqueName: \"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:28:04.859839 master-2 kubenswrapper[4776]: I1011 10:28:04.859177 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"
Oct 11 10:28:04.859962 master-2 kubenswrapper[4776]: I1011 10:28:04.859830 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.860007 master-2 kubenswrapper[4776]: I1011 10:28:04.859978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.860188 master-2 kubenswrapper[4776]: I1011 10:28:04.860023 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"
Oct 11 10:28:04.860223 master-2 kubenswrapper[4776]: I1011 10:28:04.860198 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"
Oct 11 10:28:04.860250 master-2 kubenswrapper[4776]: I1011 10:28:04.860231 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"
Oct 11 10:28:04.860348 master-2 kubenswrapper[4776]: I1011 10:28:04.860264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.860512 master-2 kubenswrapper[4776]: I1011 10:28:04.860360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"
Oct 11 10:28:04.860512 master-2 kubenswrapper[4776]: I1011 10:28:04.860522 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.860616 master-2 kubenswrapper[4776]: I1011 10:28:04.860558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775"
Oct 11 10:28:04.860616 master-2 kubenswrapper[4776]: I1011 10:28:04.860586 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"
Oct 11 10:28:04.860667 master-2 kubenswrapper[4776]: I1011 10:28:04.860616 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.860667 master-2 kubenswrapper[4776]: I1011 10:28:04.860648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"
Oct 11 10:28:04.860888 master-2 kubenswrapper[4776]: I1011 10:28:04.860694 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"
Oct 11 10:28:04.860927 master-2 kubenswrapper[4776]: I1011 10:28:04.860899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"
Oct 11 10:28:04.860956 master-2 kubenswrapper[4776]: I1011 10:28:04.860940 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.861075 master-2 kubenswrapper[4776]: I1011 10:28:04.861055 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b"
Oct 11 10:28:04.861159 master-2 kubenswrapper[4776]: I1011 10:28:04.861138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:28:04.861252 master-2 kubenswrapper[4776]: I1011 10:28:04.861230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"
Oct 11 10:28:04.861358 master-2 kubenswrapper[4776]: I1011 10:28:04.861332 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"
Oct 11 10:28:04.861393 master-2 kubenswrapper[4776]: I1011 10:28:04.861375 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:04.861490 master-2 kubenswrapper[4776]: I1011 10:28:04.861422 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.861523 master-2 kubenswrapper[4776]: I1011 10:28:04.861500 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"
Oct 11 10:28:04.861523 master-2 kubenswrapper[4776]: I1011 10:28:04.861509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"
Oct 11 10:28:04.861814 master-2 kubenswrapper[4776]: I1011 10:28:04.861765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:04.861898 master-2 kubenswrapper[4776]: E1011 10:28:04.861784 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:04.861963 master-2 kubenswrapper[4776]: I1011 10:28:04.861902 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"
Oct 11 10:28:04.861963 master-2 kubenswrapper[4776]: E1011 10:28:04.861928 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.361910466 +0000 UTC m=+120.146337175 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:04.862024 master-2 kubenswrapper[4776]: I1011 10:28:04.861964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.862050 master-2 kubenswrapper[4776]: E1011 10:28:04.862018 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:04.862138 master-2 kubenswrapper[4776]: E1011 10:28:04.862110 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:04.862138 master-2 kubenswrapper[4776]: E1011 10:28:04.862129 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.362120362 +0000 UTC m=+120.146547071 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:04.862201 master-2 kubenswrapper[4776]: E1011 10:28:04.862172 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.362157683 +0000 UTC m=+120.146584392 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:04.862201 master-2 kubenswrapper[4776]: I1011 10:28:04.862110 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.862251 master-2 kubenswrapper[4776]: I1011 10:28:04.862221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"
Oct 11 10:28:04.862280 master-2 kubenswrapper[4776]: I1011 10:28:04.862263 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"
Oct 11 10:28:04.862319 master-2 kubenswrapper[4776]: I1011 10:28:04.862300 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:04.862363 master-2 kubenswrapper[4776]: I1011 10:28:04.862345 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.862423 master-2 kubenswrapper[4776]: I1011 10:28:04.862400 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.862480 master-2 kubenswrapper[4776]: I1011 10:28:04.862450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.862517 master-2 kubenswrapper[4776]: I1011 10:28:04.862444 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc"
Oct 11 10:28:04.862557 master-2 kubenswrapper[4776]: I1011 10:28:04.862539 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"
Oct 11 10:28:04.862588 master-2 kubenswrapper[4776]: I1011 10:28:04.862575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"
Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862605 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:04.862798 master-2 kubenswrapper[4776]: I1011 10:28:04.862706 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.862798 master-2 kubenswrapper[4776]: I1011 10:28:04.862724 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:04.862853 master-2 kubenswrapper[4776]: I1011 10:28:04.862825 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:28:04.862921 master-2 kubenswrapper[4776]: I1011 10:28:04.862895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.862957 master-2 kubenswrapper[4776]: I1011 10:28:04.862823 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:04.862984 master-2 kubenswrapper[4776]: I1011 10:28:04.862952 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"
Oct 11 10:28:04.862984 master-2 kubenswrapper[4776]: I1011 10:28:04.862975 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"
Oct 11 10:28:04.863286 master-2 kubenswrapper[4776]: I1011 10:28:04.863259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"
Oct 11 10:28:04.863503 master-2 kubenswrapper[4776]: I1011 10:28:04.863480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.863797 master-2 kubenswrapper[4776]: I1011 10:28:04.863765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"
Oct 11 10:28:04.863855 master-2 kubenswrapper[4776]: I1011 10:28:04.863789 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.864046 master-2 kubenswrapper[4776]: I1011 10:28:04.863999 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.864263 master-2 kubenswrapper[4776]: E1011 10:28:04.864237 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: I1011 10:28:04.864253 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"
Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: E1011 10:28:04.864291 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.364279834 +0000 UTC m=+120.148706543 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: I1011 10:28:04.864289 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"
Oct 11 10:28:04.864654 master-2 kubenswrapper[4776]: E1011 10:28:04.864561 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: I1011 10:28:04.864713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: E1011 10:28:04.864738 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.364718757 +0000 UTC m=+120.149145466 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found
Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: E1011 10:28:04.864805 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: I1011 10:28:04.864846 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.865021 master-2 kubenswrapper[4776]: I1011 10:28:04.864853 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"
Oct 11 10:28:04.865192 master-2 kubenswrapper[4776]: I1011 10:28:04.865169 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.865700 master-2 kubenswrapper[4776]: E1011 10:28:04.865635 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.36483781 +0000 UTC m=+120.149264569 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found
Oct 11 10:28:04.866510 master-2 kubenswrapper[4776]: I1011 10:28:04.865642 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"
Oct 11 10:28:04.866549 master-2 kubenswrapper[4776]: I1011 10:28:04.866259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"
Oct 11 10:28:04.866648 master-2 kubenswrapper[4776]: I1011 10:28:04.866563 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.866589 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl"
Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.867281 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.867307 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: I1011 10:28:04.867344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"
Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: I1011 10:28:04.867363 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"
Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: E1011 10:28:04.867366 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: E1011 10:28:04.867403 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.367393844 +0000 UTC m=+120.151820543 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found
Oct 11 10:28:04.867544 master-2 kubenswrapper[4776]: I1011 10:28:04.867459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"
Oct 11 10:28:04.867632 master-2 kubenswrapper[4776]: I1011 10:28:04.867608 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"
Oct 11 10:28:04.867892 master-2 kubenswrapper[4776]: I1011 10:28:04.867855 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.868097 master-2 kubenswrapper[4776]: I1011 10:28:04.868064 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.868097 master-2 kubenswrapper[4776]: I1011 10:28:04.868084 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.868166 master-2 kubenswrapper[4776]: I1011 10:28:04.868124 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.868325 master-2 kubenswrapper[4776]: I1011 10:28:04.868294 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.869992 master-2 kubenswrapper[4776]: I1011 10:28:04.869948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.879584 master-2 kubenswrapper[4776]: I1011 10:28:04.879547 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:28:04.899332 master-2 kubenswrapper[4776]: I1011 10:28:04.899303 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.913058 master-1 kubenswrapper[4771]: I1011 10:28:04.912949 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w5bt\" (UniqueName: \"kubernetes.io/projected/3346c1b6-593b-4224-802c-25e99e9893a8-kube-api-access-2w5bt\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.913205 master-1 kubenswrapper[4771]: I1011 10:28:04.913073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3346c1b6-593b-4224-802c-25e99e9893a8-iptables-alerter-script\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.913205 master-1 kubenswrapper[4771]: I1011 10:28:04.913114 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/3346c1b6-593b-4224-802c-25e99e9893a8-host-slash\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.913205 master-1 kubenswrapper[4771]: I1011 10:28:04.913193 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3346c1b6-593b-4224-802c-25e99e9893a8-host-slash\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.913929 master-1 kubenswrapper[4771]: I1011 10:28:04.913869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3346c1b6-593b-4224-802c-25e99e9893a8-iptables-alerter-script\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.919727 master-2 kubenswrapper[4776]: I1011 10:28:04.919669 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:28:04.935385 master-1 kubenswrapper[4771]: I1011 10:28:04.935284 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w5bt\" (UniqueName: \"kubernetes.io/projected/3346c1b6-593b-4224-802c-25e99e9893a8-kube-api-access-2w5bt\") pod \"iptables-alerter-t44c5\" (UID: \"3346c1b6-593b-4224-802c-25e99e9893a8\") " pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:04.957249 master-2 kubenswrapper[4776]: I1011 10:28:04.957183 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") 
" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.968293 master-2 kubenswrapper[4776]: I1011 10:28:04.968252 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968376 master-2 kubenswrapper[4776]: I1011 10:28:04.968304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968413 master-2 kubenswrapper[4776]: I1011 10:28:04.968390 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968554 master-2 kubenswrapper[4776]: I1011 10:28:04.968525 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.969128 master-2 kubenswrapper[4776]: I1011 10:28:04.969085 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod 
\"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.975031 master-2 kubenswrapper[4776]: I1011 10:28:04.974998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dptq\" (UniqueName: \"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.994184 master-2 kubenswrapper[4776]: I1011 10:28:04.993999 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:05.012937 master-2 kubenswrapper[4776]: I1011 10:28:05.012877 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlz8s\" (UniqueName: \"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:05.042251 master-2 kubenswrapper[4776]: I1011 10:28:05.042198 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:05.057916 master-2 kubenswrapper[4776]: 
I1011 10:28:05.057874 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:05.058040 master-2 kubenswrapper[4776]: I1011 10:28:05.057972 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:05.065162 master-2 kubenswrapper[4776]: I1011 10:28:05.065126 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29krx\" (UniqueName: \"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:05.084031 master-2 kubenswrapper[4776]: I1011 10:28:05.083964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:05.090300 master-2 kubenswrapper[4776]: I1011 10:28:05.090239 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:05.093533 master-2 kubenswrapper[4776]: I1011 10:28:05.093477 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:05.114635 master-2 kubenswrapper[4776]: I1011 10:28:05.114571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.126933 master-1 kubenswrapper[4771]: I1011 10:28:05.126790 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-t44c5" Oct 11 10:28:05.135058 master-2 kubenswrapper[4776]: I1011 10:28:05.134994 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:05.144611 master-2 kubenswrapper[4776]: I1011 10:28:05.144585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.150658 master-1 kubenswrapper[4771]: W1011 10:28:05.150586 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3346c1b6_593b_4224_802c_25e99e9893a8.slice/crio-47eb04e21cc8cca54dd2af469c2901eb6aaa71fb7ab0ddf3d031a884da55e0f6 WatchSource:0}: Error finding container 47eb04e21cc8cca54dd2af469c2901eb6aaa71fb7ab0ddf3d031a884da55e0f6: Status 404 returned error can't find the container with id 47eb04e21cc8cca54dd2af469c2901eb6aaa71fb7ab0ddf3d031a884da55e0f6 Oct 11 10:28:05.158446 master-2 kubenswrapper[4776]: I1011 10:28:05.158411 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:05.186598 master-2 kubenswrapper[4776]: I1011 10:28:05.186578 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:05.204529 master-2 kubenswrapper[4776]: I1011 10:28:05.204427 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:05.229900 master-2 kubenswrapper[4776]: I1011 10:28:05.229833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:05.237497 master-2 kubenswrapper[4776]: I1011 10:28:05.237420 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:05.257871 master-2 kubenswrapper[4776]: I1011 10:28:05.257823 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274741 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274788 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.274985 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275026 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.275013592 +0000 UTC m=+121.059440301 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275279 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275301 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.27529341 +0000 UTC m=+121.059720119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275330 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275346 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.275341811 +0000 UTC m=+121.059768520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.280263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.316595 master-2 kubenswrapper[4776]: I1011 10:28:05.316519 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"] Oct 11 10:28:05.317848 master-2 kubenswrapper[4776]: I1011 10:28:05.317640 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:05.319164 master-2 kubenswrapper[4776]: I1011 10:28:05.319117 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:05.329927 master-2 kubenswrapper[4776]: W1011 
10:28:05.329883 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode487f283_7482_463c_90b6_a812e00d0e35.slice/crio-682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07 WatchSource:0}: Error finding container 682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07: Status 404 returned error can't find the container with id 682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07 Oct 11 10:28:05.334311 master-2 kubenswrapper[4776]: I1011 10:28:05.334276 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:05.339085 master-2 kubenswrapper[4776]: I1011 10:28:05.339051 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"] Oct 11 10:28:05.341243 master-2 kubenswrapper[4776]: I1011 10:28:05.341198 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:05.344213 master-2 kubenswrapper[4776]: W1011 10:28:05.344170 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e70e9c_b1bd_4f28_911c_fc6ecfd2e8fc.slice/crio-af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c WatchSource:0}: Error finding container af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c: Status 404 returned error can't find the container with id af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c Oct 11 10:28:05.353649 master-2 kubenswrapper[4776]: I1011 10:28:05.353612 4776 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:05.359604 master-2 kubenswrapper[4776]: I1011 10:28:05.359565 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:05.359745 master-2 kubenswrapper[4776]: I1011 10:28:05.359661 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:05.372653 master-2 kubenswrapper[4776]: I1011 10:28:05.372601 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:05.375957 master-2 kubenswrapper[4776]: I1011 10:28:05.375894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:05.376035 master-2 kubenswrapper[4776]: I1011 10:28:05.375981 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:05.376077 master-2 
kubenswrapper[4776]: E1011 10:28:05.376037 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:05.376128 master-2 kubenswrapper[4776]: E1011 10:28:05.376095 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376076452 +0000 UTC m=+121.160503161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:05.376232 master-2 kubenswrapper[4776]: E1011 10:28:05.376190 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:05.376315 master-2 kubenswrapper[4776]: E1011 10:28:05.376284 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376258157 +0000 UTC m=+121.160684916 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:05.376420 master-2 kubenswrapper[4776]: E1011 10:28:05.376388 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:05.376462 master-2 kubenswrapper[4776]: E1011 10:28:05.376450 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376432022 +0000 UTC m=+121.160858821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:05.376513 master-2 kubenswrapper[4776]: I1011 10:28:05.376039 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:05.376559 master-2 kubenswrapper[4776]: I1011 10:28:05.376521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " 
pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:05.376605 master-2 kubenswrapper[4776]: I1011 10:28:05.376575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:05.376689 master-2 kubenswrapper[4776]: I1011 10:28:05.376639 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.376766 master-2 kubenswrapper[4776]: I1011 10:28:05.376737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.376817 master-2 kubenswrapper[4776]: I1011 10:28:05.376797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:05.376876 master-2 
kubenswrapper[4776]: I1011 10:28:05.376848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.376970 master-2 kubenswrapper[4776]: I1011 10:28:05.376940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:05.377011 master-2 kubenswrapper[4776]: E1011 10:28:05.376961 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:05.377011 master-2 kubenswrapper[4776]: I1011 10:28:05.376966 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:05.377091 master-2 kubenswrapper[4776]: E1011 10:28:05.377040 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377019899 +0000 UTC m=+121.161446648 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:05.377091 master-2 kubenswrapper[4776]: E1011 10:28:05.377063 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: I1011 10:28:05.377089 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377110 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377124 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377102431 +0000 UTC m=+121.161529190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377155 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377137312 +0000 UTC m=+121.161564021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377212 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377220 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod 
\"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377265 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377251166 +0000 UTC m=+121.161677905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377290 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377301 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377213 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377341 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377070 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377309 4776 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377303587 +0000 UTC m=+121.161730296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377410 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377415 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377468 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377450441 +0000 UTC m=+121.161877310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377498 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377482352 +0000 UTC m=+121.161909211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377532 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377513753 +0000 UTC m=+121.161940602 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377532 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377561 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377546404 +0000 UTC m=+121.161973283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377588 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377574625 +0000 UTC m=+121.162001374 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: I1011 10:28:05.377633 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377706 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377660587 +0000 UTC m=+121.162087296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377746 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377785 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:06.37777313 +0000 UTC m=+121.162199869 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: I1011 10:28:05.377784 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377832 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: I1011 10:28:05.377837 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377857 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377850883 +0000 UTC m=+121.162277592 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377913 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377970 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377950805 +0000 UTC m=+121.162377634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:05.403368 master-2 kubenswrapper[4776]: I1011 10:28:05.403302 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:05.405961 master-2 kubenswrapper[4776]: I1011 10:28:05.405913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:05.416783 master-2 kubenswrapper[4776]: I1011 10:28:05.416731 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:05.418440 master-2 kubenswrapper[4776]: I1011 10:28:05.418407 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f979h\" (UniqueName: \"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:05.443752 master-2 kubenswrapper[4776]: I1011 10:28:05.443114 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:05.449733 master-2 kubenswrapper[4776]: I1011 10:28:05.448401 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:05.449733 master-2 kubenswrapper[4776]: I1011 10:28:05.448994 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:05.453522 master-2 kubenswrapper[4776]: I1011 10:28:05.453485 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07"} Oct 11 10:28:05.454621 master-2 kubenswrapper[4776]: I1011 10:28:05.454591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c"} Oct 11 10:28:05.481107 master-2 kubenswrapper[4776]: I1011 10:28:05.480833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:05.499159 master-2 kubenswrapper[4776]: I1011 10:28:05.499115 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:05.519348 master-2 kubenswrapper[4776]: I1011 10:28:05.519257 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:05.541046 master-2 kubenswrapper[4776]: I1011 10:28:05.538940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:05.557466 master-2 kubenswrapper[4776]: I1011 10:28:05.557424 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"] Oct 11 10:28:05.558525 master-2 kubenswrapper[4776]: I1011 10:28:05.558494 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:05.565936 master-2 kubenswrapper[4776]: I1011 10:28:05.563194 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:05.573492 master-2 kubenswrapper[4776]: I1011 10:28:05.573447 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"] Oct 11 10:28:05.578954 master-2 kubenswrapper[4776]: I1011 10:28:05.578914 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"] Oct 11 10:28:05.587261 master-2 kubenswrapper[4776]: I1011 10:28:05.587113 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:05.595832 master-2 kubenswrapper[4776]: I1011 10:28:05.595792 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"] Oct 11 10:28:05.603905 master-2 kubenswrapper[4776]: I1011 10:28:05.601183 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:05.604612 master-2 kubenswrapper[4776]: W1011 10:28:05.604526 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8050d30_444b_40a5_829c_1e3b788910a0.slice/crio-a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69 WatchSource:0}: Error finding container a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69: Status 404 returned error can't find the container with id a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69 Oct 11 10:28:05.611965 master-2 kubenswrapper[4776]: I1011 10:28:05.611910 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"] Oct 11 10:28:05.614460 master-2 kubenswrapper[4776]: I1011 10:28:05.614405 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:05.617990 master-2 kubenswrapper[4776]: W1011 10:28:05.617958 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6967590c_695e_4e20_964b_0c643abdf367.slice/crio-18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356 WatchSource:0}: Error finding container 18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356: Status 404 returned error can't find the container with id 18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356 Oct 11 10:28:05.619057 master-2 kubenswrapper[4776]: I1011 10:28:05.618958 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: 
\"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.632590 master-2 kubenswrapper[4776]: I1011 10:28:05.632063 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"] Oct 11 10:28:05.635130 master-2 kubenswrapper[4776]: W1011 10:28:05.635086 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35883b8_6cf5_45d7_a4e3_02c0ac0d91e1.slice/crio-ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6 WatchSource:0}: Error finding container ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6: Status 404 returned error can't find the container with id ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6 Oct 11 10:28:05.637179 master-2 kubenswrapper[4776]: I1011 10:28:05.637139 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:05.643885 master-2 kubenswrapper[4776]: I1011 10:28:05.641892 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:05.654105 master-2 kubenswrapper[4776]: I1011 10:28:05.654063 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"] Oct 11 10:28:05.662342 master-2 kubenswrapper[4776]: I1011 10:28:05.662307 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.679755 master-2 kubenswrapper[4776]: I1011 10:28:05.679729 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:05.679827 master-2 kubenswrapper[4776]: W1011 10:28:05.679725 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e02bcb_b3fe_4a45_a531_4ab41d8ee424.slice/crio-f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29 WatchSource:0}: Error finding container f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29: Status 404 returned error can't find the container with id f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29 Oct 11 10:28:05.680703 master-2 kubenswrapper[4776]: I1011 10:28:05.680657 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:05.700317 master-2 kubenswrapper[4776]: I1011 10:28:05.700157 4776 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:28:05.710355 master-2 kubenswrapper[4776]: I1011 10:28:05.710317 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:05.720749 master-2 kubenswrapper[4776]: I1011 10:28:05.720714 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:28:05.722325 master-2 kubenswrapper[4776]: I1011 10:28:05.722288 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"] Oct 11 10:28:05.772777 master-2 kubenswrapper[4776]: I1011 10:28:05.772552 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:05.782206 master-2 kubenswrapper[4776]: I1011 10:28:05.782033 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"] Oct 11 10:28:05.790598 master-2 kubenswrapper[4776]: W1011 10:28:05.790539 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8757af56_20fb_439e_adba_7e4e50378936.slice/crio-a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f WatchSource:0}: Error finding container a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f: Status 404 returned error can't find the container with id a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f Oct 11 10:28:05.798259 master-2 kubenswrapper[4776]: W1011 10:28:05.798000 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58aef476_6586_47bb_bf45_dbeccac6271a.slice/crio-ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c WatchSource:0}: Error finding container ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c: Status 404 returned error can't find the container with id ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c Oct 11 10:28:05.814746 master-2 kubenswrapper[4776]: I1011 10:28:05.814707 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"] Oct 11 10:28:05.818235 master-2 kubenswrapper[4776]: I1011 10:28:05.818179 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:05.822469 master-2 kubenswrapper[4776]: E1011 10:28:05.822314 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cluster-storage-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32,Command:[cluster-storage-operator start],Args:[start -v=2 --terminate-on-files=/var/run/secrets/serving-cert/tls.crt 
--terminate-on-files=/var/run/secrets/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf90c10ec9d9171d5bd25b66abd13d8b5b9d2b6d760915c2340267349dd52b30,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de96b1f387e4519341ed1c1716ce281855ff8cdb3c16ef5b2679cdc9f7750ced,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d43bfe6638266a6703a47d5be6c2452bd2d8cc3acf29cbf3888849124b4869,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a28202489fb0a4ba57fdec457474da3dd3cecf14755740b3cf67928b4ee939a,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b7152d9afe01f6a4478bc99b44325fe5a9490009fd26e805c12a32c5494a6c56,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7b6f05c6c4268a757c602999ab17f19d4c736be8fb245e16edcc2299408a599,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d796f981ddcc15d0e399f72991eef54933ac737c38323f66a4f4b5f2719c836,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56269a150f4fa9a1befa2322cbff6987fab9d057c47ce9e22e349c57ed9ada5,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:163e4a4b11c0e88deac21852e7faecb57330d887d4022a4a205b3b426b9d8ab8,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_IMAGE,Value:quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e6cee6f9e952daa541bb07a0d913da6c0b910526d679bc6e57f22b253651e933,ValueFrom:nil,},EnvVar{Name:MANILA_NFS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:875ca6816a8aa81b837afffa42cc10710abe9c94edd7e90cfef0723aa9a9c3a9,ValueFrom:nil,},EnvVar{Name:PROVISIONER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259bcf080eb040af978ae5cb6b9ecdb23cb30070a46dc2e9eaad8f39dd0ea3b4,ValueFrom:nil,},EnvVar{Name:ATTACHER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d9a92fcbd55d858a29d71a1c3de84e9e54fcda14b133d77927dfa5e481cd26c,ValueFrom:nil,},EnvVar{Name:RESIZER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9631ca611a761ba23809c6f0b4286d3507df81057ebf7c3a316a780dd3a238f5,ValueFrom:nil,},EnvVar{Name:SNAPSHOTTER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32ba38ea67c3cc901f2168fd8741d04c38d41eebe99a13aab0e308f7a3d19e2d,ValueFrom:nil,},EnvVar{Name:NODE_DRIVER_REGISTRAR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02a63915677f55be53239c2d39112a24c8fb76569418b4cf143784d7d4978a98,ValueFrom:nil,},EnvVar{Name:LIVENESS_PROBE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14f4c10817737e35e427b12960465d4925250a3216e20cd9c8703e384d82072a,ValueFrom:nil,},EnvVar{Name:VSPHERE_PROBLEM_DETECTOR_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a18b9f542d7e3c308e85f1ddd9ab16f597fa8bd8179fae50ebce6e76c368bae,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b6ee9244784011ccc9b003e3fef287e1e8fe841ee84cefff3064e627a8bc102,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:580436c37851ccb8511ad0bd03f4e975e899cbafa4ab82e8285a0e7a968a94be,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cf7fe3e89df66f3f
14669abc5000a0c57f8a8108fbf2fdfdabd5a2385fa1183,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:38eba09d0099585d4d4318e444a2ad827087b415232b3ae5351741422bcea2fc,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c349bac6253b33ef299566d78e22aea174c8431cae2591d56ba22d389c01bc5,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:601a001f8e7ac290cab0eede5fff7fbd23100bc92c2e8068c7e1dfa85cbc8c00,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_SYNCER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dd0132eff8900273dbfe86c5be49afd8101cefcde67bdc4ad586b02a8caf342,ValueFrom:nil,},EnvVar{Name:CLUSTER_CLOUD_CONTROLLER_MANAGER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f661f985884d888372169daa7122c637b35de7f146de29a1ecc3e45007d2a0d5,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72a42f061ebbf86c63dc5750d1a4d9292299fb986837cb148214de1afbc3e5d4,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:46be8d78844b327070148dc5381af89dda5c2e3994b93e7b7e82bdec70e8916d,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53cfbbddd9b4b73dc46c7c16b4b01211e2d04f2ddad16607baf5c7e08e3c9190,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b
2f612a166859beeffd90c78a8dfe0dc0721ffe5e0bc9b7a6d1ee155e0a39830,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cluster-storage-operator-serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vc8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-storage-operator-56d4b95494-9fbb2_openshift-cluster-storage-operator(e540333c-4b4d-439e-a82a-cd3a97c95a43): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.823853 master-2 kubenswrapper[4776]: E1011 10:28:05.823821 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:05.830001 master-2 kubenswrapper[4776]: W1011 10:28:05.829955 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ca0678_0b0d_4d5d_bc50_a0a098301f38.slice/crio-aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716 WatchSource:0}: Error finding container aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716: Status 404 returned error can't find the container with id aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716 Oct 11 10:28:05.832519 master-2 kubenswrapper[4776]: E1011 10:28:05.832473 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lznwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5mn8b_openshift-network-operator(18ca0678-0b0d-4d5d-bc50-a0a098301f38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.833849 master-2 kubenswrapper[4776]: E1011 10:28:05.833807 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-network-operator/iptables-alerter-5mn8b" podUID="18ca0678-0b0d-4d5d-bc50-a0a098301f38" Oct 11 10:28:05.844004 master-2 kubenswrapper[4776]: I1011 10:28:05.843966 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"] Oct 11 10:28:05.850526 master-2 kubenswrapper[4776]: W1011 10:28:05.850473 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59763d5b_237f_4095_bf52_86bb0154381c.slice/crio-81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328 WatchSource:0}: Error finding container 81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328: Status 404 returned error can't find the container with id 81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328 Oct 11 10:28:05.854113 master-2 kubenswrapper[4776]: E1011 10:28:05.854049 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:insights-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce,Command:[],Args:[start --config=/etc/insights-operator/server.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELEASE_VERSION,Value:4.18.25,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{56623104 0} {} 54Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:snapshots,ReadOnly:false,MountPath:/var/lib/insights-operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwznd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000270000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod insights-operator-7dcf5bd85b-6c2rl_openshift-insights(59763d5b-237f-4095-bf52-86bb0154381c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.855266 master-2 kubenswrapper[4776]: E1011 10:28:05.855228 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" 
podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:05.887079 master-2 kubenswrapper[4776]: I1011 10:28:05.887002 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"] Oct 11 10:28:05.893692 master-2 kubenswrapper[4776]: W1011 10:28:05.893642 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88129ec6_6f99_42a1_842a_6a965c6b58fe.slice/crio-effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38 WatchSource:0}: Error finding container effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38: Status 404 returned error can't find the container with id effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38 Oct 11 10:28:05.895933 master-2 kubenswrapper[4776]: E1011 10:28:05.895862 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b,Command:[cluster-openshift-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4w627,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-5745565d84-bq4rs_openshift-controller-manager-operator(88129ec6-6f99-42a1-842a-6a965c6b58fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.897293 master-2 kubenswrapper[4776]: E1011 10:28:05.897247 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:05.921378 master-1 kubenswrapper[4771]: I1011 10:28:05.921274 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-t44c5" event={"ID":"3346c1b6-593b-4224-802c-25e99e9893a8","Type":"ContainerStarted","Data":"47eb04e21cc8cca54dd2af469c2901eb6aaa71fb7ab0ddf3d031a884da55e0f6"} Oct 11 10:28:06.286483 master-2 kubenswrapper[4776]: I1011 10:28:06.286420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:06.286483 master-2 kubenswrapper[4776]: I1011 10:28:06.286466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 
11 10:28:06.286798 master-2 kubenswrapper[4776]: I1011 10:28:06.286502 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:06.286798 master-2 kubenswrapper[4776]: E1011 10:28:06.286703 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:06.286798 master-2 kubenswrapper[4776]: E1011 10:28:06.286747 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.286733415 +0000 UTC m=+123.071160124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:06.287091 master-2 kubenswrapper[4776]: E1011 10:28:06.287070 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:06.287144 master-2 kubenswrapper[4776]: E1011 10:28:06.287098 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.287091295 +0000 UTC m=+123.071518004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:06.287144 master-2 kubenswrapper[4776]: E1011 10:28:06.287133 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:06.287239 master-2 kubenswrapper[4776]: E1011 10:28:06.287153 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.287144587 +0000 UTC m=+123.071571296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:06.388087 master-2 kubenswrapper[4776]: I1011 10:28:06.388042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388095 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod 
\"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388233 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388259 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:06.388435 master-2 
kubenswrapper[4776]: I1011 10:28:06.388286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388323 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388342 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388475 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388493 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388513 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: 
\"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:06.388701 master-2 kubenswrapper[4776]: I1011 10:28:06.388546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:06.388820 master-2 kubenswrapper[4776]: E1011 10:28:06.388772 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:06.388860 master-2 kubenswrapper[4776]: E1011 10:28:06.388834 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.388817314 +0000 UTC m=+123.173244023 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:06.388953 master-2 kubenswrapper[4776]: E1011 10:28:06.388882 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:06.388953 master-2 kubenswrapper[4776]: E1011 10:28:06.388906 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.388953 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.388976 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.388948288 +0000 UTC m=+123.173374997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.389004 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:08.388993309 +0000 UTC m=+123.173420088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.38901273 +0000 UTC m=+123.173439519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389027 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389073 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389052891 +0000 UTC m=+123.173479650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389077 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389114 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389102102 +0000 UTC m=+123.173528891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389075 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389130 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389153 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:08.389144623 +0000 UTC m=+123.173571422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389169 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389160384 +0000 UTC m=+123.173587163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389173 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389205 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389197015 +0000 UTC m=+123.173623834 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389209 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389122 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389238 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389228126 +0000 UTC m=+123.173654915 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389250 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389253 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389246626 +0000 UTC m=+123.173673445 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389260 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389211 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389215 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389033 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389276 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389269417 +0000 UTC m=+123.173696126 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389414 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389404532 +0000 UTC m=+123.173831241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389424 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389418952 +0000 UTC m=+123.173845661 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389435 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389430162 +0000 UTC m=+123.173856871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389446 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389440643 +0000 UTC m=+123.173867352 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389497 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:06.390006 master-2 kubenswrapper[4776]: E1011 10:28:06.389544 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389537335 +0000 UTC m=+123.173964044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:06.436619 master-1 kubenswrapper[4771]: I1011 10:28:06.436479 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:06.436921 master-1 kubenswrapper[4771]: I1011 10:28:06.436775 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:28:06.439236 master-1 kubenswrapper[4771]: I1011 10:28:06.439172 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:28:06.440006 master-1 kubenswrapper[4771]: I1011 10:28:06.439929 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:28:06.440268 master-1 kubenswrapper[4771]: I1011 10:28:06.440197 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:06.459388 master-2 kubenswrapper[4776]: I1011 10:28:06.459257 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"5cfec723866b812f77a49c915767786d108ed192c33a71d16a214dbbfd2a0d46"} Oct 11 10:28:06.460638 master-2 kubenswrapper[4776]: I1011 10:28:06.460607 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c"} Oct 11 10:28:06.461565 master-2 kubenswrapper[4776]: I1011 10:28:06.461543 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"f26c0ef0c38264941d187bbda410fe98086c119977b5c40be0952dd4d38735f9"} Oct 11 10:28:06.463527 master-2 kubenswrapper[4776]: I1011 10:28:06.463480 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" 
event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38"} Oct 11 10:28:06.465413 master-2 kubenswrapper[4776]: E1011 10:28:06.464893 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:06.465413 master-2 kubenswrapper[4776]: I1011 10:28:06.465192 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356"} Oct 11 10:28:06.466199 master-2 kubenswrapper[4776]: I1011 10:28:06.466167 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"8927aa5a212997b84e6c2aa15861cb3f5032bda0e77b5b5d1174cff70042e0fe"} Oct 11 10:28:06.467096 master-2 kubenswrapper[4776]: I1011 10:28:06.467077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5mn8b" event={"ID":"18ca0678-0b0d-4d5d-bc50-a0a098301f38","Type":"ContainerStarted","Data":"aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716"} Oct 11 10:28:06.468043 master-2 kubenswrapper[4776]: E1011 10:28:06.468016 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a\\\"\"" pod="openshift-network-operator/iptables-alerter-5mn8b" podUID="18ca0678-0b0d-4d5d-bc50-a0a098301f38" Oct 11 10:28:06.468295 master-2 kubenswrapper[4776]: I1011 10:28:06.468252 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29"} Oct 11 10:28:06.469045 master-2 kubenswrapper[4776]: I1011 10:28:06.469015 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"51185f5a5ef8be51d0b9fb54a45d8490768ada5cc0c176fc8916c38ad3293b36"} Oct 11 10:28:06.470261 master-2 kubenswrapper[4776]: I1011 10:28:06.470207 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerStarted","Data":"a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f"} Oct 11 10:28:06.470390 master-2 kubenswrapper[4776]: E1011 10:28:06.470361 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:06.471820 master-2 kubenswrapper[4776]: I1011 10:28:06.471796 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6"} Oct 11 10:28:06.473583 master-2 kubenswrapper[4776]: I1011 10:28:06.473561 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" event={"ID":"a0b806b9-13ff-45fa-afba-5d0c89eac7df","Type":"ContainerStarted","Data":"efd656a1d8792a9b72e0b29d7f3bda39220cfc02fe075faa984b3373ff02bcd7"} Oct 11 10:28:06.474777 master-2 kubenswrapper[4776]: I1011 10:28:06.474756 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69"} Oct 11 10:28:06.475627 master-2 kubenswrapper[4776]: I1011 10:28:06.475606 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328"} Oct 11 10:28:06.476651 master-2 kubenswrapper[4776]: E1011 10:28:06.476626 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478887 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478893 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478929 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:08.312323 master-2 kubenswrapper[4776]: I1011 10:28:08.312214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312419 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: I1011 10:28:08.312494 
4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312515 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312492428 +0000 UTC m=+127.096919137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: I1011 10:28:08.312535 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312582 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312609 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 
10:28:08.312616 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312605921 +0000 UTC m=+127.097032640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312633 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312627171 +0000 UTC m=+127.097053880 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found
Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"
Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413617 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775"
Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"
Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413771 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413738 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413850 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413863 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413902 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413911 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413872 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.413851496 +0000 UTC m=+127.198278205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413776 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413979 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413991 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.414062 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.414009 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.413782 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414037 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414020091 +0000 UTC m=+127.198446860 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414137 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414126804 +0000 UTC m=+127.198553513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414149 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414142354 +0000 UTC m=+127.198569063 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414161 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414156015 +0000 UTC m=+127.198582724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414173 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414168835 +0000 UTC m=+127.198595544 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414183 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414178665 +0000 UTC m=+127.198605374 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414197 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414190326 +0000 UTC m=+127.198617025 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414226 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414253 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414287 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414277118 +0000 UTC m=+127.198703887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414293 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414331 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414296139 +0000 UTC m=+127.198722838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414362 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414431 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414469 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414472 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414440913 +0000 UTC m=+127.198867632 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414495 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414502 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414495464 +0000 UTC m=+127.198922173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414516 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414510035 +0000 UTC m=+127.198936744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414559 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414549516 +0000 UTC m=+127.198976325 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414561 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414580 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414598 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414588477 +0000 UTC m=+127.199015246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414613 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414625 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414654 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414646019 +0000 UTC m=+127.199072788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found
Oct 11 10:28:08.416275 master-2 kubenswrapper[4776]: E1011 10:28:08.414729 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Oct 11 10:28:08.416275 master-2 kubenswrapper[4776]: E1011 10:28:08.414768 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414756773 +0000 UTC m=+127.199183562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found
Oct 11 10:28:10.437425 master-1 kubenswrapper[4771]: I1011 10:28:10.437296 4771 scope.go:117] "RemoveContainer" containerID="d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829"
Oct 11 10:28:10.940980 master-1 kubenswrapper[4771]: I1011 10:28:10.940768 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/3.log"
Oct 11 10:28:10.942080 master-1 kubenswrapper[4771]: I1011 10:28:10.942019 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/2.log"
Oct 11 10:28:10.943104 master-1 kubenswrapper[4771]: I1011 10:28:10.943050 4771 generic.go:334] "Generic (PLEG): container finished" podID="e115f8be-9e65-4407-8111-568e5ea8ac1b" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d" exitCode=1
Oct 11 10:28:10.943164 master-1 kubenswrapper[4771]: I1011 10:28:10.943116 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerDied","Data":"793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d"}
Oct 11 10:28:10.943209 master-1 kubenswrapper[4771]: I1011 10:28:10.943189 4771 scope.go:117] "RemoveContainer" containerID="d6f0bf05ac57d47238297705efd6175b4b0b48e0ab73a222acdf287379d27829"
Oct 11 10:28:10.944000 master-1 kubenswrapper[4771]: I1011 10:28:10.943938 4771 scope.go:117] "RemoveContainer" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d"
Oct 11 10:28:10.944234 master-1 kubenswrapper[4771]: E1011 10:28:10.944184 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b"
Oct 11 10:28:11.791780 master-1 kubenswrapper[4771]: I1011 10:28:11.791696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v"
Oct 11 10:28:11.793653 master-1 kubenswrapper[4771]: I1011 10:28:11.793596 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v"
Oct 11 10:28:11.811569 master-1 kubenswrapper[4771]: I1011 10:28:11.811487 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9l4v"
Oct 11 10:28:11.949312 master-1 kubenswrapper[4771]: I1011 10:28:11.949206 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/3.log"
Oct 11 10:28:12.369259 master-2 kubenswrapper[4776]: I1011 10:28:12.369080 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:12.369259 master-2 kubenswrapper[4776]: I1011 10:28:12.369217 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: I1011 10:28:12.369314 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369739 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369836 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369801569 +0000 UTC m=+135.154228318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369860 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369899 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369925 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369906662 +0000 UTC m=+135.154333381 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found
Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369955 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369937683 +0000 UTC m=+135.154364412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found
Oct 11 10:28:12.470567 master-2 kubenswrapper[4776]: I1011 10:28:12.470457 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"
Oct 11 10:28:12.470567 master-2 kubenswrapper[4776]: I1011 10:28:12.470578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470631 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470743 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470796 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: E1011 10:28:12.470747 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470866 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: E1011 10:28:12.470903 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.470945 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.47091375 +0000 UTC m=+135.255340479 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471053 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471115 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471138 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471114226 +0000 UTC m=+135.255540975 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471050 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.470820 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471050 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: I1011 10:28:12.471139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471192 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471160008 +0000 UTC m=+135.255586737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471238 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471355 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471310363 +0000 UTC m=+135.255737262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found
Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471385 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471371524 +0000 UTC m=+135.255798463 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471409 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471399395 +0000 UTC m=+135.255826324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471437 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471421256 +0000 UTC m=+135.255848185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471490 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471539 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471597 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471631 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics 
podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471611551 +0000 UTC m=+135.256038410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471730 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471740 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471744 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471788 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471772406 +0000 UTC m=+135.256199305 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471814 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471835 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471821197 +0000 UTC m=+135.256248096 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471835 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471868 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471933 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471812 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471883 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471864478 +0000 UTC m=+135.256291227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.472059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.472093 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472064844 +0000 UTC m=+135.256491723 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.472125 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472115636 +0000 UTC m=+135.256542575 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472149 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472161 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472147976 +0000 UTC m=+135.256574925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: I1011 10:28:12.472203 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472215 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472197058 +0000 UTC m=+135.256623927 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: I1011 10:28:12.472245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472364 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472417 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472455 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472440335 +0000 UTC m=+135.256867174 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472489 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472475706 +0000 UTC m=+135.256902625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:13.130027 master-2 kubenswrapper[4776]: I1011 10:28:13.129971 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:13.959198 master-1 kubenswrapper[4771]: I1011 10:28:13.959070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-t44c5" event={"ID":"3346c1b6-593b-4224-802c-25e99e9893a8","Type":"ContainerStarted","Data":"80ee69bb8bb9ee41ed409fb0049d311eb9b31c6f3c980d9975c6a0d160195a6d"} Oct 11 10:28:13.977490 master-1 kubenswrapper[4771]: I1011 10:28:13.977331 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-t44c5" podStartSLOduration=5.91420538 podStartE2EDuration="9.977306523s" podCreationTimestamp="2025-10-11 10:28:04 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.15416437 +0000 UTC m=+117.128390851" lastFinishedPulling="2025-10-11 10:28:09.217265513 +0000 UTC m=+121.191491994" 
observedRunningTime="2025-10-11 10:28:13.975242798 +0000 UTC m=+125.949469279" watchObservedRunningTime="2025-10-11 10:28:13.977306523 +0000 UTC m=+125.951533004" Oct 11 10:28:17.254614 master-2 kubenswrapper[4776]: I1011 10:28:17.254304 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Oct 11 10:28:17.266327 master-2 kubenswrapper[4776]: I1011 10:28:17.266103 4776 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Oct 11 10:28:17.508983 master-2 kubenswrapper[4776]: I1011 10:28:17.508934 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerDied","Data":"e90f7250992a43c127322ebe6a88091226718110bb2803a9ad4004b18fa488dd"} Oct 11 10:28:17.509383 master-2 kubenswrapper[4776]: I1011 10:28:17.508656 4776 generic.go:334] "Generic (PLEG): container finished" podID="9d362fb9-48e4-4d72-a940-ec6c9c051fac" containerID="e90f7250992a43c127322ebe6a88091226718110bb2803a9ad4004b18fa488dd" exitCode=0 Oct 11 10:28:17.511532 master-2 kubenswrapper[4776]: I1011 10:28:17.511506 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9"} Oct 11 10:28:17.513994 master-2 kubenswrapper[4776]: I1011 10:28:17.513716 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" event={"ID":"a0b806b9-13ff-45fa-afba-5d0c89eac7df","Type":"ContainerStarted","Data":"ef0b776ca5352b516fbbf8012bd62838aed8c9c935aab5fafdd14b5c301abac5"} Oct 11 10:28:17.515651 master-2 kubenswrapper[4776]: I1011 
10:28:17.515627 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6"} Oct 11 10:28:17.518375 master-2 kubenswrapper[4776]: I1011 10:28:17.518337 4776 generic.go:334] "Generic (PLEG): container finished" podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="cd8822d95b19957043a12128b0929e8211cff636608b79c99c54fc322091c398" exitCode=0 Oct 11 10:28:17.518440 master-2 kubenswrapper[4776]: I1011 10:28:17.518379 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"cd8822d95b19957043a12128b0929e8211cff636608b79c99c54fc322091c398"} Oct 11 10:28:17.520688 master-2 kubenswrapper[4776]: I1011 10:28:17.520657 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b"} Oct 11 10:28:17.522164 master-2 kubenswrapper[4776]: I1011 10:28:17.522143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9"} Oct 11 10:28:17.547485 master-2 kubenswrapper[4776]: I1011 10:28:17.547398 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" podStartSLOduration=69.923064333 podStartE2EDuration="1m21.54737089s" podCreationTimestamp="2025-10-11 
10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.592536635 +0000 UTC m=+120.376963344" lastFinishedPulling="2025-10-11 10:28:17.216843192 +0000 UTC m=+132.001269901" observedRunningTime="2025-10-11 10:28:17.544261629 +0000 UTC m=+132.328688338" watchObservedRunningTime="2025-10-11 10:28:17.54737089 +0000 UTC m=+132.331797619" Oct 11 10:28:17.581367 master-2 kubenswrapper[4776]: I1011 10:28:17.581275 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" podStartSLOduration=68.964024381 podStartE2EDuration="1m20.581253285s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.608591887 +0000 UTC m=+120.393018596" lastFinishedPulling="2025-10-11 10:28:17.225820791 +0000 UTC m=+132.010247500" observedRunningTime="2025-10-11 10:28:17.580132932 +0000 UTC m=+132.364559631" watchObservedRunningTime="2025-10-11 10:28:17.581253285 +0000 UTC m=+132.365679984" Oct 11 10:28:17.618464 master-2 kubenswrapper[4776]: I1011 10:28:17.618037 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" podStartSLOduration=99.990421182 podStartE2EDuration="1m51.618015354s" podCreationTimestamp="2025-10-11 10:26:26 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.593329217 +0000 UTC m=+120.377755926" lastFinishedPulling="2025-10-11 10:28:17.220923379 +0000 UTC m=+132.005350098" observedRunningTime="2025-10-11 10:28:17.5967199 +0000 UTC m=+132.381146619" watchObservedRunningTime="2025-10-11 10:28:17.618015354 +0000 UTC m=+132.402442073" Oct 11 10:28:17.633201 master-2 kubenswrapper[4776]: I1011 10:28:17.633137 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" podStartSLOduration=69.099223355 
podStartE2EDuration="1m20.633117988s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.683995339 +0000 UTC m=+120.468422048" lastFinishedPulling="2025-10-11 10:28:17.217889932 +0000 UTC m=+132.002316681" observedRunningTime="2025-10-11 10:28:17.631736079 +0000 UTC m=+132.416162788" watchObservedRunningTime="2025-10-11 10:28:17.633117988 +0000 UTC m=+132.417544697" Oct 11 10:28:17.654140 master-2 kubenswrapper[4776]: I1011 10:28:17.654067 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" podStartSLOduration=68.167568224 podStartE2EDuration="1m19.654044331s" podCreationTimestamp="2025-10-11 10:26:58 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.731883138 +0000 UTC m=+120.516309837" lastFinishedPulling="2025-10-11 10:28:17.218359235 +0000 UTC m=+132.002785944" observedRunningTime="2025-10-11 10:28:17.654022851 +0000 UTC m=+132.438449560" watchObservedRunningTime="2025-10-11 10:28:17.654044331 +0000 UTC m=+132.438471040" Oct 11 10:28:17.845649 master-2 kubenswrapper[4776]: I1011 10:28:17.845524 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:17.849875 master-2 kubenswrapper[4776]: I1011 10:28:17.849824 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:17.856310 master-2 kubenswrapper[4776]: E1011 10:28:17.856259 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Oct 11 10:28:17.856399 master-2 kubenswrapper[4776]: E1011 10:28:17.856351 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:29:21.856327346 +0000 UTC m=+196.640754055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : secret "metrics-daemon-secret" not found Oct 11 10:28:17.927932 master-1 kubenswrapper[4771]: I1011 10:28:17.927814 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:28:17.931209 master-1 kubenswrapper[4771]: I1011 10:28:17.931150 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:17.938899 master-1 kubenswrapper[4771]: E1011 10:28:17.938832 4771 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Oct 11 10:28:17.938899 master-1 kubenswrapper[4771]: E1011 10:28:17.938895 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs podName:2c084572-a5c9-4787-8a14-b7d6b0810a1b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:21.938877361 +0000 UTC m=+193.913103812 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs") pod "network-metrics-daemon-fgjvw" (UID: "2c084572-a5c9-4787-8a14-b7d6b0810a1b") : secret "metrics-daemon-secret" not found
Oct 11 10:28:18.041803 master-1 kubenswrapper[4771]: I1011 10:28:18.041680 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/master-1-debug-vwkqm"]
Oct 11 10:28:18.042110 master-1 kubenswrapper[4771]: I1011 10:28:18.042042 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.045041 master-1 kubenswrapper[4771]: I1011 10:28:18.044986 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Oct 11 10:28:18.045897 master-1 kubenswrapper[4771]: I1011 10:28:18.045861 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Oct 11 10:28:18.081715 master-2 kubenswrapper[4776]: I1011 10:28:18.081663 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/master-2-debug-gpmgw"]
Oct 11 10:28:18.085691 master-2 kubenswrapper[4776]: I1011 10:28:18.082064 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.129637 master-1 kubenswrapper[4771]: I1011 10:28:18.129562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254q5\" (UniqueName: \"kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.129909 master-1 kubenswrapper[4771]: I1011 10:28:18.129633 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.148571 master-2 kubenswrapper[4776]: I1011 10:28:18.148460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.148779 master-2 kubenswrapper[4776]: I1011 10:28:18.148755 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.231112 master-1 kubenswrapper[4771]: I1011 10:28:18.230935 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254q5\" (UniqueName: \"kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.231112 master-1 kubenswrapper[4771]: I1011 10:28:18.230992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.231112 master-1 kubenswrapper[4771]: I1011 10:28:18.231074 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.250390 master-2 kubenswrapper[4776]: I1011 10:28:18.250014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.250390 master-2 kubenswrapper[4776]: I1011 10:28:18.250398 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.250805 master-2 kubenswrapper[4776]: I1011 10:28:18.250746 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.253972 master-1 kubenswrapper[4771]: I1011 10:28:18.253817 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254q5\" (UniqueName: \"kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5\") pod \"master-1-debug-vwkqm\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.278071 master-2 kubenswrapper[4776]: I1011 10:28:18.278019 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.281740 master-1 kubenswrapper[4771]: I1011 10:28:18.281654 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"]
Oct 11 10:28:18.282240 master-1 kubenswrapper[4771]: I1011 10:28:18.282199 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"
Oct 11 10:28:18.284674 master-1 kubenswrapper[4771]: I1011 10:28:18.284640 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Oct 11 10:28:18.284942 master-1 kubenswrapper[4771]: I1011 10:28:18.284903 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Oct 11 10:28:18.291339 master-1 kubenswrapper[4771]: I1011 10:28:18.291285 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"]
Oct 11 10:28:18.331769 master-1 kubenswrapper[4771]: I1011 10:28:18.331714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrs8\" (UniqueName: \"kubernetes.io/projected/b62583d2-1a1e-44fd-871e-1c48e3cb1732-kube-api-access-lvrs8\") pod \"migrator-d8c4d9469-bxq92\" (UID: \"b62583d2-1a1e-44fd-871e-1c48e3cb1732\") " pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"
Oct 11 10:28:18.375139 master-1 kubenswrapper[4771]: I1011 10:28:18.375053 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:18.394024 master-1 kubenswrapper[4771]: W1011 10:28:18.393955 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5877eb5_69db_40eb_af0c_096d52c3fc4d.slice/crio-78e52134fc9902f7609237c0f679b9c5002f127b8ab80d9c3907b1e5458d8504 WatchSource:0}: Error finding container 78e52134fc9902f7609237c0f679b9c5002f127b8ab80d9c3907b1e5458d8504: Status 404 returned error can't find the container with id 78e52134fc9902f7609237c0f679b9c5002f127b8ab80d9c3907b1e5458d8504
Oct 11 10:28:18.398742 master-2 kubenswrapper[4776]: I1011 10:28:18.398113 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw"
Oct 11 10:28:18.418371 master-2 kubenswrapper[4776]: W1011 10:28:18.418281 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b839f3_9031_49c2_87a5_630975c7e14c.slice/crio-ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59 WatchSource:0}: Error finding container ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59: Status 404 returned error can't find the container with id ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59
Oct 11 10:28:18.432316 master-1 kubenswrapper[4771]: I1011 10:28:18.432250 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrs8\" (UniqueName: \"kubernetes.io/projected/b62583d2-1a1e-44fd-871e-1c48e3cb1732-kube-api-access-lvrs8\") pod \"migrator-d8c4d9469-bxq92\" (UID: \"b62583d2-1a1e-44fd-871e-1c48e3cb1732\") " pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"
Oct 11 10:28:18.455760 master-1 kubenswrapper[4771]: I1011 10:28:18.455668 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrs8\" (UniqueName: \"kubernetes.io/projected/b62583d2-1a1e-44fd-871e-1c48e3cb1732-kube-api-access-lvrs8\") pod \"migrator-d8c4d9469-bxq92\" (UID: \"b62583d2-1a1e-44fd-871e-1c48e3cb1732\") " pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"
Oct 11 10:28:18.530989 master-2 kubenswrapper[4776]: I1011 10:28:18.530745 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828"}
Oct 11 10:28:18.532590 master-2 kubenswrapper[4776]: I1011 10:28:18.532546 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-2-debug-gpmgw" event={"ID":"02b839f3-9031-49c2-87a5-630975c7e14c","Type":"ContainerStarted","Data":"ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59"}
Oct 11 10:28:18.534145 master-2 kubenswrapper[4776]: I1011 10:28:18.533637 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485"}
Oct 11 10:28:18.536241 master-2 kubenswrapper[4776]: I1011 10:28:18.536217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464"}
Oct 11 10:28:18.537589 master-2 kubenswrapper[4776]: I1011 10:28:18.537564 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerStarted","Data":"25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae"}
Oct 11 10:28:18.539169 master-2 kubenswrapper[4776]: I1011 10:28:18.539134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c"}
Oct 11 10:28:18.545444 master-2 kubenswrapper[4776]: I1011 10:28:18.545386 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" podStartSLOduration=97.965610083 podStartE2EDuration="1m49.545375898s" podCreationTimestamp="2025-10-11 10:26:29 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.620222642 +0000 UTC m=+120.404649341" lastFinishedPulling="2025-10-11 10:28:17.199988447 +0000 UTC m=+131.984415156" observedRunningTime="2025-10-11 10:28:18.54375117 +0000 UTC m=+133.328177879" watchObservedRunningTime="2025-10-11 10:28:18.545375898 +0000 UTC m=+133.329802607"
Oct 11 10:28:18.555873 master-2 kubenswrapper[4776]: I1011 10:28:18.555807 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" podStartSLOduration=73.704323249 podStartE2EDuration="1m25.555793298s" podCreationTimestamp="2025-10-11 10:26:53 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.331738565 +0000 UTC m=+120.116165274" lastFinishedPulling="2025-10-11 10:28:17.183208614 +0000 UTC m=+131.967635323" observedRunningTime="2025-10-11 10:28:18.554865741 +0000 UTC m=+133.339292450" watchObservedRunningTime="2025-10-11 10:28:18.555793298 +0000 UTC m=+133.340220007"
Oct 11 10:28:18.568733 master-2 kubenswrapper[4776]: I1011 10:28:18.567837 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-v6dfc" podStartSLOduration=212.108560161 podStartE2EDuration="3m43.567816914s" podCreationTimestamp="2025-10-11 10:24:35 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.79271464 +0000 UTC m=+120.577141349" lastFinishedPulling="2025-10-11 10:28:17.251971393 +0000 UTC m=+132.036398102" observedRunningTime="2025-10-11 10:28:18.566079003 +0000 UTC m=+133.350505722" watchObservedRunningTime="2025-10-11 10:28:18.567816914 +0000 UTC m=+133.352243623"
Oct 11 10:28:18.582706 master-2 kubenswrapper[4776]: I1011 10:28:18.578804 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" podStartSLOduration=69.997017828 podStartE2EDuration="1m21.57878701s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.636779899 +0000 UTC m=+120.421206608" lastFinishedPulling="2025-10-11 10:28:17.218549081 +0000 UTC m=+132.002975790" observedRunningTime="2025-10-11 10:28:18.577623316 +0000 UTC m=+133.362050025" watchObservedRunningTime="2025-10-11 10:28:18.57878701 +0000 UTC m=+133.363213719"
Oct 11 10:28:18.598674 master-1 kubenswrapper[4771]: I1011 10:28:18.598594 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"
Oct 11 10:28:18.600737 master-2 kubenswrapper[4776]: I1011 10:28:18.597642 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" podStartSLOduration=71.180854421 podStartE2EDuration="1m22.597622732s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.801023088 +0000 UTC m=+120.585449797" lastFinishedPulling="2025-10-11 10:28:17.217791399 +0000 UTC m=+132.002218108" observedRunningTime="2025-10-11 10:28:18.590394594 +0000 UTC m=+133.374821303" watchObservedRunningTime="2025-10-11 10:28:18.597622732 +0000 UTC m=+133.382049441"
Oct 11 10:28:18.831833 master-1 kubenswrapper[4771]: I1011 10:28:18.831418 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"]
Oct 11 10:28:18.832222 master-1 kubenswrapper[4771]: I1011 10:28:18.832183 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"
Oct 11 10:28:18.835376 master-1 kubenswrapper[4771]: I1011 10:28:18.835325 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Oct 11 10:28:18.835956 master-1 kubenswrapper[4771]: I1011 10:28:18.835921 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Oct 11 10:28:18.838331 master-1 kubenswrapper[4771]: I1011 10:28:18.838297 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"]
Oct 11 10:28:18.840915 master-2 kubenswrapper[4776]: I1011 10:28:18.840846 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"]
Oct 11 10:28:18.841435 master-2 kubenswrapper[4776]: I1011 10:28:18.841404 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"
Oct 11 10:28:18.849136 master-2 kubenswrapper[4776]: I1011 10:28:18.849065 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"]
Oct 11 10:28:18.867055 master-1 kubenswrapper[4771]: I1011 10:28:18.866799 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92"]
Oct 11 10:28:18.936542 master-1 kubenswrapper[4771]: I1011 10:28:18.936479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7plbp\" (UniqueName: \"kubernetes.io/projected/09cf0cd5-a6f6-4b35-88cf-ca6ca4402656-kube-api-access-7plbp\") pod \"csi-snapshot-controller-ddd7d64cd-c2t4m\" (UID: \"09cf0cd5-a6f6-4b35-88cf-ca6ca4402656\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"
Oct 11 10:28:18.961946 master-2 kubenswrapper[4776]: I1011 10:28:18.961883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"
Oct 11 10:28:18.976290 master-1 kubenswrapper[4771]: I1011 10:28:18.976209 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92" event={"ID":"b62583d2-1a1e-44fd-871e-1c48e3cb1732","Type":"ContainerStarted","Data":"d98dba785d03bcc465a43f2068692663b1667d84f0433b1551e2ef33f927aca9"}
Oct 11 10:28:18.977627 master-1 kubenswrapper[4771]: I1011 10:28:18.977571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-1-debug-vwkqm" event={"ID":"e5877eb5-69db-40eb-af0c-096d52c3fc4d","Type":"ContainerStarted","Data":"78e52134fc9902f7609237c0f679b9c5002f127b8ab80d9c3907b1e5458d8504"}
Oct 11 10:28:19.037481 master-1 kubenswrapper[4771]: I1011 10:28:19.037349 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7plbp\" (UniqueName: \"kubernetes.io/projected/09cf0cd5-a6f6-4b35-88cf-ca6ca4402656-kube-api-access-7plbp\") pod \"csi-snapshot-controller-ddd7d64cd-c2t4m\" (UID: \"09cf0cd5-a6f6-4b35-88cf-ca6ca4402656\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"
Oct 11 10:28:19.062834 master-2 kubenswrapper[4776]: I1011 10:28:19.062803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"
Oct 11 10:28:19.073300 master-1 kubenswrapper[4771]: I1011 10:28:19.073150 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7plbp\" (UniqueName: \"kubernetes.io/projected/09cf0cd5-a6f6-4b35-88cf-ca6ca4402656-kube-api-access-7plbp\") pod \"csi-snapshot-controller-ddd7d64cd-c2t4m\" (UID: \"09cf0cd5-a6f6-4b35-88cf-ca6ca4402656\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"
Oct 11 10:28:19.082299 master-2 kubenswrapper[4776]: I1011 10:28:19.082244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"
Oct 11 10:28:19.146038 master-1 kubenswrapper[4771]: I1011 10:28:19.145875 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"
Oct 11 10:28:19.156371 master-2 kubenswrapper[4776]: I1011 10:28:19.156321 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"
Oct 11 10:28:19.421498 master-1 kubenswrapper[4771]: I1011 10:28:19.421328 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m"]
Oct 11 10:28:19.431382 master-1 kubenswrapper[4771]: W1011 10:28:19.431283 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09cf0cd5_a6f6_4b35_88cf_ca6ca4402656.slice/crio-8c76b004f57f73a0758ab97b4b322aadaddacefb450c5c506970c872696e5d88 WatchSource:0}: Error finding container 8c76b004f57f73a0758ab97b4b322aadaddacefb450c5c506970c872696e5d88: Status 404 returned error can't find the container with id 8c76b004f57f73a0758ab97b4b322aadaddacefb450c5c506970c872696e5d88
Oct 11 10:28:19.982100 master-1 kubenswrapper[4771]: I1011 10:28:19.982048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m" event={"ID":"09cf0cd5-a6f6-4b35-88cf-ca6ca4402656","Type":"ContainerStarted","Data":"8c76b004f57f73a0758ab97b4b322aadaddacefb450c5c506970c872696e5d88"}
Oct 11 10:28:20.381076 master-2 kubenswrapper[4776]: I1011 10:28:20.381035 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: I1011 10:28:20.381078 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: I1011 10:28:20.381124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381243 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381321 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381332 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381304643 +0000 UTC m=+151.165731532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381331 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381504 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381479308 +0000 UTC m=+151.165906187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found
Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381603 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381566411 +0000 UTC m=+151.165993120 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493150 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493731 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493796 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493822 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493849 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493913 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493948 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493982 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494012 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494048 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494076 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494129 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494171 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494243 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: E1011 10:28:20.496654 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: E1011 10:28:20.496759 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.496735057 +0000 UTC m=+151.281161766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497604 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497649 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497636524 +0000 UTC m=+151.282063233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497713 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497768 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497816 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497789128 +0000 UTC m=+151.282216017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497833 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497845 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497831179 +0000 UTC m=+151.282258078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497867 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.49785767 +0000 UTC m=+151.282284379 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497899 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497906 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497926 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed.
No retries permitted until 2025-10-11 10:28:36.497918352 +0000 UTC m=+151.282345061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497925 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497945 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497934412 +0000 UTC m=+151.282361111 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497958 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497953163 +0000 UTC m=+151.282379872 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497722 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497975 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497990 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497981273 +0000 UTC m=+151.282407982 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497990 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498010 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498000744 +0000 UTC m=+151.282427683 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498030 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498024105 +0000 UTC m=+151.282451024 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498031 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498062 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498073 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498065796 +0000 UTC m=+151.282492715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498089 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498082226 +0000 UTC m=+151.282508935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498102 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498126 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498119167 +0000 UTC m=+151.282545876 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498130 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498152 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498144148 +0000 UTC m=+151.282570857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498162 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498191 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498181399 +0000 UTC m=+151.282608108 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498889 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:20.499211 master-2 kubenswrapper[4776]: E1011 10:28:20.499021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498991852 +0000 UTC m=+151.283418741 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:20.548018 master-2 kubenswrapper[4776]: I1011 10:28:20.547944 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"] Oct 11 10:28:20.553135 master-2 kubenswrapper[4776]: I1011 10:28:20.553084 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59"} Oct 11 10:28:20.556489 master-2 kubenswrapper[4776]: I1011 10:28:20.556295 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" 
event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa"} Oct 11 10:28:20.556994 master-2 kubenswrapper[4776]: I1011 10:28:20.556953 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:20.589776 master-2 kubenswrapper[4776]: I1011 10:28:20.589664 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" podStartSLOduration=94.796118568 podStartE2EDuration="1m49.589640493s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.582013132 +0000 UTC m=+120.366439841" lastFinishedPulling="2025-10-11 10:28:20.375535057 +0000 UTC m=+135.159961766" observedRunningTime="2025-10-11 10:28:20.58433232 +0000 UTC m=+135.368759039" watchObservedRunningTime="2025-10-11 10:28:20.589640493 +0000 UTC m=+135.374067202" Oct 11 10:28:20.621991 master-2 kubenswrapper[4776]: W1011 10:28:20.621785 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b27c80_52a3_4747_a128_28952a667faa.slice/crio-d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc WatchSource:0}: Error finding container d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc: Status 404 returned error can't find the container with id d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc Oct 11 10:28:20.708535 master-1 kubenswrapper[4771]: I1011 10:28:20.708335 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-64446499c7-sb6sm"] Oct 11 10:28:20.708747 master-1 kubenswrapper[4771]: I1011 10:28:20.708728 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.712077 master-1 kubenswrapper[4771]: I1011 10:28:20.711627 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 11 10:28:20.712077 master-1 kubenswrapper[4771]: I1011 10:28:20.711753 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 11 10:28:20.712077 master-1 kubenswrapper[4771]: I1011 10:28:20.711771 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 11 10:28:20.712077 master-1 kubenswrapper[4771]: I1011 10:28:20.711941 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 11 10:28:20.720037 master-1 kubenswrapper[4771]: I1011 10:28:20.719974 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-64446499c7-sb6sm"] Oct 11 10:28:20.753017 master-1 kubenswrapper[4771]: I1011 10:28:20.752813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-cabundle\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.753017 master-1 kubenswrapper[4771]: I1011 10:28:20.752893 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-key\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.753017 master-1 kubenswrapper[4771]: I1011 10:28:20.752960 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6pz9\" (UniqueName: \"kubernetes.io/projected/27c04f6d-d04c-41b4-bcaf-19edb41f6604-kube-api-access-t6pz9\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.854659 master-1 kubenswrapper[4771]: I1011 10:28:20.853877 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6pz9\" (UniqueName: \"kubernetes.io/projected/27c04f6d-d04c-41b4-bcaf-19edb41f6604-kube-api-access-t6pz9\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.854659 master-1 kubenswrapper[4771]: I1011 10:28:20.853998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-cabundle\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.854659 master-1 kubenswrapper[4771]: I1011 10:28:20.854028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-key\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.855659 master-1 kubenswrapper[4771]: I1011 10:28:20.855597 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-cabundle\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 
10:28:20.862318 master-1 kubenswrapper[4771]: I1011 10:28:20.862258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27c04f6d-d04c-41b4-bcaf-19edb41f6604-signing-key\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.872611 master-1 kubenswrapper[4771]: I1011 10:28:20.872549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6pz9\" (UniqueName: \"kubernetes.io/projected/27c04f6d-d04c-41b4-bcaf-19edb41f6604-kube-api-access-t6pz9\") pod \"service-ca-64446499c7-sb6sm\" (UID: \"27c04f6d-d04c-41b4-bcaf-19edb41f6604\") " pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:20.985803 master-1 kubenswrapper[4771]: I1011 10:28:20.985675 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92" event={"ID":"b62583d2-1a1e-44fd-871e-1c48e3cb1732","Type":"ContainerStarted","Data":"9dd1ab667102e62c18c0e52de9fa777b29d04f09335b1e34df94c64f0b365c01"} Oct 11 10:28:20.985803 master-1 kubenswrapper[4771]: I1011 10:28:20.985736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92" event={"ID":"b62583d2-1a1e-44fd-871e-1c48e3cb1732","Type":"ContainerStarted","Data":"6c4a2bd203d96b682cdd610dfa28d37a6f351ee6104d4c8e1fb079f0631ff99e"} Oct 11 10:28:20.999807 master-1 kubenswrapper[4771]: I1011 10:28:20.999520 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92" podStartSLOduration=1.769219514 podStartE2EDuration="2.999503822s" podCreationTimestamp="2025-10-11 10:28:18 +0000 UTC" firstStartedPulling="2025-10-11 10:28:18.882593455 +0000 UTC m=+130.856819896" lastFinishedPulling="2025-10-11 10:28:20.112877763 +0000 UTC 
m=+132.087104204" observedRunningTime="2025-10-11 10:28:20.999158543 +0000 UTC m=+132.973385044" watchObservedRunningTime="2025-10-11 10:28:20.999503822 +0000 UTC m=+132.973730273" Oct 11 10:28:21.023236 master-1 kubenswrapper[4771]: I1011 10:28:21.023192 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-64446499c7-sb6sm" Oct 11 10:28:21.563594 master-2 kubenswrapper[4776]: I1011 10:28:21.563083 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" event={"ID":"b5b27c80-52a3-4747-a128-28952a667faa","Type":"ContainerStarted","Data":"d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc"} Oct 11 10:28:21.567976 master-2 kubenswrapper[4776]: I1011 10:28:21.567901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59"} Oct 11 10:28:21.568528 master-2 kubenswrapper[4776]: I1011 10:28:21.568471 4776 generic.go:334] "Generic (PLEG): container finished" podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59" exitCode=0 Oct 11 10:28:23.342045 master-2 kubenswrapper[4776]: I1011 10:28:23.341991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:23.437681 master-1 kubenswrapper[4771]: I1011 10:28:23.437246 4771 scope.go:117] "RemoveContainer" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d" Oct 11 10:28:23.438759 master-1 kubenswrapper[4771]: E1011 10:28:23.437843 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:28:24.937682 master-1 kubenswrapper[4771]: I1011 10:28:24.937344 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-64446499c7-sb6sm"] Oct 11 10:28:24.948924 master-1 kubenswrapper[4771]: W1011 10:28:24.948894 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c04f6d_d04c_41b4_bcaf_19edb41f6604.slice/crio-6ae068076842dd077497e3ab2fe775738f4feeaf33e9d0270c74c49a5c55b246 WatchSource:0}: Error finding container 6ae068076842dd077497e3ab2fe775738f4feeaf33e9d0270c74c49a5c55b246: Status 404 returned error can't find the container with id 6ae068076842dd077497e3ab2fe775738f4feeaf33e9d0270c74c49a5c55b246 Oct 11 10:28:24.998588 master-1 kubenswrapper[4771]: I1011 10:28:24.998439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m" event={"ID":"09cf0cd5-a6f6-4b35-88cf-ca6ca4402656","Type":"ContainerStarted","Data":"93e6fb561deb65d2eaccfde1779f8f7f72d30de6e99e33cb683f39fa4720cc36"} Oct 11 10:28:24.999815 master-1 kubenswrapper[4771]: I1011 10:28:24.999745 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-64446499c7-sb6sm" event={"ID":"27c04f6d-d04c-41b4-bcaf-19edb41f6604","Type":"ContainerStarted","Data":"6ae068076842dd077497e3ab2fe775738f4feeaf33e9d0270c74c49a5c55b246"} Oct 11 10:28:25.001075 master-1 kubenswrapper[4771]: I1011 10:28:25.000992 4771 generic.go:334] "Generic (PLEG): container finished" podID="e5877eb5-69db-40eb-af0c-096d52c3fc4d" 
containerID="723577cf73aef3a8a2e4442f397b70704c28142936c48024dbef159c3112ef95" exitCode=0 Oct 11 10:28:25.001075 master-1 kubenswrapper[4771]: I1011 10:28:25.001037 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-1-debug-vwkqm" event={"ID":"e5877eb5-69db-40eb-af0c-096d52c3fc4d","Type":"ContainerDied","Data":"723577cf73aef3a8a2e4442f397b70704c28142936c48024dbef159c3112ef95"} Oct 11 10:28:25.016470 master-1 kubenswrapper[4771]: I1011 10:28:25.016334 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m" podStartSLOduration=1.698926228 podStartE2EDuration="7.016311315s" podCreationTimestamp="2025-10-11 10:28:18 +0000 UTC" firstStartedPulling="2025-10-11 10:28:19.43482432 +0000 UTC m=+131.409050761" lastFinishedPulling="2025-10-11 10:28:24.752209377 +0000 UTC m=+136.726435848" observedRunningTime="2025-10-11 10:28:25.01384317 +0000 UTC m=+136.988069611" watchObservedRunningTime="2025-10-11 10:28:25.016311315 +0000 UTC m=+136.990537796" Oct 11 10:28:25.056789 master-1 kubenswrapper[4771]: I1011 10:28:25.056698 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["assisted-installer/master-1-debug-vwkqm"] Oct 11 10:28:25.058142 master-1 kubenswrapper[4771]: I1011 10:28:25.058077 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["assisted-installer/master-1-debug-vwkqm"] Oct 11 10:28:26.040708 master-1 kubenswrapper[4771]: I1011 10:28:26.040635 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-1-debug-vwkqm" Oct 11 10:28:26.100405 master-1 kubenswrapper[4771]: I1011 10:28:26.100300 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host\") pod \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " Oct 11 10:28:26.100693 master-1 kubenswrapper[4771]: I1011 10:28:26.100459 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254q5\" (UniqueName: \"kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5\") pod \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\" (UID: \"e5877eb5-69db-40eb-af0c-096d52c3fc4d\") " Oct 11 10:28:26.100693 master-1 kubenswrapper[4771]: I1011 10:28:26.100491 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host" (OuterVolumeSpecName: "host") pod "e5877eb5-69db-40eb-af0c-096d52c3fc4d" (UID: "e5877eb5-69db-40eb-af0c-096d52c3fc4d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:26.100693 master-1 kubenswrapper[4771]: I1011 10:28:26.100680 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5877eb5-69db-40eb-af0c-096d52c3fc4d-host\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:26.107284 master-1 kubenswrapper[4771]: I1011 10:28:26.107186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5" (OuterVolumeSpecName: "kube-api-access-254q5") pod "e5877eb5-69db-40eb-af0c-096d52c3fc4d" (UID: "e5877eb5-69db-40eb-af0c-096d52c3fc4d"). InnerVolumeSpecName "kube-api-access-254q5". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:28:26.201640 master-1 kubenswrapper[4771]: I1011 10:28:26.201554 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254q5\" (UniqueName: \"kubernetes.io/projected/e5877eb5-69db-40eb-af0c-096d52c3fc4d-kube-api-access-254q5\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:26.442330 master-1 kubenswrapper[4771]: I1011 10:28:26.442255 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5877eb5-69db-40eb-af0c-096d52c3fc4d" path="/var/lib/kubelet/pods/e5877eb5-69db-40eb-af0c-096d52c3fc4d/volumes"
Oct 11 10:28:27.009267 master-1 kubenswrapper[4771]: I1011 10:28:27.008670 4771 scope.go:117] "RemoveContainer" containerID="723577cf73aef3a8a2e4442f397b70704c28142936c48024dbef159c3112ef95"
Oct 11 10:28:27.009267 master-1 kubenswrapper[4771]: I1011 10:28:27.008746 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-1-debug-vwkqm"
Oct 11 10:28:27.011488 master-1 kubenswrapper[4771]: I1011 10:28:27.010857 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-64446499c7-sb6sm" event={"ID":"27c04f6d-d04c-41b4-bcaf-19edb41f6604","Type":"ContainerStarted","Data":"2ce7b303d28a1305e8fa0a23a119002db3a3528b142e62f30f20198e8cfc40c9"}
Oct 11 10:28:27.025317 master-1 kubenswrapper[4771]: I1011 10:28:27.024980 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-64446499c7-sb6sm" podStartSLOduration=5.116819782 podStartE2EDuration="7.024956518s" podCreationTimestamp="2025-10-11 10:28:20 +0000 UTC" firstStartedPulling="2025-10-11 10:28:24.951011322 +0000 UTC m=+136.925237763" lastFinishedPulling="2025-10-11 10:28:26.859148038 +0000 UTC m=+138.833374499" observedRunningTime="2025-10-11 10:28:27.023019147 +0000 UTC m=+138.997245598" watchObservedRunningTime="2025-10-11 10:28:27.024956518 +0000 UTC m=+138.999182999"
Oct 11 10:28:29.597753 master-2 kubenswrapper[4776]: I1011 10:28:29.597481 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" event={"ID":"b5b27c80-52a3-4747-a128-28952a667faa","Type":"ContainerStarted","Data":"c8a8c52b73cf91ea6bba79404a35830c95d980291188b0aaa7590f6318351fe4"}
Oct 11 10:28:29.600233 master-2 kubenswrapper[4776]: I1011 10:28:29.599980 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6"}
Oct 11 10:28:29.601828 master-2 kubenswrapper[4776]: I1011 10:28:29.601798 4776 generic.go:334] "Generic (PLEG): container finished" podID="02b839f3-9031-49c2-87a5-630975c7e14c" containerID="84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a" exitCode=0
Oct 11 10:28:29.601907 master-2 kubenswrapper[4776]: I1011 10:28:29.601870 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-2-debug-gpmgw" event={"ID":"02b839f3-9031-49c2-87a5-630975c7e14c","Type":"ContainerDied","Data":"84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a"}
Oct 11 10:28:29.603498 master-2 kubenswrapper[4776]: I1011 10:28:29.603061 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170"}
Oct 11 10:28:29.604733 master-2 kubenswrapper[4776]: I1011 10:28:29.604689 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103"}
Oct 11 10:28:29.605837 master-2 kubenswrapper[4776]: I1011 10:28:29.605801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e"}
Oct 11 10:28:29.615797 master-2 kubenswrapper[4776]: I1011 10:28:29.612708 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" podStartSLOduration=3.679856145 podStartE2EDuration="11.612691606s" podCreationTimestamp="2025-10-11 10:28:18 +0000 UTC" firstStartedPulling="2025-10-11 10:28:20.624388183 +0000 UTC m=+135.408814892" lastFinishedPulling="2025-10-11 10:28:28.557223644 +0000 UTC m=+143.341650353" observedRunningTime="2025-10-11 10:28:29.610367289 +0000 UTC m=+144.394793998" watchObservedRunningTime="2025-10-11 10:28:29.612691606 +0000 UTC m=+144.397118315"
Oct 11 10:28:29.626726 master-2 kubenswrapper[4776]: I1011 10:28:29.626604 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" podStartSLOduration=70.39977609 podStartE2EDuration="1m33.626584016s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.345731178 +0000 UTC m=+120.130157887" lastFinishedPulling="2025-10-11 10:28:28.572539104 +0000 UTC m=+143.356965813" observedRunningTime="2025-10-11 10:28:29.625037262 +0000 UTC m=+144.409463991" watchObservedRunningTime="2025-10-11 10:28:29.626584016 +0000 UTC m=+144.411010725"
Oct 11 10:28:29.643256 master-2 kubenswrapper[4776]: I1011 10:28:29.643206 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podStartSLOduration=94.662210773 podStartE2EDuration="1m56.643188704s" podCreationTimestamp="2025-10-11 10:26:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.822105946 +0000 UTC m=+120.606532655" lastFinishedPulling="2025-10-11 10:28:27.803083877 +0000 UTC m=+142.587510586" observedRunningTime="2025-10-11 10:28:29.64166159 +0000 UTC m=+144.426088309" watchObservedRunningTime="2025-10-11 10:28:29.643188704 +0000 UTC m=+144.427615413"
Oct 11 10:28:29.753293 master-2 kubenswrapper[4776]: I1011 10:28:29.753217 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podStartSLOduration=94.089606485 podStartE2EDuration="1m56.753170692s" podCreationTimestamp="2025-10-11 10:26:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.853914861 +0000 UTC m=+120.638341570" lastFinishedPulling="2025-10-11 10:28:28.517479028 +0000 UTC m=+143.301905777" observedRunningTime="2025-10-11 10:28:29.66041629 +0000 UTC m=+144.444843009" watchObservedRunningTime="2025-10-11 10:28:29.753170692 +0000 UTC m=+144.537597401"
Oct 11 10:28:29.760810 master-2 kubenswrapper[4776]: I1011 10:28:29.760761 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podStartSLOduration=70.139045478 podStartE2EDuration="1m32.760742759s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.895714645 +0000 UTC m=+120.680141354" lastFinishedPulling="2025-10-11 10:28:28.517411926 +0000 UTC m=+143.301838635" observedRunningTime="2025-10-11 10:28:29.750490104 +0000 UTC m=+144.534916813" watchObservedRunningTime="2025-10-11 10:28:29.760742759 +0000 UTC m=+144.545169468"
Oct 11 10:28:29.784567 master-2 kubenswrapper[4776]: I1011 10:28:29.784525 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["assisted-installer/master-2-debug-gpmgw"]
Oct 11 10:28:29.786758 master-2 kubenswrapper[4776]: I1011 10:28:29.786732 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["assisted-installer/master-2-debug-gpmgw"]
Oct 11 10:28:30.053712 master-1 kubenswrapper[4771]: I1011 10:28:30.053621 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-9b677"]
Oct 11 10:28:30.054753 master-1 kubenswrapper[4771]: E1011 10:28:30.054726 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5877eb5-69db-40eb-af0c-096d52c3fc4d" containerName="container-00"
Oct 11 10:28:30.054906 master-1 kubenswrapper[4771]: I1011 10:28:30.054889 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5877eb5-69db-40eb-af0c-096d52c3fc4d" containerName="container-00"
Oct 11 10:28:30.055049 master-1 kubenswrapper[4771]: I1011 10:28:30.055033 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5877eb5-69db-40eb-af0c-096d52c3fc4d" containerName="container-00"
Oct 11 10:28:30.055679 master-1 kubenswrapper[4771]: I1011 10:28:30.055650 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.059831 master-1 kubenswrapper[4771]: I1011 10:28:30.059441 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 11 10:28:30.059831 master-1 kubenswrapper[4771]: I1011 10:28:30.059628 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 11 10:28:30.059831 master-1 kubenswrapper[4771]: I1011 10:28:30.059690 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 11 10:28:30.059831 master-1 kubenswrapper[4771]: I1011 10:28:30.059633 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 11 10:28:30.060136 master-1 kubenswrapper[4771]: I1011 10:28:30.059914 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 11 10:28:30.064302 master-1 kubenswrapper[4771]: I1011 10:28:30.061300 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Oct 11 10:28:30.066084 master-1 kubenswrapper[4771]: I1011 10:28:30.065110 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Oct 11 10:28:30.066499 master-1 kubenswrapper[4771]: I1011 10:28:30.066166 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 11 10:28:30.067068 master-1 kubenswrapper[4771]: I1011 10:28:30.066564 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 11 10:28:30.068829 master-2 kubenswrapper[4776]: I1011 10:28:30.068791 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"]
Oct 11 10:28:30.069423 master-2 kubenswrapper[4776]: E1011 10:28:30.069406 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00"
Oct 11 10:28:30.069503 master-2 kubenswrapper[4776]: I1011 10:28:30.069493 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00"
Oct 11 10:28:30.069654 master-2 kubenswrapper[4776]: I1011 10:28:30.069641 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00"
Oct 11 10:28:30.070560 master-2 kubenswrapper[4776]: I1011 10:28:30.070539 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.073355 master-2 kubenswrapper[4776]: I1011 10:28:30.073322 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 11 10:28:30.073583 master-2 kubenswrapper[4776]: I1011 10:28:30.073561 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 11 10:28:30.073833 master-2 kubenswrapper[4776]: I1011 10:28:30.073780 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 11 10:28:30.074021 master-2 kubenswrapper[4776]: I1011 10:28:30.074006 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Oct 11 10:28:30.074534 master-2 kubenswrapper[4776]: I1011 10:28:30.074512 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 11 10:28:30.075980 master-2 kubenswrapper[4776]: I1011 10:28:30.075953 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Oct 11 10:28:30.076558 master-2 kubenswrapper[4776]: I1011 10:28:30.076541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 11 10:28:30.076722 master-2 kubenswrapper[4776]: I1011 10:28:30.076659 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 11 10:28:30.076951 master-2 kubenswrapper[4776]: I1011 10:28:30.076930 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 11 10:28:30.078570 master-1 kubenswrapper[4771]: I1011 10:28:30.078505 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-9b677"]
Oct 11 10:28:30.084742 master-2 kubenswrapper[4776]: I1011 10:28:30.082537 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:28:30.084742 master-2 kubenswrapper[4776]: I1011 10:28:30.084192 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"]
Oct 11 10:28:30.096426 master-1 kubenswrapper[4771]: I1011 10:28:30.095965 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.137943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.138020 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.138060 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.138099 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.138174 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138226 master-1 kubenswrapper[4771]: I1011 10:28:30.138237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138901 master-1 kubenswrapper[4771]: I1011 10:28:30.138308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138901 master-1 kubenswrapper[4771]: I1011 10:28:30.138349 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138901 master-1 kubenswrapper[4771]: I1011 10:28:30.138421 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138901 master-1 kubenswrapper[4771]: I1011 10:28:30.138480 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.138901 master-1 kubenswrapper[4771]: I1011 10:28:30.138516 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkqb\" (UniqueName: \"kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.177049 master-2 kubenswrapper[4776]: I1011 10:28:30.176944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.177578 master-2 kubenswrapper[4776]: I1011 10:28:30.177552 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.177715 master-2 kubenswrapper[4776]: I1011 10:28:30.177658 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.177843 master-2 kubenswrapper[4776]: I1011 10:28:30.177824 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.177918 master-2 kubenswrapper[4776]: I1011 10:28:30.177904 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178043 master-2 kubenswrapper[4776]: I1011 10:28:30.178029 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178122 master-2 kubenswrapper[4776]: I1011 10:28:30.178108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178206 master-2 kubenswrapper[4776]: I1011 10:28:30.178192 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178292 master-2 kubenswrapper[4776]: I1011 10:28:30.178277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178419 master-2 kubenswrapper[4776]: I1011 10:28:30.178405 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.178525 master-2 kubenswrapper[4776]: I1011 10:28:30.178511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.239558 master-1 kubenswrapper[4771]: I1011 10:28:30.239427 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239558 master-1 kubenswrapper[4771]: I1011 10:28:30.239541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239604 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239708 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239835 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239866 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkqb\" (UniqueName: \"kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.239993 master-1 kubenswrapper[4771]: I1011 10:28:30.239952 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.240776 master-1 kubenswrapper[4771]: I1011 10:28:30.240008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.240776 master-1 kubenswrapper[4771]: I1011 10:28:30.240097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.240776 master-1 kubenswrapper[4771]: E1011 10:28:30.240235 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Oct 11 10:28:30.240776 master-1 kubenswrapper[4771]: E1011 10:28:30.240345 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:30.740306693 +0000 UTC m=+142.714533164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found
Oct 11 10:28:30.241347 master-1 kubenswrapper[4771]: I1011 10:28:30.241257 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.241883 master-1 kubenswrapper[4771]: I1011 10:28:30.241814 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.242230 master-1 kubenswrapper[4771]: I1011 10:28:30.242112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.242499 master-1 kubenswrapper[4771]: I1011 10:28:30.242451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.247163 master-1 kubenswrapper[4771]: I1011 10:28:30.246520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.247163 master-1 kubenswrapper[4771]: I1011 10:28:30.247085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.247352 master-1 kubenswrapper[4771]: I1011 10:28:30.247160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.262058 master-1 kubenswrapper[4771]: I1011 10:28:30.261993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkqb\" (UniqueName: \"kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:30.279721 master-2 kubenswrapper[4776]: I1011 10:28:30.279646 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279721 master-2 kubenswrapper[4776]: I1011 10:28:30.279710 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279805 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279821 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279867 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279928 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279947 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.280242 master-2 kubenswrapper[4776]: I1011 10:28:30.279964 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.280693 master-2 kubenswrapper[4776]: I1011 10:28:30.280637 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.280764 master-2 kubenswrapper[4776]: I1011 10:28:30.280751 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4"
Oct 11 10:28:30.280829 master-2 kubenswrapper[4776]: E1011 10:28:30.280656 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Oct 11 10:28:30.280957 master-2 kubenswrapper[4776]: E1011 10:28:30.280942 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:30.780921838 +0000 UTC m=+145.565348547 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:30.281215 master-2 kubenswrapper[4776]: I1011 10:28:30.281191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.281697 master-2 kubenswrapper[4776]: I1011 10:28:30.281660 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.281795 master-2 kubenswrapper[4776]: I1011 10:28:30.281738 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.283341 master-2 kubenswrapper[4776]: I1011 10:28:30.283300 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.287760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.287887 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.292148 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.296779 master-2 kubenswrapper[4776]: I1011 10:28:30.296397 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.608307 master-1 kubenswrapper[4771]: I1011 10:28:30.608246 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"] Oct 11 10:28:30.609139 master-1 kubenswrapper[4771]: I1011 10:28:30.609106 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.612871 master-1 kubenswrapper[4771]: I1011 10:28:30.612814 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:28:30.613014 master-1 kubenswrapper[4771]: I1011 10:28:30.612882 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:28:30.613014 master-1 kubenswrapper[4771]: I1011 10:28:30.612893 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:28:30.613297 master-1 kubenswrapper[4771]: I1011 10:28:30.613259 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:30.613413 master-1 kubenswrapper[4771]: I1011 10:28:30.613309 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:30.614804 master-1 kubenswrapper[4771]: I1011 10:28:30.614772 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:28:30.619947 master-2 kubenswrapper[4776]: I1011 10:28:30.619885 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:30.620477 master-2 kubenswrapper[4776]: I1011 10:28:30.620440 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.624059 master-2 kubenswrapper[4776]: I1011 10:28:30.624007 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:28:30.624149 master-1 kubenswrapper[4771]: I1011 10:28:30.624037 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"] Oct 11 10:28:30.624311 master-2 kubenswrapper[4776]: I1011 10:28:30.624280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:28:30.624465 master-2 kubenswrapper[4776]: I1011 10:28:30.624439 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:28:30.624836 master-2 kubenswrapper[4776]: I1011 10:28:30.624808 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:30.624949 master-2 kubenswrapper[4776]: I1011 10:28:30.624923 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:30.625033 master-2 kubenswrapper[4776]: I1011 10:28:30.624991 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:28:30.632960 master-2 kubenswrapper[4776]: I1011 10:28:30.632898 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:30.644076 master-1 kubenswrapper[4771]: I1011 10:28:30.644024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: 
\"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.644429 master-1 kubenswrapper[4771]: I1011 10:28:30.644405 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhr88\" (UniqueName: \"kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.644609 master-1 kubenswrapper[4771]: I1011 10:28:30.644586 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.644734 master-1 kubenswrapper[4771]: I1011 10:28:30.644717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.644979 master-1 kubenswrapper[4771]: I1011 10:28:30.644962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.646397 master-2 kubenswrapper[4776]: I1011 
10:28:30.646297 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:30.685287 master-2 kubenswrapper[4776]: I1011 10:28:30.685216 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"02b839f3-9031-49c2-87a5-630975c7e14c\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " Oct 11 10:28:30.685601 master-2 kubenswrapper[4776]: I1011 10:28:30.685325 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"02b839f3-9031-49c2-87a5-630975c7e14c\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " Oct 11 10:28:30.685601 master-2 kubenswrapper[4776]: I1011 10:28:30.685331 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host" (OuterVolumeSpecName: "host") pod "02b839f3-9031-49c2-87a5-630975c7e14c" (UID: "02b839f3-9031-49c2-87a5-630975c7e14c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:30.685734 master-2 kubenswrapper[4776]: I1011 10:28:30.685614 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.685734 master-2 kubenswrapper[4776]: I1011 10:28:30.685640 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686097 master-2 kubenswrapper[4776]: I1011 10:28:30.686047 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686153 master-2 kubenswrapper[4776]: I1011 10:28:30.686107 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686238 master-2 kubenswrapper[4776]: I1011 10:28:30.686209 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686332 master-2 kubenswrapper[4776]: I1011 10:28:30.686309 4776 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:30.688331 master-2 kubenswrapper[4776]: I1011 10:28:30.688290 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g" (OuterVolumeSpecName: "kube-api-access-xj76g") pod "02b839f3-9031-49c2-87a5-630975c7e14c" (UID: "02b839f3-9031-49c2-87a5-630975c7e14c"). InnerVolumeSpecName "kube-api-access-xj76g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:30.746614 master-1 kubenswrapper[4771]: I1011 10:28:30.746307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.746895 master-1 kubenswrapper[4771]: I1011 10:28:30.746636 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhr88\" (UniqueName: \"kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.746895 master-1 kubenswrapper[4771]: I1011 10:28:30.746710 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.746895 master-1 kubenswrapper[4771]: I1011 10:28:30.746750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.746895 master-1 kubenswrapper[4771]: I1011 10:28:30.746787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:30.746895 master-1 kubenswrapper[4771]: I1011 10:28:30.746827 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.746940 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.746968 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747007 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.24698636 +0000 UTC m=+143.221212821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "client-ca" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747016 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747042 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.747019301 +0000 UTC m=+143.721245782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747092 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.247065892 +0000 UTC m=+143.221292373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : secret "serving-cert" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747105 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747207 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:30.747233 master-1 kubenswrapper[4771]: E1011 10:28:30.747235 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.247216756 +0000 UTC m=+143.221443237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "openshift-global-ca" not found Oct 11 10:28:30.747893 master-1 kubenswrapper[4771]: E1011 10:28:30.747265 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.247248347 +0000 UTC m=+143.221474798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "config" not found Oct 11 10:28:30.780795 master-1 kubenswrapper[4771]: I1011 10:28:30.780679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhr88\" (UniqueName: \"kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:30.787603 master-2 kubenswrapper[4776]: I1011 10:28:30.787555 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787652 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787690 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 
master-2 kubenswrapper[4776]: E1011 10:28:30.787710 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787750 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787769 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.787753383 +0000 UTC m=+146.572180092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787781 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.287775804 +0000 UTC m=+146.072202503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787717 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787789 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787810 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.287801254 +0000 UTC m=+146.072227963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "openshift-global-ca" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.787940 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.787961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.788011 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788080 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788103 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:31.288095063 +0000 UTC m=+146.072521772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788136 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788160 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.288152444 +0000 UTC m=+146.072579153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "config" not found Oct 11 10:28:30.816651 master-2 kubenswrapper[4776]: I1011 10:28:30.816571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.251887 master-1 kubenswrapper[4771]: I1011 10:28:31.251730 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: 
\"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:31.251887 master-1 kubenswrapper[4771]: I1011 10:28:31.251815 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:31.251887 master-1 kubenswrapper[4771]: I1011 10:28:31.251883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.251913 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252023 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.251995591 +0000 UTC m=+144.226222072 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "openshift-global-ca" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252035 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252113 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.252089673 +0000 UTC m=+144.226316144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "config" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252207 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252244 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.252231357 +0000 UTC m=+144.226457838 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : secret "serving-cert" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252289 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: E1011 10:28:31.252326 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.252315379 +0000 UTC m=+144.226541850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "client-ca" not found Oct 11 10:28:31.252870 master-1 kubenswrapper[4771]: I1011 10:28:31.251932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" Oct 11 10:28:31.294236 master-2 kubenswrapper[4776]: I1011 10:28:31.294166 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294236 master-2 
kubenswrapper[4776]: I1011 10:28:31.294222 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: I1011 10:28:31.294324 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: I1011 10:28:31.294342 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294455 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294480 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294499 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:32.294484634 +0000 UTC m=+147.078911343 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294619 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294590747 +0000 UTC m=+147.079017486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "openshift-global-ca" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294717 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294758 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294745322 +0000 UTC m=+147.079172071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "config" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294808 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294843 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294832114 +0000 UTC m=+147.079258863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:31.615287 master-2 kubenswrapper[4776]: I1011 10:28:31.615183 4776 scope.go:117] "RemoveContainer" containerID="84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a" Oct 11 10:28:31.615287 master-2 kubenswrapper[4776]: I1011 10:28:31.615241 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:31.693295 master-1 kubenswrapper[4771]: I1011 10:28:31.693216 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"] Oct 11 10:28:31.693685 master-1 kubenswrapper[4771]: E1011 10:28:31.693527 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f" podUID="295ceec5-4761-4cb3-95a7-cfc5cb35f03e" Oct 11 10:28:31.708242 master-1 kubenswrapper[4771]: I1011 10:28:31.708160 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"] Oct 11 10:28:31.708883 master-1 kubenswrapper[4771]: I1011 10:28:31.708840 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.713345 master-1 kubenswrapper[4771]: I1011 10:28:31.713287 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:31.713463 master-1 kubenswrapper[4771]: I1011 10:28:31.713340 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:28:31.713550 master-1 kubenswrapper[4771]: I1011 10:28:31.713303 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:28:31.713618 master-1 kubenswrapper[4771]: I1011 10:28:31.713409 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:28:31.716060 master-1 kubenswrapper[4771]: I1011 10:28:31.715922 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:31.718255 master-1 kubenswrapper[4771]: I1011 10:28:31.718209 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"] Oct 11 10:28:31.731817 master-2 kubenswrapper[4776]: I1011 10:28:31.731754 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:28:31.732515 master-2 kubenswrapper[4776]: I1011 10:28:31.732479 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.744344 master-2 kubenswrapper[4776]: I1011 10:28:31.744256 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:28:31.744344 master-2 kubenswrapper[4776]: I1011 10:28:31.744318 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:31.744479 master-2 kubenswrapper[4776]: I1011 10:28:31.744378 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:28:31.744479 master-2 kubenswrapper[4776]: I1011 10:28:31.744318 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:31.744794 master-2 kubenswrapper[4776]: I1011 10:28:31.744738 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:28:31.746521 master-2 kubenswrapper[4776]: I1011 10:28:31.746465 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:28:31.758273 master-1 kubenswrapper[4771]: I1011 10:28:31.758155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.758273 master-1 kubenswrapper[4771]: I1011 10:28:31.758240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:31.758273 master-1 kubenswrapper[4771]: I1011 10:28:31.758273 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.758695 master-1 kubenswrapper[4771]: I1011 10:28:31.758306 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.758695 master-1 kubenswrapper[4771]: I1011 10:28:31.758339 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcj47\" (UniqueName: \"kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.758695 master-1 kubenswrapper[4771]: E1011 10:28:31.758520 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:31.759303 master-1 kubenswrapper[4771]: E1011 10:28:31.759233 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.759182181 +0000 UTC m=+145.733408632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found Oct 11 10:28:31.804483 master-2 kubenswrapper[4776]: I1011 10:28:31.804389 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804643 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: E1011 10:28:31.804746 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 
10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.805330 master-2 kubenswrapper[4776]: E1011 10:28:31.804829 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.804811469 +0000 UTC m=+148.589238188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:31.805330 master-2 kubenswrapper[4776]: I1011 10:28:31.804900 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.860080 master-1 kubenswrapper[4771]: I1011 10:28:31.859935 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " 
pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.860080 master-1 kubenswrapper[4771]: I1011 10:28:31.860035 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.860507 master-1 kubenswrapper[4771]: I1011 10:28:31.860096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcj47\" (UniqueName: \"kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.860507 master-1 kubenswrapper[4771]: E1011 10:28:31.860172 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.860507 master-1 kubenswrapper[4771]: E1011 10:28:31.860334 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.360290134 +0000 UTC m=+144.334516585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found Oct 11 10:28:31.860868 master-1 kubenswrapper[4771]: I1011 10:28:31.860204 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.861150 master-1 kubenswrapper[4771]: E1011 10:28:31.861104 4771 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.861228 master-1 kubenswrapper[4771]: E1011 10:28:31.861165 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.361151357 +0000 UTC m=+144.335377818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : secret "serving-cert" not found Oct 11 10:28:31.861727 master-1 kubenswrapper[4771]: I1011 10:28:31.861673 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.882061 master-1 kubenswrapper[4771]: I1011 10:28:31.881983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcj47\" (UniqueName: \"kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:31.906222 master-2 kubenswrapper[4776]: I1011 10:28:31.906151 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906237 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " 
pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906375 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906458 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.406436945 +0000 UTC m=+147.190863654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906570 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906661 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.406639791 +0000 UTC m=+147.191066500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:31.907387 master-2 kubenswrapper[4776]: I1011 10:28:31.907336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.927135 master-2 kubenswrapper[4776]: I1011 10:28:31.927093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " 
pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:32.038719 master-1 kubenswrapper[4771]: I1011 10:28:32.038539 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.047936 master-1 kubenswrapper[4771]: I1011 10:28:32.047865 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.062263 master-2 kubenswrapper[4776]: I1011 10:28:32.062207 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" path="/var/lib/kubelet/pods/02b839f3-9031-49c2-87a5-630975c7e14c/volumes"
Oct 11 10:28:32.165175 master-1 kubenswrapper[4771]: I1011 10:28:32.165054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhr88\" (UniqueName: \"kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88\") pod \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") "
Oct 11 10:28:32.174689 master-1 kubenswrapper[4771]: I1011 10:28:32.174611 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88" (OuterVolumeSpecName: "kube-api-access-vhr88") pod "295ceec5-4761-4cb3-95a7-cfc5cb35f03e" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e"). InnerVolumeSpecName "kube-api-access-vhr88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:28:32.266584 master-1 kubenswrapper[4771]: I1011 10:28:32.266436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.266584 master-1 kubenswrapper[4771]: I1011 10:28:32.266556 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.266584 master-1 kubenswrapper[4771]: I1011 10:28:32.266601 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: I1011 10:28:32.266631 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: I1011 10:28:32.266676 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhr88\" (UniqueName: \"kubernetes.io/projected/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-kube-api-access-vhr88\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: E1011 10:28:32.266710 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: E1011 10:28:32.266810 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: E1011 10:28:32.266852 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.266802242 +0000 UTC m=+146.241028723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : configmap "client-ca" not found
Oct 11 10:28:32.267512 master-1 kubenswrapper[4771]: E1011 10:28:32.266897 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert podName:295ceec5-4761-4cb3-95a7-cfc5cb35f03e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.266872214 +0000 UTC m=+146.241098695 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert") pod "controller-manager-5d9b59775c-wqj5f" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e") : secret "serving-cert" not found
Oct 11 10:28:32.267848 master-1 kubenswrapper[4771]: I1011 10:28:32.267686 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.268911 master-1 kubenswrapper[4771]: I1011 10:28:32.268853 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-wqj5f\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:32.310558 master-2 kubenswrapper[4776]: I1011 10:28:32.310427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.310558 master-2 kubenswrapper[4776]: I1011 10:28:32.310485 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: I1011 10:28:32.310624 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: I1011 10:28:32.310654 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310815 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310866 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.310850711 +0000 UTC m=+149.095277420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310927 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.311013 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.310995016 +0000 UTC m=+149.095421725 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found
Oct 11 10:28:32.311806 master-2 kubenswrapper[4776]: I1011 10:28:32.311761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.311993 master-2 kubenswrapper[4776]: I1011 10:28:32.311938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g"
Oct 11 10:28:32.367748 master-1 kubenswrapper[4771]: I1011 10:28:32.367652 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") pod \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") "
Oct 11 10:28:32.367748 master-1 kubenswrapper[4771]: I1011 10:28:32.367736 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") pod \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\" (UID: \"295ceec5-4761-4cb3-95a7-cfc5cb35f03e\") "
Oct 11 10:28:32.368177 master-1 kubenswrapper[4771]: I1011 10:28:32.367911 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:32.368177 master-1 kubenswrapper[4771]: I1011 10:28:32.367946 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:32.368177 master-1 kubenswrapper[4771]: E1011 10:28:32.368087 4771 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:32.368177 master-1 kubenswrapper[4771]: E1011 10:28:32.368147 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.368126861 +0000 UTC m=+145.342353302 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : secret "serving-cert" not found
Oct 11 10:28:32.368550 master-1 kubenswrapper[4771]: E1011 10:28:32.368337 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:32.368550 master-1 kubenswrapper[4771]: E1011 10:28:32.368495 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.36846196 +0000 UTC m=+145.342688431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found
Oct 11 10:28:32.368707 master-1 kubenswrapper[4771]: I1011 10:28:32.368597 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config" (OuterVolumeSpecName: "config") pod "295ceec5-4761-4cb3-95a7-cfc5cb35f03e" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:28:32.368993 master-1 kubenswrapper[4771]: I1011 10:28:32.368895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "295ceec5-4761-4cb3-95a7-cfc5cb35f03e" (UID: "295ceec5-4761-4cb3-95a7-cfc5cb35f03e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:28:32.412118 master-2 kubenswrapper[4776]: I1011 10:28:32.412024 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: I1011 10:28:32.412163 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412309 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412342 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412368 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.412354394 +0000 UTC m=+148.196781103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found
Oct 11 10:28:32.412586 master-2 kubenswrapper[4776]: E1011 10:28:32.412419 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.412386705 +0000 UTC m=+148.196813434 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found
Oct 11 10:28:32.468600 master-1 kubenswrapper[4771]: I1011 10:28:32.468547 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:32.468600 master-1 kubenswrapper[4771]: I1011 10:28:32.468585 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:32.614765 master-2 kubenswrapper[4776]: I1011 10:28:32.613983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:28:32.616856 master-2 kubenswrapper[4776]: I1011 10:28:32.616828 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Oct 11 10:28:32.620360 master-2 kubenswrapper[4776]: I1011 10:28:32.620175 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5mn8b" event={"ID":"18ca0678-0b0d-4d5d-bc50-a0a098301f38","Type":"ContainerStarted","Data":"ae98d45df9584e1ebff96e0b7a9a74984b149159c94abf567838341fa680617e"}
Oct 11 10:28:32.626754 master-2 kubenswrapper[4776]: I1011 10:28:32.626723 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Oct 11 10:28:32.635245 master-2 kubenswrapper[4776]: I1011 10:28:32.635180 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-5mn8b" podStartSLOduration=5.950253988 podStartE2EDuration="28.635164269s" podCreationTimestamp="2025-10-11 10:28:04 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.832383232 +0000 UTC m=+120.616809941" lastFinishedPulling="2025-10-11 10:28:28.517293513 +0000 UTC m=+143.301720222" observedRunningTime="2025-10-11 10:28:32.633518413 +0000 UTC m=+147.417945172" watchObservedRunningTime="2025-10-11 10:28:32.635164269 +0000 UTC m=+147.419590978"
Oct 11 10:28:32.640231 master-2 kubenswrapper[4776]: I1011 10:28:32.640181 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:28:32.671323 master-1 kubenswrapper[4771]: I1011 10:28:32.670989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:28:32.675858 master-1 kubenswrapper[4771]: I1011 10:28:32.675793 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Oct 11 10:28:32.678231 master-2 kubenswrapper[4776]: I1011 10:28:32.678151 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd"
Oct 11 10:28:32.686520 master-1 kubenswrapper[4771]: I1011 10:28:32.686282 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Oct 11 10:28:32.698699 master-1 kubenswrapper[4771]: I1011 10:28:32.698617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktrh\" (UniqueName: \"kubernetes.io/projected/0bde275d-f0a5-4bea-93f7-edd2077e46b4-kube-api-access-hktrh\") pod \"network-check-target-4pm7x\" (UID: \"0bde275d-f0a5-4bea-93f7-edd2077e46b4\") " pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:28:32.846916 master-2 kubenswrapper[4776]: I1011 10:28:32.846438 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"]
Oct 11 10:28:32.854007 master-2 kubenswrapper[4776]: W1011 10:28:32.853938 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6543c6f_6f31_431e_9327_60c8cfd70c7e.slice/crio-84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999 WatchSource:0}: Error finding container 84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999: Status 404 returned error can't find the container with id 84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999
Oct 11 10:28:32.878746 master-1 kubenswrapper[4771]: I1011 10:28:32.878602 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4pm7x"
Oct 11 10:28:33.042127 master-1 kubenswrapper[4771]: I1011 10:28:33.042048 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"
Oct 11 10:28:33.080327 master-1 kubenswrapper[4771]: I1011 10:28:33.080281 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-857df878cf-tz7h4"]
Oct 11 10:28:33.080824 master-1 kubenswrapper[4771]: I1011 10:28:33.080779 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.082474 master-1 kubenswrapper[4771]: I1011 10:28:33.082403 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"]
Oct 11 10:28:33.084953 master-1 kubenswrapper[4771]: I1011 10:28:33.084921 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 11 10:28:33.085115 master-1 kubenswrapper[4771]: I1011 10:28:33.084978 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 11 10:28:33.085656 master-1 kubenswrapper[4771]: I1011 10:28:33.085272 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-wqj5f"]
Oct 11 10:28:33.085758 master-1 kubenswrapper[4771]: I1011 10:28:33.085711 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 11 10:28:33.086748 master-1 kubenswrapper[4771]: I1011 10:28:33.086698 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 11 10:28:33.087320 master-1 kubenswrapper[4771]: I1011 10:28:33.087257 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 11 10:28:33.091658 master-1 kubenswrapper[4771]: I1011 10:28:33.091612 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857df878cf-tz7h4"]
Oct 11 10:28:33.095938 master-1 kubenswrapper[4771]: I1011 10:28:33.095859 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 11 10:28:33.098592 master-1 kubenswrapper[4771]: I1011 10:28:33.098238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-4pm7x"]
Oct 11 10:28:33.108424 master-1 kubenswrapper[4771]: W1011 10:28:33.108300 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bde275d_f0a5_4bea_93f7_edd2077e46b4.slice/crio-edbdb16084802a4579f0e809d7a0a2e2997637364095207122799afce742249b WatchSource:0}: Error finding container edbdb16084802a4579f0e809d7a0a2e2997637364095207122799afce742249b: Status 404 returned error can't find the container with id edbdb16084802a4579f0e809d7a0a2e2997637364095207122799afce742249b
Oct 11 10:28:33.177195 master-1 kubenswrapper[4771]: I1011 10:28:33.177131 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.177305 master-1 kubenswrapper[4771]: I1011 10:28:33.177213 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.177305 master-1 kubenswrapper[4771]: I1011 10:28:33.177258 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.177442 master-1 kubenswrapper[4771]: I1011 10:28:33.177325 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktq9\" (UniqueName: \"kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.177442 master-1 kubenswrapper[4771]: I1011 10:28:33.177385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.177442 master-1 kubenswrapper[4771]: I1011 10:28:33.177435 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:33.177547 master-1 kubenswrapper[4771]: I1011 10:28:33.177457 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/295ceec5-4761-4cb3-95a7-cfc5cb35f03e-client-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:28:33.279032 master-1 kubenswrapper[4771]: I1011 10:28:33.278650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktq9\" (UniqueName: \"kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.279032 master-1 kubenswrapper[4771]: I1011 10:28:33.279001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: I1011 10:28:33.279071 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: I1011 10:28:33.279115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: I1011 10:28:33.279149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: E1011 10:28:33.279340 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: E1011 10:28:33.279422 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: E1011 10:28:33.279467 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.779439263 +0000 UTC m=+145.753665744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : secret "serving-cert" not found
Oct 11 10:28:33.280293 master-1 kubenswrapper[4771]: E1011 10:28:33.279684 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.779649869 +0000 UTC m=+145.753876350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found
Oct 11 10:28:33.281320 master-1 kubenswrapper[4771]: I1011 10:28:33.281236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.281762 master-1 kubenswrapper[4771]: I1011 10:28:33.281700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.305769 master-2 kubenswrapper[4776]: I1011 10:28:33.305667 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"]
Oct 11 10:28:33.306003 master-2 kubenswrapper[4776]: E1011 10:28:33.305892 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" podUID="a4117af6-90eb-4a97-af54-06b199075a28"
Oct 11 10:28:33.307454 master-1 kubenswrapper[4771]: I1011 10:28:33.307336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktq9\" (UniqueName: \"kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4"
Oct 11 10:28:33.380333 master-1 kubenswrapper[4771]: I1011 10:28:33.380080 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:33.380333 master-1 kubenswrapper[4771]: I1011 10:28:33.380188 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:33.380333 master-1 kubenswrapper[4771]: E1011 10:28:33.380325 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:33.381037 master-1 kubenswrapper[4771]: E1011 10:28:33.380483 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.380451574 +0000 UTC m=+147.354678055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found
Oct 11 10:28:33.381037 master-1 kubenswrapper[4771]: E1011 10:28:33.380334 4771 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:33.381037 master-1 kubenswrapper[4771]: E1011 10:28:33.380623 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.380590328 +0000 UTC m=+147.354816869 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : secret "serving-cert" not found
Oct 11 10:28:33.425488 master-2 kubenswrapper[4776]: I1011 10:28:33.425364 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:33.425488 master-2 kubenswrapper[4776]: E1011 10:28:33.425450 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 11 10:28:33.425801 master-2 kubenswrapper[4776]: E1011 10:28:33.425509 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.425495527 +0000 UTC m=+150.209922236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found
Oct 11 10:28:33.425856 master-2 kubenswrapper[4776]: I1011 10:28:33.425797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:33.425901 master-2 kubenswrapper[4776]: E1011 10:28:33.425890 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:33.426100 master-2 kubenswrapper[4776]: E1011 10:28:33.426048 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.426019083 +0000 UTC m=+150.210445832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found
Oct 11 10:28:33.627963 master-2 kubenswrapper[4776]: I1011 10:28:33.627779 4776 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:33.627963 master-2 kubenswrapper[4776]: I1011 10:28:33.627764 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jdkgd" event={"ID":"f6543c6f-6f31-431e-9327-60c8cfd70c7e","Type":"ContainerStarted","Data":"84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999"} Oct 11 10:28:33.637074 master-2 kubenswrapper[4776]: I1011 10:28:33.637030 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.729883 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.729992 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730046 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730406 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config" 
(OuterVolumeSpecName: "config") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730838 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.732378 master-2 kubenswrapper[4776]: I1011 10:28:33.732292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:33.734293 master-2 kubenswrapper[4776]: I1011 10:28:33.734230 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx" (OuterVolumeSpecName: "kube-api-access-wfjtx") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "kube-api-access-wfjtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:33.785112 master-1 kubenswrapper[4771]: I1011 10:28:33.785045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:33.785387 master-1 kubenswrapper[4771]: I1011 10:28:33.785179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:33.785387 master-1 kubenswrapper[4771]: I1011 10:28:33.785244 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:33.785387 master-1 kubenswrapper[4771]: E1011 10:28:33.785312 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:33.785520 master-1 kubenswrapper[4771]: E1011 10:28:33.785411 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:33.785520 master-1 kubenswrapper[4771]: E1011 10:28:33.785313 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:33.785596 master-1 kubenswrapper[4771]: E1011 10:28:33.785436 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.785413561 +0000 UTC m=+146.759640012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found Oct 11 10:28:33.785596 master-1 kubenswrapper[4771]: E1011 10:28:33.785581 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:37.785548144 +0000 UTC m=+149.759774625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found Oct 11 10:28:33.785690 master-1 kubenswrapper[4771]: E1011 10:28:33.785619 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.785605896 +0000 UTC m=+146.759832397 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : secret "serving-cert" not found Oct 11 10:28:33.832069 master-2 kubenswrapper[4776]: I1011 10:28:33.832007 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:33.832625 master-2 kubenswrapper[4776]: I1011 10:28:33.832606 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.832738 master-2 kubenswrapper[4776]: I1011 10:28:33.832723 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.833007 master-2 kubenswrapper[4776]: E1011 10:28:33.832936 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:33.833106 master-2 kubenswrapper[4776]: E1011 10:28:33.833083 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:37.833054144 +0000 UTC m=+152.617480843 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:33.849645 master-2 kubenswrapper[4776]: I1011 10:28:33.849592 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:33.850790 master-2 kubenswrapper[4776]: E1011 10:28:33.850765 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" podUID="9bf5fcc5-d60e-45da-976d-56ac881274f1" Oct 11 10:28:34.047467 master-1 kubenswrapper[4771]: I1011 10:28:34.047232 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-4pm7x" event={"ID":"0bde275d-f0a5-4bea-93f7-edd2077e46b4","Type":"ContainerStarted","Data":"88e945236d6876de3adbd520e22a037ce30e88ce8cfbf1da3226eb75f03ff32f"} Oct 11 10:28:34.047467 master-1 kubenswrapper[4771]: I1011 10:28:34.047317 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-4pm7x" event={"ID":"0bde275d-f0a5-4bea-93f7-edd2077e46b4","Type":"ContainerStarted","Data":"edbdb16084802a4579f0e809d7a0a2e2997637364095207122799afce742249b"} Oct 11 10:28:34.047467 master-1 kubenswrapper[4771]: I1011 10:28:34.047429 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:28:34.063905 master-1 kubenswrapper[4771]: I1011 10:28:34.063787 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-4pm7x" podStartSLOduration=66.063763757 podStartE2EDuration="1m6.063763757s" podCreationTimestamp="2025-10-11 10:27:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:34.062802522 +0000 UTC m=+146.037028973" watchObservedRunningTime="2025-10-11 10:28:34.063763757 +0000 UTC m=+146.037990238" Oct 11 10:28:34.338362 master-2 kubenswrapper[4776]: I1011 10:28:34.338267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.338758 master-2 kubenswrapper[4776]: I1011 10:28:34.338739 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.339042 master-2 kubenswrapper[4776]: E1011 10:28:34.339007 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:34.339104 master-2 kubenswrapper[4776]: E1011 10:28:34.339082 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.339063705 +0000 UTC m=+153.123490414 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:34.339401 master-2 kubenswrapper[4776]: E1011 10:28:34.339332 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:34.339536 master-2 kubenswrapper[4776]: E1011 10:28:34.339500 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.339466756 +0000 UTC m=+153.123893465 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:34.437804 master-1 kubenswrapper[4771]: I1011 10:28:34.437702 4771 scope.go:117] "RemoveContainer" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d" Oct 11 10:28:34.438662 master-1 kubenswrapper[4771]: E1011 10:28:34.437953 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:28:34.450790 master-1 kubenswrapper[4771]: I1011 10:28:34.450723 4771 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="295ceec5-4761-4cb3-95a7-cfc5cb35f03e" path="/var/lib/kubelet/pods/295ceec5-4761-4cb3-95a7-cfc5cb35f03e/volumes" Oct 11 10:28:34.633563 master-2 kubenswrapper[4776]: I1011 10:28:34.633394 4776 generic.go:334] "Generic (PLEG): container finished" podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6" exitCode=0 Oct 11 10:28:34.633563 master-2 kubenswrapper[4776]: I1011 10:28:34.633473 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6"} Oct 11 10:28:34.634393 master-2 kubenswrapper[4776]: I1011 10:28:34.634336 4776 scope.go:117] "RemoveContainer" containerID="79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6" Oct 11 10:28:34.636888 master-2 kubenswrapper[4776]: I1011 10:28:34.636843 4776 generic.go:334] "Generic (PLEG): container finished" podID="e540333c-4b4d-439e-a82a-cd3a97c95a43" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" exitCode=0 Oct 11 10:28:34.636978 master-2 kubenswrapper[4776]: I1011 10:28:34.636920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerDied","Data":"0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103"} Oct 11 10:28:34.636978 master-2 kubenswrapper[4776]: I1011 10:28:34.636972 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:34.637069 master-2 kubenswrapper[4776]: I1011 10:28:34.637038 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.637705 master-2 kubenswrapper[4776]: I1011 10:28:34.637686 4776 scope.go:117] "RemoveContainer" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" Oct 11 10:28:34.647013 master-2 kubenswrapper[4776]: I1011 10:28:34.646986 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:34.707448 master-2 kubenswrapper[4776]: I1011 10:28:34.707069 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:34.708817 master-2 kubenswrapper[4776]: I1011 10:28:34.708781 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:34.710183 master-2 kubenswrapper[4776]: I1011 10:28:34.710150 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:28:34.711053 master-2 kubenswrapper[4776]: I1011 10:28:34.710987 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.714135 master-2 kubenswrapper[4776]: I1011 10:28:34.714078 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715268 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715270 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715313 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715374 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.717691 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:28:34.723972 master-2 kubenswrapper[4776]: I1011 10:28:34.723908 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:28:34.744459 master-2 kubenswrapper[4776]: I1011 10:28:34.744390 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744459 master-2 kubenswrapper[4776]: I1011 10:28:34.744475 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744528 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744533 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744557 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744606 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744628 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") 
pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744645 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744695 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744714 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.744970 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.745120 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.745138 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745381 master-2 kubenswrapper[4776]: I1011 10:28:34.745340 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745429 master-2 kubenswrapper[4776]: I1011 10:28:34.745420 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745485 master-2 kubenswrapper[4776]: I1011 10:28:34.745459 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745485 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config" (OuterVolumeSpecName: "config") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745514 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745521 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745649 master-2 kubenswrapper[4776]: I1011 10:28:34.745558 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745667 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745697 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745708 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745717 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745726 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745735 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745747 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.746966 master-2 kubenswrapper[4776]: I1011 10:28:34.746779 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.748405 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.748824 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq" (OuterVolumeSpecName: "kube-api-access-lxmhq") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "kube-api-access-lxmhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.749049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.753018 master-2 kubenswrapper[4776]: I1011 10:28:34.752980 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.795475 master-1 kubenswrapper[4771]: I1011 10:28:34.795270 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:34.795475 master-1 kubenswrapper[4771]: I1011 10:28:34.795459 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:34.795766 master-1 kubenswrapper[4771]: E1011 10:28:34.795622 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:34.795766 master-1 kubenswrapper[4771]: E1011 10:28:34.795697 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.795675961 +0000 UTC m=+148.769902442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found Oct 11 10:28:34.796151 master-1 kubenswrapper[4771]: E1011 10:28:34.796042 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:34.796151 master-1 kubenswrapper[4771]: E1011 10:28:34.796124 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.796105552 +0000 UTC m=+148.770332033 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : secret "serving-cert" not found Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847025 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847142 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 
kubenswrapper[4776]: I1011 10:28:34.847175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847248 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847402 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847420 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847435 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847450 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847464 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: E1011 10:28:34.847906 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: E1011 10:28:34.847962 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.347947098 +0000 UTC m=+150.132373807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:34.848482 master-2 kubenswrapper[4776]: E1011 10:28:34.848122 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:34.848482 master-2 kubenswrapper[4776]: E1011 10:28:34.848149 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.348142483 +0000 UTC m=+150.132569192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:34.849380 master-2 kubenswrapper[4776]: I1011 10:28:34.849326 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.849380 master-2 kubenswrapper[4776]: I1011 10:28:34.849356 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.872528 master-2 
kubenswrapper[4776]: I1011 10:28:34.872444 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353291 master-2 kubenswrapper[4776]: I1011 10:28:35.353212 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353537 master-2 kubenswrapper[4776]: I1011 10:28:35.353344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353537 master-2 kubenswrapper[4776]: E1011 10:28:35.353506 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:35.353620 master-2 kubenswrapper[4776]: E1011 10:28:35.353570 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.353549757 +0000 UTC m=+151.137976486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:35.353763 master-2 kubenswrapper[4776]: E1011 10:28:35.353695 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:35.353871 master-2 kubenswrapper[4776]: E1011 10:28:35.353841 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.353811725 +0000 UTC m=+151.138238444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:35.401387 master-1 kubenswrapper[4771]: I1011 10:28:35.401259 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:35.401387 master-1 kubenswrapper[4771]: I1011 10:28:35.401339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 
10:28:35.401960 master-1 kubenswrapper[4771]: E1011 10:28:35.401497 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:35.401960 master-1 kubenswrapper[4771]: E1011 10:28:35.401518 4771 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:35.401960 master-1 kubenswrapper[4771]: E1011 10:28:35.401608 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.40158344 +0000 UTC m=+151.375809911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : secret "serving-cert" not found Oct 11 10:28:35.401960 master-1 kubenswrapper[4771]: E1011 10:28:35.401633 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.401621951 +0000 UTC m=+151.375848432 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found Oct 11 10:28:35.454553 master-2 kubenswrapper[4776]: I1011 10:28:35.454484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:35.454735 master-2 kubenswrapper[4776]: I1011 10:28:35.454587 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:35.454789 master-2 kubenswrapper[4776]: E1011 10:28:35.454763 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:35.454837 master-2 kubenswrapper[4776]: E1011 10:28:35.454823 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.454808233 +0000 UTC m=+154.239234932 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:35.454947 master-2 kubenswrapper[4776]: E1011 10:28:35.454851 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:35.455068 master-2 kubenswrapper[4776]: E1011 10:28:35.455022 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.454988758 +0000 UTC m=+154.239415477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:35.642518 master-2 kubenswrapper[4776]: I1011 10:28:35.642394 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"f88349d5575db3cbd9b37db276b5c369862cf7f868981c67616f19244c7c612f"} Oct 11 10:28:35.643775 master-2 kubenswrapper[4776]: I1011 10:28:35.643744 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8"} Oct 11 10:28:35.643849 master-2 kubenswrapper[4776]: I1011 10:28:35.643785 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:35.689616 master-2 kubenswrapper[4776]: I1011 10:28:35.688146 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:35.693705 master-2 kubenswrapper[4776]: I1011 10:28:35.692367 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:35.759210 master-2 kubenswrapper[4776]: I1011 10:28:35.758694 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:36.063477 master-2 kubenswrapper[4776]: I1011 10:28:36.063415 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf5fcc5-d60e-45da-976d-56ac881274f1" path="/var/lib/kubelet/pods/9bf5fcc5-d60e-45da-976d-56ac881274f1/volumes" Oct 11 10:28:36.064095 master-2 kubenswrapper[4776]: I1011 10:28:36.063761 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4117af6-90eb-4a97-af54-06b199075a28" path="/var/lib/kubelet/pods/a4117af6-90eb-4a97-af54-06b199075a28/volumes" Oct 11 10:28:36.369365 master-2 kubenswrapper[4776]: I1011 10:28:36.369240 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:36.369365 master-2 kubenswrapper[4776]: I1011 10:28:36.369337 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: 
\"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:36.369555 master-2 kubenswrapper[4776]: E1011 10:28:36.369508 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:36.369601 master-2 kubenswrapper[4776]: E1011 10:28:36.369566 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.369547593 +0000 UTC m=+153.153974312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:36.369927 master-2 kubenswrapper[4776]: E1011 10:28:36.369877 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:36.369984 master-2 kubenswrapper[4776]: E1011 10:28:36.369973 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.369954005 +0000 UTC m=+153.154380714 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:36.470713 master-2 kubenswrapper[4776]: I1011 10:28:36.470384 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:36.470713 master-2 kubenswrapper[4776]: I1011 10:28:36.470726 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: I1011 10:28:36.470754 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470562 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470868 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.470845741 +0000 UTC m=+183.255272460 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470897 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470954 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.470939643 +0000 UTC m=+183.255366352 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:36.478420 master-2 kubenswrapper[4776]: I1011 10:28:36.478328 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.565937 master-2 kubenswrapper[4776]: I1011 10:28:36.565760 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.572171 master-2 kubenswrapper[4776]: I1011 10:28:36.572131 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572183 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572229 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572269 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572322 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572369 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572400 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572471 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.572633 master-2 kubenswrapper[4776]: E1011 10:28:36.572469 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:36.572633 master-2 kubenswrapper[4776]: E1011 10:28:36.572570 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:29:08.572546929 +0000 UTC m=+183.356973738 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:36.572906 master-2 kubenswrapper[4776]: I1011 10:28:36.572497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.572951 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.572986 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573020 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573049 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573092 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573103 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573177 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.573158836 +0000 UTC m=+183.357585545 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573582 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573608 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.573601779 +0000 UTC m=+183.358028488 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573705 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573729 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.573722562 +0000 UTC m=+183.358149271 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:36.574578 master-2 kubenswrapper[4776]: E1011 10:28:36.574344 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:36.574578 master-2 kubenswrapper[4776]: E1011 10:28:36.574448 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574417273 +0000 UTC m=+183.358844022 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:36.574698 master-2 kubenswrapper[4776]: E1011 10:28:36.574566 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:36.574785 master-2 kubenswrapper[4776]: E1011 10:28:36.574755 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574710721 +0000 UTC m=+183.359137490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:36.574904 master-2 kubenswrapper[4776]: E1011 10:28:36.574862 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:36.575099 master-2 kubenswrapper[4776]: E1011 10:28:36.574976 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:36.575360 master-2 kubenswrapper[4776]: E1011 10:28:36.574989 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics 
podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574954488 +0000 UTC m=+183.359381237 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:36.575452 master-2 kubenswrapper[4776]: E1011 10:28:36.575399 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575381021 +0000 UTC m=+183.359807740 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:36.575452 master-2 kubenswrapper[4776]: E1011 10:28:36.575075 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:36.575539 master-2 kubenswrapper[4776]: E1011 10:28:36.575455 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575444812 +0000 UTC m=+183.359871531 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:36.575577 master-2 kubenswrapper[4776]: E1011 10:28:36.575547 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:36.575577 master-2 kubenswrapper[4776]: E1011 10:28:36.575576 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575567716 +0000 UTC m=+183.359994435 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:36.577405 master-2 kubenswrapper[4776]: I1011 10:28:36.577362 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.577536 master-2 kubenswrapper[4776]: I1011 10:28:36.577502 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " 
pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.580329 master-2 kubenswrapper[4776]: I1011 10:28:36.580286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.580736 master-2 kubenswrapper[4776]: I1011 10:28:36.580638 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.580942 master-2 kubenswrapper[4776]: I1011 10:28:36.580888 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.584417 master-2 kubenswrapper[4776]: I1011 10:28:36.584374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.590185 master-2 kubenswrapper[4776]: I1011 10:28:36.589255 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.625645 master-2 kubenswrapper[4776]: I1011 10:28:36.623246 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.652698 master-2 kubenswrapper[4776]: I1011 10:28:36.652590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jdkgd" event={"ID":"f6543c6f-6f31-431e-9327-60c8cfd70c7e","Type":"ContainerStarted","Data":"4f73e18df9c7f779acf2f55c8c41ee29b55b8adf693c1c5eb81eeb622e853772"} Oct 11 10:28:36.653381 master-2 kubenswrapper[4776]: I1011 10:28:36.653351 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:36.656220 master-2 kubenswrapper[4776]: I1011 10:28:36.655988 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"0add7d1bda760e1d5b101492c65e520fdafceed8322aea33238c621f47687a61"} Oct 11 10:28:36.711113 master-2 kubenswrapper[4776]: I1011 10:28:36.711062 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.746077 master-2 kubenswrapper[4776]: I1011 10:28:36.746018 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-jdkgd" podStartSLOduration=65.488807731 podStartE2EDuration="1m8.745999324s" podCreationTimestamp="2025-10-11 10:27:28 +0000 UTC" firstStartedPulling="2025-10-11 10:28:32.856730829 +0000 UTC m=+147.641157538" lastFinishedPulling="2025-10-11 10:28:36.113922422 +0000 UTC m=+150.898349131" observedRunningTime="2025-10-11 10:28:36.665545797 +0000 UTC m=+151.449972526" watchObservedRunningTime="2025-10-11 10:28:36.745999324 +0000 UTC m=+151.530426033" Oct 11 10:28:36.746324 master-2 kubenswrapper[4776]: I1011 10:28:36.746261 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"] Oct 11 10:28:36.766569 master-2 kubenswrapper[4776]: W1011 10:28:36.758332 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8dc5b8_3c48_4dba_9992_6e269ca133f1.slice/crio-188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1 WatchSource:0}: Error finding container 188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1: Status 404 returned error can't find the container with id 188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1 Oct 11 10:28:36.792178 master-2 kubenswrapper[4776]: I1011 10:28:36.792115 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.808972 master-2 kubenswrapper[4776]: I1011 10:28:36.808513 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.817819 master-1 kubenswrapper[4771]: I1011 10:28:36.817719 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:36.818700 master-1 kubenswrapper[4771]: E1011 10:28:36.817840 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:36.818700 master-1 kubenswrapper[4771]: I1011 10:28:36.817889 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:36.818700 master-1 kubenswrapper[4771]: E1011 10:28:36.817920 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:40.817899774 +0000 UTC m=+152.792126235 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found Oct 11 10:28:36.818700 master-1 kubenswrapper[4771]: E1011 10:28:36.818117 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:36.818700 master-1 kubenswrapper[4771]: E1011 10:28:36.818208 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:40.818182442 +0000 UTC m=+152.792408923 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : secret "serving-cert" not found Oct 11 10:28:36.823956 master-2 kubenswrapper[4776]: I1011 10:28:36.822404 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.883660 master-2 kubenswrapper[4776]: I1011 10:28:36.883609 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.895336 master-2 kubenswrapper[4776]: I1011 10:28:36.895295 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"] Oct 11 10:28:37.006089 master-2 kubenswrapper[4776]: I1011 10:28:37.006037 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"] Oct 11 10:28:37.019538 master-2 kubenswrapper[4776]: W1011 10:28:37.019497 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb16a4f10_c724_43cf_acd4_b3f5aa575653.slice/crio-5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d WatchSource:0}: Error finding container 5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d: Status 404 returned error can't find the container with id 5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d Oct 11 10:28:37.049030 master-2 kubenswrapper[4776]: I1011 10:28:37.048974 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"] Oct 11 10:28:37.054882 master-2 kubenswrapper[4776]: W1011 10:28:37.054835 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb562963f_7112_411a_a64c_3b8eba909c59.slice/crio-82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235 WatchSource:0}: Error finding container 82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235: Status 404 returned error can't find the container with id 82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235 Oct 11 10:28:37.061925 master-2 kubenswrapper[4776]: I1011 10:28:37.061810 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"] Oct 11 10:28:37.070024 master-2 kubenswrapper[4776]: W1011 10:28:37.069963 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ebe6a0e_5a45_4c92_bbb5_77f3ec1fe55c.slice/crio-4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b WatchSource:0}: Error finding container 4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b: Status 404 returned error can't find the container with id 4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b Oct 11 10:28:37.085488 master-2 kubenswrapper[4776]: I1011 10:28:37.085408 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"] Oct 11 10:28:37.093644 master-2 kubenswrapper[4776]: W1011 10:28:37.093592 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba1e82e_9f3e_4273_836e_9407cc394b10.slice/crio-288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c WatchSource:0}: Error finding container 288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c: Status 404 returned error can't find the container with id 288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c Oct 11 10:28:37.603761 master-2 kubenswrapper[4776]: I1011 10:28:37.603265 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:37.604394 master-2 kubenswrapper[4776]: I1011 10:28:37.604358 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.607385 master-2 kubenswrapper[4776]: I1011 10:28:37.607257 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:28:37.608134 master-2 kubenswrapper[4776]: I1011 10:28:37.608079 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:28:37.608417 master-2 kubenswrapper[4776]: I1011 10:28:37.608386 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:28:37.608761 master-2 kubenswrapper[4776]: I1011 10:28:37.608626 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:28:37.609097 master-2 kubenswrapper[4776]: I1011 10:28:37.609031 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610401 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610407 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610486 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:28:37.612217 master-2 kubenswrapper[4776]: I1011 10:28:37.610884 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:37.615531 master-2 kubenswrapper[4776]: I1011 10:28:37.615391 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:37.617781 master-2 kubenswrapper[4776]: I1011 10:28:37.617693 4776 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:28:37.661900 master-2 kubenswrapper[4776]: I1011 10:28:37.661840 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b"} Oct 11 10:28:37.663510 master-2 kubenswrapper[4776]: I1011 10:28:37.663463 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"a2f213e229cd515098c17350a5db040adcc59dc05e9b25b48ab6c73159f7a768"} Oct 11 10:28:37.663587 master-2 kubenswrapper[4776]: I1011 10:28:37.663521 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1"} Oct 11 10:28:37.664787 master-2 kubenswrapper[4776]: I1011 10:28:37.664751 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"267425053e21eaecb5876aa58130583543e28c9e0ceacc764ad483ef9c1a09d8"} Oct 11 10:28:37.666023 master-2 kubenswrapper[4776]: I1011 10:28:37.665974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" event={"ID":"b16a4f10-c724-43cf-acd4-b3f5aa575653","Type":"ContainerStarted","Data":"5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d"} Oct 11 10:28:37.666958 master-2 kubenswrapper[4776]: I1011 10:28:37.666918 4776 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"3ca9a32abe3eeaa78ad3b955ed2a9db43a464c56268719e20d096ebb23a8bc9c"} Oct 11 10:28:37.668405 master-2 kubenswrapper[4776]: I1011 10:28:37.668375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"8b3ae054e2080d8747bb5d4193692c2c66a2b67a445b4b6f41b27d918beea8e3"} Oct 11 10:28:37.668489 master-2 kubenswrapper[4776]: I1011 10:28:37.668411 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c"} Oct 11 10:28:37.669635 master-2 kubenswrapper[4776]: I1011 10:28:37.669606 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" event={"ID":"b562963f-7112-411a-a64c-3b8eba909c59","Type":"ContainerStarted","Data":"82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235"} Oct 11 10:28:37.701371 master-2 kubenswrapper[4776]: I1011 10:28:37.701319 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701379 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: 
\"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701445 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701488 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701626 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701656 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701739 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701851 master-2 kubenswrapper[4776]: I1011 10:28:37.701761 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701851 master-2 kubenswrapper[4776]: I1011 10:28:37.701781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " 
pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802539 master-2 kubenswrapper[4776]: I1011 10:28:37.802454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802552 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802598 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802617 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: 
\"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802663 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802795 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802791 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod 
\"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802937 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.804372 master-2 kubenswrapper[4776]: I1011 10:28:37.804348 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.804440 master-2 kubenswrapper[4776]: I1011 10:28:37.804421 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.806516 master-2 kubenswrapper[4776]: I1011 10:28:37.806437 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod 
\"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.807917 master-2 kubenswrapper[4776]: I1011 10:28:37.807752 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.808844 master-2 kubenswrapper[4776]: I1011 10:28:37.808275 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.808844 master-2 kubenswrapper[4776]: I1011 10:28:37.808329 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.811101 master-2 kubenswrapper[4776]: I1011 10:28:37.810961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.812053 master-2 kubenswrapper[4776]: I1011 10:28:37.812017 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: 
\"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.824440 master-2 kubenswrapper[4776]: I1011 10:28:37.824402 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.829270 master-1 kubenswrapper[4771]: I1011 10:28:37.829146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:37.830065 master-1 kubenswrapper[4771]: E1011 10:28:37.829320 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:37.830065 master-1 kubenswrapper[4771]: E1011 10:28:37.829467 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:45.829441428 +0000 UTC m=+157.803667919 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found Oct 11 10:28:37.832548 master-2 kubenswrapper[4776]: I1011 10:28:37.832508 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.924844 master-2 kubenswrapper[4776]: I1011 10:28:37.924025 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:38.133213 master-2 kubenswrapper[4776]: I1011 10:28:38.133172 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:38.408608 master-2 kubenswrapper[4776]: I1011 10:28:38.408186 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: I1011 10:28:38.408666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: E1011 10:28:38.408407 4776 secret.go:189] 
Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: E1011 10:28:38.408788 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:42.408766233 +0000 UTC m=+157.193192942 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:38.408904 master-2 kubenswrapper[4776]: E1011 10:28:38.408834 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:38.408904 master-2 kubenswrapper[4776]: E1011 10:28:38.408897 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:42.408882187 +0000 UTC m=+157.193308896 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:38.675602 master-2 kubenswrapper[4776]: I1011 10:28:38.675494 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"632a135875099c1d39a46b5212f4753eda648d4f1ce35df8cc0f167cab38ce86"} Oct 11 10:28:39.447972 master-1 kubenswrapper[4771]: I1011 10:28:39.447689 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:39.447972 master-1 kubenswrapper[4771]: I1011 10:28:39.447752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:39.447972 master-1 kubenswrapper[4771]: E1011 10:28:39.447864 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:39.449047 master-1 kubenswrapper[4771]: E1011 10:28:39.447996 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:47.447966079 +0000 UTC m=+159.422192550 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found Oct 11 10:28:39.449047 master-1 kubenswrapper[4771]: E1011 10:28:39.447886 4771 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:39.449047 master-1 kubenswrapper[4771]: E1011 10:28:39.448070 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.448053542 +0000 UTC m=+159.422280063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : secret "serving-cert" not found Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: I1011 10:28:39.524429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: E1011 10:28:39.524595 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: I1011 10:28:39.524638 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: E1011 10:28:39.524661 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.524641056 +0000 UTC m=+162.309067765 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:39.525531 master-2 kubenswrapper[4776]: E1011 10:28:39.524863 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:39.525531 master-2 kubenswrapper[4776]: E1011 10:28:39.524985 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.524953775 +0000 UTC m=+162.309380554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:40.493556 master-1 kubenswrapper[4771]: I1011 10:28:40.493463 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 11 10:28:40.494386 master-1 kubenswrapper[4771]: I1011 10:28:40.493971 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.497645 master-1 kubenswrapper[4771]: I1011 10:28:40.497588 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:28:40.548816 master-1 kubenswrapper[4771]: I1011 10:28:40.501772 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 11 10:28:40.560863 master-1 kubenswrapper[4771]: I1011 10:28:40.560770 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.560863 master-1 kubenswrapper[4771]: I1011 10:28:40.560859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.561205 master-1 kubenswrapper[4771]: I1011 10:28:40.561102 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.662548 master-1 kubenswrapper[4771]: I1011 10:28:40.662390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.662548 master-1 kubenswrapper[4771]: I1011 10:28:40.662554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.662957 master-1 kubenswrapper[4771]: I1011 10:28:40.662554 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.662957 master-1 kubenswrapper[4771]: I1011 10:28:40.662681 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.662957 master-1 kubenswrapper[4771]: I1011 10:28:40.662609 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.687191 master-1 kubenswrapper[4771]: I1011 10:28:40.687093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access\") pod \"installer-1-master-1\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") " pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.856945 master-1 kubenswrapper[4771]: I1011 10:28:40.856833 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1" Oct 11 10:28:40.865451 master-1 kubenswrapper[4771]: I1011 10:28:40.865395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:40.865557 master-1 kubenswrapper[4771]: I1011 10:28:40.865486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:40.865706 master-1 kubenswrapper[4771]: E1011 10:28:40.865663 4771 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:40.865781 master-1 kubenswrapper[4771]: E1011 10:28:40.865723 4771 configmap.go:193] 
Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:40.865854 master-1 kubenswrapper[4771]: E1011 10:28:40.865750 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:48.865729562 +0000 UTC m=+160.839956013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : secret "serving-cert" not found Oct 11 10:28:40.865933 master-1 kubenswrapper[4771]: E1011 10:28:40.865863 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:48.865837655 +0000 UTC m=+160.840064196 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found Oct 11 10:28:41.096526 master-1 kubenswrapper[4771]: I1011 10:28:41.096340 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 11 10:28:41.106135 master-1 kubenswrapper[4771]: W1011 10:28:41.105986 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod007dcbab_9e3e_4dcd_9ad9_0ea8dd07dfc7.slice/crio-862c7a0762d99806cbf395198a8a115efc48c49b1a93e1cd22e9f82545990f2e WatchSource:0}: Error finding container 862c7a0762d99806cbf395198a8a115efc48c49b1a93e1cd22e9f82545990f2e: Status 404 returned error can't find the container with id 862c7a0762d99806cbf395198a8a115efc48c49b1a93e1cd22e9f82545990f2e Oct 11 10:28:42.072232 master-1 kubenswrapper[4771]: I1011 10:28:42.071976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7","Type":"ContainerStarted","Data":"862c7a0762d99806cbf395198a8a115efc48c49b1a93e1cd22e9f82545990f2e"} Oct 11 10:28:42.462199 master-2 kubenswrapper[4776]: I1011 10:28:42.462120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: I1011 10:28:42.462311 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod 
\"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462316 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462506 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:50.462487602 +0000 UTC m=+165.246914311 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462358 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462657 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:50.462621906 +0000 UTC m=+165.247048605 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:43.418786 master-1 kubenswrapper[4771]: I1011 10:28:43.418599 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"] Oct 11 10:28:43.419856 master-1 kubenswrapper[4771]: I1011 10:28:43.419246 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.423335 master-1 kubenswrapper[4771]: I1011 10:28:43.423264 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:28:43.426139 master-1 kubenswrapper[4771]: I1011 10:28:43.426051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:28:43.426436 master-1 kubenswrapper[4771]: I1011 10:28:43.426208 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:28:43.426555 master-1 kubenswrapper[4771]: I1011 10:28:43.426457 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:28:43.426555 master-1 kubenswrapper[4771]: I1011 10:28:43.426461 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:43.426723 master-1 kubenswrapper[4771]: I1011 10:28:43.426596 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:28:43.426936 master-1 kubenswrapper[4771]: I1011 10:28:43.426863 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 
10:28:43.427041 master-1 kubenswrapper[4771]: I1011 10:28:43.426876 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:28:43.432081 master-1 kubenswrapper[4771]: I1011 10:28:43.432015 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"] Oct 11 10:28:43.436158 master-2 kubenswrapper[4776]: I1011 10:28:43.436103 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:43.436728 master-2 kubenswrapper[4776]: I1011 10:28:43.436712 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.439724 master-2 kubenswrapper[4776]: I1011 10:28:43.439664 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:28:43.440520 master-2 kubenswrapper[4776]: I1011 10:28:43.440492 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:28:43.440844 master-2 kubenswrapper[4776]: I1011 10:28:43.440820 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:28:43.441006 master-2 kubenswrapper[4776]: I1011 10:28:43.440982 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:28:43.441631 master-2 kubenswrapper[4776]: I1011 10:28:43.441583 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:43.441942 master-2 kubenswrapper[4776]: I1011 10:28:43.441889 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:28:43.441996 master-2 kubenswrapper[4776]: I1011 10:28:43.441945 4776 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:28:43.442357 master-2 kubenswrapper[4776]: I1011 10:28:43.442337 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:28:43.461206 master-2 kubenswrapper[4776]: I1011 10:28:43.461179 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:43.493571 master-1 kubenswrapper[4771]: I1011 10:28:43.493496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.493824 master-1 kubenswrapper[4771]: I1011 10:28:43.493587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.493824 master-1 kubenswrapper[4771]: I1011 10:28:43.493625 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.493824 master-1 kubenswrapper[4771]: I1011 10:28:43.493750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj8w\" (UniqueName: 
\"kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.494008 master-1 kubenswrapper[4771]: I1011 10:28:43.493838 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.494008 master-1 kubenswrapper[4771]: I1011 10:28:43.493975 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.494131 master-1 kubenswrapper[4771]: I1011 10:28:43.494068 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.494131 master-1 kubenswrapper[4771]: I1011 10:28:43.494105 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.577493 master-2 
kubenswrapper[4776]: I1011 10:28:43.577442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577508 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577533 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577862 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578083 master-2 kubenswrapper[4776]: I1011 10:28:43.578045 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod 
\"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578134 master-2 kubenswrapper[4776]: I1011 10:28:43.578110 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578203 master-2 kubenswrapper[4776]: I1011 10:28:43.578176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578242 master-2 kubenswrapper[4776]: I1011 10:28:43.578211 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.595386 master-1 kubenswrapper[4771]: I1011 10:28:43.595230 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595386 master-1 kubenswrapper[4771]: I1011 10:28:43.595345 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595413 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595521 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595568 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj8w\" (UniqueName: \"kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.595710 master-1 kubenswrapper[4771]: I1011 10:28:43.595615 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.596551 master-1 kubenswrapper[4771]: E1011 10:28:43.595786 4771 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:43.596551 master-1 kubenswrapper[4771]: E1011 10:28:43.595861 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert podName:004ee387-d0e9-4582-ad14-f571832ebd6e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:44.0958376 +0000 UTC m=+156.070064051 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert") pod "apiserver-65b6f4d4c9-skwvw" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e") : secret "serving-cert" not found Oct 11 10:28:43.596551 master-1 kubenswrapper[4771]: I1011 10:28:43.595917 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.596551 master-1 kubenswrapper[4771]: I1011 10:28:43.596455 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.596932 master-1 kubenswrapper[4771]: I1011 10:28:43.596861 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.597103 master-1 kubenswrapper[4771]: I1011 10:28:43.597054 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.603835 master-1 kubenswrapper[4771]: I1011 10:28:43.603776 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.603995 master-1 kubenswrapper[4771]: I1011 10:28:43.603928 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.625311 master-1 kubenswrapper[4771]: I1011 10:28:43.625218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj8w\" (UniqueName: \"kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:43.679661 master-2 kubenswrapper[4776]: I1011 10:28:43.679590 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679703 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: 
I1011 10:28:43.679730 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679763 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679786 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680103 master-2 kubenswrapper[4776]: I1011 10:28:43.679889 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " 
pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680103 master-2 kubenswrapper[4776]: I1011 10:28:43.679942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680160 master-2 kubenswrapper[4776]: E1011 10:28:43.680139 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:43.680207 master-2 kubenswrapper[4776]: E1011 10:28:43.680190 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:44.180175826 +0000 UTC m=+158.964602535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:43.681255 master-2 kubenswrapper[4776]: I1011 10:28:43.680931 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681255 master-2 kubenswrapper[4776]: I1011 10:28:43.681066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681799 master-2 kubenswrapper[4776]: I1011 10:28:43.681750 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681977 master-2 kubenswrapper[4776]: I1011 10:28:43.681930 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.684551 master-2 kubenswrapper[4776]: I1011 10:28:43.684510 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.684614 master-2 kubenswrapper[4776]: I1011 10:28:43.684565 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.696423 master-2 kubenswrapper[4776]: I1011 10:28:43.696355 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:44.081613 master-1 kubenswrapper[4771]: I1011 10:28:44.081481 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7","Type":"ContainerStarted","Data":"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"} Oct 11 10:28:44.100508 master-1 kubenswrapper[4771]: I1011 10:28:44.100424 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:44.100791 master-1 kubenswrapper[4771]: E1011 10:28:44.100641 4771 secret.go:189] Couldn't get secret 
openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:44.100791 master-1 kubenswrapper[4771]: E1011 10:28:44.100759 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert podName:004ee387-d0e9-4582-ad14-f571832ebd6e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:45.100731779 +0000 UTC m=+157.074958260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert") pod "apiserver-65b6f4d4c9-skwvw" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e") : secret "serving-cert" not found Oct 11 10:28:44.103228 master-1 kubenswrapper[4771]: I1011 10:28:44.103139 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-1" podStartSLOduration=2.097822787 podStartE2EDuration="4.103118512s" podCreationTimestamp="2025-10-11 10:28:40 +0000 UTC" firstStartedPulling="2025-10-11 10:28:41.109361787 +0000 UTC m=+153.083588238" lastFinishedPulling="2025-10-11 10:28:43.114657522 +0000 UTC m=+155.088883963" observedRunningTime="2025-10-11 10:28:44.100324648 +0000 UTC m=+156.074551119" watchObservedRunningTime="2025-10-11 10:28:44.103118512 +0000 UTC m=+156.077344983" Oct 11 10:28:44.185349 master-2 kubenswrapper[4776]: I1011 10:28:44.185286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:44.185731 master-2 kubenswrapper[4776]: E1011 10:28:44.185425 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:44.185731 master-2 
kubenswrapper[4776]: E1011 10:28:44.185495 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:45.185477466 +0000 UTC m=+159.969904175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:45.112290 master-1 kubenswrapper[4771]: I1011 10:28:45.112182 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:45.113337 master-1 kubenswrapper[4771]: E1011 10:28:45.112473 4771 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:45.113337 master-1 kubenswrapper[4771]: E1011 10:28:45.112599 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert podName:004ee387-d0e9-4582-ad14-f571832ebd6e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.11256841 +0000 UTC m=+159.086794891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert") pod "apiserver-65b6f4d4c9-skwvw" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e") : secret "serving-cert" not found Oct 11 10:28:45.195584 master-2 kubenswrapper[4776]: I1011 10:28:45.195521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:45.196223 master-2 kubenswrapper[4776]: E1011 10:28:45.195826 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:45.196223 master-2 kubenswrapper[4776]: E1011 10:28:45.195884 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.195865861 +0000 UTC m=+161.980292570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:45.244382 master-1 kubenswrapper[4771]: I1011 10:28:45.244291 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"] Oct 11 10:28:45.245147 master-1 kubenswrapper[4771]: I1011 10:28:45.245079 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:28:45.247998 master-1 kubenswrapper[4771]: I1011 10:28:45.247944 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Oct 11 10:28:45.248594 master-1 kubenswrapper[4771]: I1011 10:28:45.248542 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Oct 11 10:28:45.252669 master-1 kubenswrapper[4771]: I1011 10:28:45.252624 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"] Oct 11 10:28:45.257644 master-1 kubenswrapper[4771]: I1011 10:28:45.257587 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Oct 11 10:28:45.314890 master-1 kubenswrapper[4771]: I1011 10:28:45.314840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:28:45.315090 master-1 kubenswrapper[4771]: I1011 10:28:45.314907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:28:45.315090 master-1 kubenswrapper[4771]: I1011 10:28:45.314997 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.315224 master-1 kubenswrapper[4771]: I1011 10:28:45.315158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-ca-certs\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.315288 master-1 kubenswrapper[4771]: I1011 10:28:45.315248 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-kube-api-access-crbvx\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.340613 master-1 kubenswrapper[4771]: I1011 10:28:45.340542 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"]
Oct 11 10:28:45.341478 master-1 kubenswrapper[4771]: I1011 10:28:45.341430 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.347463 master-1 kubenswrapper[4771]: I1011 10:28:45.347403 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Oct 11 10:28:45.347624 master-1 kubenswrapper[4771]: I1011 10:28:45.347446 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Oct 11 10:28:45.347624 master-1 kubenswrapper[4771]: I1011 10:28:45.347527 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Oct 11 10:28:45.352565 master-1 kubenswrapper[4771]: I1011 10:28:45.352522 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"]
Oct 11 10:28:45.357694 master-1 kubenswrapper[4771]: I1011 10:28:45.357627 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Oct 11 10:28:45.416215 master-1 kubenswrapper[4771]: I1011 10:28:45.416009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.416215 master-1 kubenswrapper[4771]: I1011 10:28:45.416106 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416216 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416282 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qknv\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-kube-api-access-9qknv\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416345 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-cache\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416586 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.416699 master-1 kubenswrapper[4771]: I1011 10:28:45.416675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.417221 master-1 kubenswrapper[4771]: I1011 10:28:45.416778 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-ca-certs\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.417221 master-1 kubenswrapper[4771]: I1011 10:28:45.416861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-kube-api-access-crbvx\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.417904 master-1 kubenswrapper[4771]: I1011 10:28:45.417852 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.418062 master-1 kubenswrapper[4771]: E1011 10:28:45.418007 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:28:45.917968284 +0000 UTC m=+157.892194765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:45.418797 master-1 kubenswrapper[4771]: I1011 10:28:45.418720 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.423553 master-1 kubenswrapper[4771]: I1011 10:28:45.423474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-ca-certs\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.436209 master-1 kubenswrapper[4771]: I1011 10:28:45.436114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-kube-api-access-crbvx\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.518021 master-1 kubenswrapper[4771]: I1011 10:28:45.517853 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518021 master-1 kubenswrapper[4771]: I1011 10:28:45.517933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518021 master-1 kubenswrapper[4771]: I1011 10:28:45.517977 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qknv\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-kube-api-access-9qknv\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: I1011 10:28:45.518053 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-cache\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: I1011 10:28:45.518129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: E1011 10:28:45.518141 4771 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: I1011 10:28:45.518217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: E1011 10:28:45.518269 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:46.018232225 +0000 UTC m=+157.992458756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : secret "catalogserver-cert" not found
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: I1011 10:28:45.518393 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.518607 master-1 kubenswrapper[4771]: E1011 10:28:45.518425 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:46.018398129 +0000 UTC m=+157.992624610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:45.519215 master-1 kubenswrapper[4771]: I1011 10:28:45.519097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-cache\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.523590 master-1 kubenswrapper[4771]: I1011 10:28:45.523276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.557169 master-1 kubenswrapper[4771]: I1011 10:28:45.557025 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qknv\" (UniqueName: \"kubernetes.io/projected/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-kube-api-access-9qknv\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:45.924576 master-1 kubenswrapper[4771]: I1011 10:28:45.924440 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") pod \"apiserver-796c687c6d-9b677\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " pod="openshift-apiserver/apiserver-796c687c6d-9b677"
Oct 11 10:28:45.924870 master-1 kubenswrapper[4771]: E1011 10:28:45.924590 4771 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Oct 11 10:28:45.924870 master-1 kubenswrapper[4771]: E1011 10:28:45.924691 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit podName:94d811f4-4ac9-46b0-b937-d3370b1b4305 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:01.924665021 +0000 UTC m=+173.898891492 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit") pod "apiserver-796c687c6d-9b677" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305") : configmap "audit-0" not found
Oct 11 10:28:45.924870 master-1 kubenswrapper[4771]: I1011 10:28:45.924712 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:45.925245 master-1 kubenswrapper[4771]: E1011 10:28:45.925171 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:28:46.925114392 +0000 UTC m=+158.899340883 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:46.025910 master-1 kubenswrapper[4771]: I1011 10:28:46.025797 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:46.025910 master-1 kubenswrapper[4771]: I1011 10:28:46.025905 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:46.026412 master-1 kubenswrapper[4771]: E1011 10:28:46.026066 4771 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Oct 11 10:28:46.026412 master-1 kubenswrapper[4771]: E1011 10:28:46.026135 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.026094772 +0000 UTC m=+159.000321253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:46.026412 master-1 kubenswrapper[4771]: E1011 10:28:46.026190 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.026174434 +0000 UTC m=+159.000400915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : secret "catalogserver-cert" not found
Oct 11 10:28:46.706418 master-2 kubenswrapper[4776]: I1011 10:28:46.706093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314"}
Oct 11 10:28:46.938605 master-1 kubenswrapper[4771]: I1011 10:28:46.938464 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:46.939185 master-1 kubenswrapper[4771]: E1011 10:28:46.938715 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:28:48.938681898 +0000 UTC m=+160.912908369 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:47.039633 master-1 kubenswrapper[4771]: I1011 10:28:47.039504 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:47.039891 master-1 kubenswrapper[4771]: I1011 10:28:47.039652 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:47.039948 master-1 kubenswrapper[4771]: E1011 10:28:47.039879 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:49.039837413 +0000 UTC m=+161.014063884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:47.044594 master-1 kubenswrapper[4771]: I1011 10:28:47.044543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:47.141320 master-1 kubenswrapper[4771]: I1011 10:28:47.140670 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:28:47.145458 master-1 kubenswrapper[4771]: I1011 10:28:47.145397 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"apiserver-65b6f4d4c9-skwvw\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:28:47.224797 master-2 kubenswrapper[4776]: I1011 10:28:47.224734 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"
Oct 11 10:28:47.228977 master-2 kubenswrapper[4776]: I1011 10:28:47.228943 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"
Oct 11 10:28:47.340547 master-1 kubenswrapper[4771]: I1011 10:28:47.340435 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:28:47.365326 master-2 kubenswrapper[4776]: I1011 10:28:47.365256 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"
Oct 11 10:28:47.529329 master-2 kubenswrapper[4776]: I1011 10:28:47.529167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: I1011 10:28:47.529329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: E1011 10:28:47.529445 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: E1011 10:28:47.529518 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:03.529498059 +0000 UTC m=+178.313924778 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found
Oct 11 10:28:47.537365 master-2 kubenswrapper[4776]: I1011 10:28:47.537317 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:28:47.545573 master-1 kubenswrapper[4771]: I1011 10:28:47.545482 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:47.545823 master-1 kubenswrapper[4771]: I1011 10:28:47.545637 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:47.545823 master-1 kubenswrapper[4771]: E1011 10:28:47.545652 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:47.545823 master-1 kubenswrapper[4771]: E1011 10:28:47.545752 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca podName:67be9a32-17b4-480c-98ba-caf9841bef6b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:03.545729278 +0000 UTC m=+175.519955769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca") pod "route-controller-manager-67d4d4d6d8-szbpf" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b") : configmap "client-ca" not found
Oct 11 10:28:47.550486 master-1 kubenswrapper[4771]: I1011 10:28:47.550434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-szbpf\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"
Oct 11 10:28:47.556673 master-1 kubenswrapper[4771]: I1011 10:28:47.556596 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"]
Oct 11 10:28:47.566553 master-1 kubenswrapper[4771]: W1011 10:28:47.566488 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod004ee387_d0e9_4582_ad14_f571832ebd6e.slice/crio-70ee09355a354a55a1e3cc86654a95e054448e4680cbf989813075d48bc93f03 WatchSource:0}: Error finding container 70ee09355a354a55a1e3cc86654a95e054448e4680cbf989813075d48bc93f03: Status 404 returned error can't find the container with id 70ee09355a354a55a1e3cc86654a95e054448e4680cbf989813075d48bc93f03
Oct 11 10:28:48.096924 master-1 kubenswrapper[4771]: I1011 10:28:48.096861 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" event={"ID":"004ee387-d0e9-4582-ad14-f571832ebd6e","Type":"ContainerStarted","Data":"70ee09355a354a55a1e3cc86654a95e054448e4680cbf989813075d48bc93f03"}
Oct 11 10:28:48.472711 master-2 kubenswrapper[4776]: I1011 10:28:48.472398 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sgvjd"]
Oct 11 10:28:48.473872 master-2 kubenswrapper[4776]: I1011 10:28:48.473243 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.476354 master-2 kubenswrapper[4776]: I1011 10:28:48.476310 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Oct 11 10:28:48.476505 master-2 kubenswrapper[4776]: I1011 10:28:48.476398 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 11 10:28:48.476717 master-2 kubenswrapper[4776]: I1011 10:28:48.476662 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Oct 11 10:28:48.477062 master-2 kubenswrapper[4776]: I1011 10:28:48.477028 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Oct 11 10:28:48.483731 master-2 kubenswrapper[4776]: I1011 10:28:48.483701 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sgvjd"]
Oct 11 10:28:48.483741 master-1 kubenswrapper[4771]: I1011 10:28:48.483628 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rzjcf"]
Oct 11 10:28:48.484582 master-1 kubenswrapper[4771]: I1011 10:28:48.484555 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.487997 master-1 kubenswrapper[4771]: I1011 10:28:48.487950 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Oct 11 10:28:48.488163 master-1 kubenswrapper[4771]: I1011 10:28:48.488110 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Oct 11 10:28:48.488163 master-1 kubenswrapper[4771]: I1011 10:28:48.488160 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Oct 11 10:28:48.488302 master-1 kubenswrapper[4771]: I1011 10:28:48.488280 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 11 10:28:48.493166 master-1 kubenswrapper[4771]: I1011 10:28:48.492874 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rzjcf"]
Oct 11 10:28:48.543398 master-2 kubenswrapper[4776]: I1011 10:28:48.543334 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.543398 master-2 kubenswrapper[4776]: I1011 10:28:48.543378 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.543619 master-2 kubenswrapper[4776]: I1011 10:28:48.543411 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.555816 master-1 kubenswrapper[4771]: I1011 10:28:48.555725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f49f37-a9e4-4acd-ae7e-d644e8475106-config-volume\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.555816 master-1 kubenswrapper[4771]: I1011 10:28:48.555786 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f49f37-a9e4-4acd-ae7e-d644e8475106-metrics-tls\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.555816 master-1 kubenswrapper[4771]: I1011 10:28:48.555813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzmn\" (UniqueName: \"kubernetes.io/projected/b3f49f37-a9e4-4acd-ae7e-d644e8475106-kube-api-access-xrzmn\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.644417 master-2 kubenswrapper[4776]: I1011 10:28:48.644344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.644417 master-2 kubenswrapper[4776]: I1011 10:28:48.644401 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.644644 master-2 kubenswrapper[4776]: I1011 10:28:48.644443 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.645292 master-2 kubenswrapper[4776]: I1011 10:28:48.645240 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.650262 master-2 kubenswrapper[4776]: I1011 10:28:48.650217 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.657126 master-1 kubenswrapper[4771]: I1011 10:28:48.656498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f49f37-a9e4-4acd-ae7e-d644e8475106-config-volume\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.657126 master-1 kubenswrapper[4771]: I1011 10:28:48.656574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f49f37-a9e4-4acd-ae7e-d644e8475106-metrics-tls\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.657126 master-1 kubenswrapper[4771]: I1011 10:28:48.656607 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzmn\" (UniqueName: \"kubernetes.io/projected/b3f49f37-a9e4-4acd-ae7e-d644e8475106-kube-api-access-xrzmn\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.657367 master-1 kubenswrapper[4771]: I1011 10:28:48.657256 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f49f37-a9e4-4acd-ae7e-d644e8475106-config-volume\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.659708 master-1 kubenswrapper[4771]: I1011 10:28:48.659693 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f49f37-a9e4-4acd-ae7e-d644e8475106-metrics-tls\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:48.681280 master-2 kubenswrapper[4776]: I1011 10:28:48.681191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:48.692504 master-1 kubenswrapper[4771]: I1011 10:28:48.692449 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzmn\" (UniqueName: \"kubernetes.io/projected/b3f49f37-a9e4-4acd-ae7e-d644e8475106-kube-api-access-xrzmn\") pod \"dns-default-rzjcf\" (UID: \"b3f49f37-a9e4-4acd-ae7e-d644e8475106\") " pod="openshift-dns/dns-default-rzjcf"
Oct 
11 10:28:48.714606 master-2 kubenswrapper[4776]: I1011 10:28:48.714284 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"950901b87af0c91716dd6b0b32b00414910693e5066f298cd5ccb27d712bc959"} Oct 11 10:28:48.782693 master-1 kubenswrapper[4771]: I1011 10:28:48.782542 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fjwjw"] Oct 11 10:28:48.783170 master-1 kubenswrapper[4771]: I1011 10:28:48.783126 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.788988 master-2 kubenswrapper[4776]: I1011 10:28:48.788864 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-z9trl"] Oct 11 10:28:48.789188 master-2 kubenswrapper[4776]: I1011 10:28:48.789140 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.789486 master-2 kubenswrapper[4776]: I1011 10:28:48.789455 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.805712 master-1 kubenswrapper[4771]: I1011 10:28:48.805659 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rzjcf" Oct 11 10:28:48.848387 master-2 kubenswrapper[4776]: I1011 10:28:48.848266 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.848387 master-2 kubenswrapper[4776]: I1011 10:28:48.848361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.859430 master-1 kubenswrapper[4771]: I1011 10:28:48.859377 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2919a957-a46f-4e96-b42e-3ba3c537e98e-hosts-file\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.859641 master-1 kubenswrapper[4771]: I1011 10:28:48.859588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpt6z\" (UniqueName: \"kubernetes.io/projected/2919a957-a46f-4e96-b42e-3ba3c537e98e-kube-api-access-zpt6z\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.950538 master-2 kubenswrapper[4776]: I1011 10:28:48.950423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod 
\"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.950538 master-2 kubenswrapper[4776]: I1011 10:28:48.950524 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.950978 master-2 kubenswrapper[4776]: I1011 10:28:48.950923 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.960499 master-1 kubenswrapper[4771]: I1011 10:28:48.960399 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:48.960499 master-1 kubenswrapper[4771]: I1011 10:28:48.960479 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpt6z\" (UniqueName: \"kubernetes.io/projected/2919a957-a46f-4e96-b42e-3ba3c537e98e-kube-api-access-zpt6z\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: I1011 10:28:48.960526 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") pod \"controller-manager-857df878cf-tz7h4\" 
(UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: I1011 10:28:48.960586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: I1011 10:28:48.960618 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2919a957-a46f-4e96-b42e-3ba3c537e98e-hosts-file\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: E1011 10:28:48.960723 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: E1011 10:28:48.960778 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca podName:9deef4a8-bf40-4a1f-bd3f-764b298245b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:04.960761319 +0000 UTC m=+176.934987770 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca") pod "controller-manager-857df878cf-tz7h4" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2") : configmap "client-ca" not found Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: E1011 10:28:48.960796 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:28:52.96078712 +0000 UTC m=+164.935013571 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:28:48.960865 master-1 kubenswrapper[4771]: I1011 10:28:48.960731 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2919a957-a46f-4e96-b42e-3ba3c537e98e-hosts-file\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.967219 master-1 kubenswrapper[4771]: I1011 10:28:48.967147 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"controller-manager-857df878cf-tz7h4\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:48.978207 master-1 kubenswrapper[4771]: I1011 10:28:48.978168 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpt6z\" (UniqueName: 
\"kubernetes.io/projected/2919a957-a46f-4e96-b42e-3ba3c537e98e-kube-api-access-zpt6z\") pod \"node-resolver-fjwjw\" (UID: \"2919a957-a46f-4e96-b42e-3ba3c537e98e\") " pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:48.982072 master-2 kubenswrapper[4776]: I1011 10:28:48.982009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.997622 master-1 kubenswrapper[4771]: I1011 10:28:48.997572 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rzjcf"] Oct 11 10:28:49.006656 master-1 kubenswrapper[4771]: W1011 10:28:49.006588 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3f49f37_a9e4_4acd_ae7e_d644e8475106.slice/crio-8ef7b976a4f384167d2fc481618a09fa0aee47bd492fde332c525af91b56c920 WatchSource:0}: Error finding container 8ef7b976a4f384167d2fc481618a09fa0aee47bd492fde332c525af91b56c920: Status 404 returned error can't find the container with id 8ef7b976a4f384167d2fc481618a09fa0aee47bd492fde332c525af91b56c920 Oct 11 10:28:49.061999 master-1 kubenswrapper[4771]: I1011 10:28:49.061910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:28:49.062295 master-1 kubenswrapper[4771]: E1011 10:28:49.062248 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker 
podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:53.06218303 +0000 UTC m=+165.036409481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:28:49.097586 master-1 kubenswrapper[4771]: I1011 10:28:49.097463 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjwjw" Oct 11 10:28:49.101988 master-1 kubenswrapper[4771]: I1011 10:28:49.101931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rzjcf" event={"ID":"b3f49f37-a9e4-4acd-ae7e-d644e8475106","Type":"ContainerStarted","Data":"8ef7b976a4f384167d2fc481618a09fa0aee47bd492fde332c525af91b56c920"} Oct 11 10:28:49.123624 master-2 kubenswrapper[4776]: I1011 10:28:49.123449 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:49.436717 master-1 kubenswrapper[4771]: I1011 10:28:49.436606 4771 scope.go:117] "RemoveContainer" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d" Oct 11 10:28:49.436888 master-1 kubenswrapper[4771]: E1011 10:28:49.436797 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" podUID="e115f8be-9e65-4407-8111-568e5ea8ac1b" Oct 11 10:28:49.578448 master-1 kubenswrapper[4771]: W1011 10:28:49.578396 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2919a957_a46f_4e96_b42e_3ba3c537e98e.slice/crio-8f6998e0a5e0c5234251889b81be2d102553dd2fe4f826390029300e639e331a WatchSource:0}: Error finding container 8f6998e0a5e0c5234251889b81be2d102553dd2fe4f826390029300e639e331a: Status 404 returned error can't find the container with id 8f6998e0a5e0c5234251889b81be2d102553dd2fe4f826390029300e639e331a Oct 11 10:28:49.626022 master-2 kubenswrapper[4776]: E1011 10:28:49.625768 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" podUID="b7b07707-84bd-43a6-a43d-6680decaa210" Oct 11 10:28:49.698324 master-2 kubenswrapper[4776]: I1011 10:28:49.697990 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:49.713830 master-2 kubenswrapper[4776]: I1011 
10:28:49.713760 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sgvjd"] Oct 11 10:28:49.714552 master-2 kubenswrapper[4776]: W1011 10:28:49.714342 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode350b624_6581_4982_96f3_cd5c37256e02.slice/crio-7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f WatchSource:0}: Error finding container 7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f: Status 404 returned error can't find the container with id 7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f Oct 11 10:28:49.737666 master-2 kubenswrapper[4776]: I1011 10:28:49.737621 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" event={"ID":"b16a4f10-c724-43cf-acd4-b3f5aa575653","Type":"ContainerStarted","Data":"c0e8f71d396fd27257db760a12b957d9766d7b8f4ea38505f65cfa745ea983cb"} Oct 11 10:28:49.741546 master-2 kubenswrapper[4776]: W1011 10:28:49.741485 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f3ba3c_1d27_4529_9ae3_a61f88e50b62.slice/crio-4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5 WatchSource:0}: Error finding container 4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5: Status 404 returned error can't find the container with id 4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5 Oct 11 10:28:49.743765 master-2 kubenswrapper[4776]: I1011 10:28:49.743694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"56e5041a6c1005b559504440c80e17a9a6cffc931c2704b5c5ae753ba7406a36"} Oct 11 10:28:49.746265 master-2 kubenswrapper[4776]: I1011 
10:28:49.746217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z9trl" event={"ID":"0550ab10-d45d-4526-8551-c1ce0b232bbc","Type":"ContainerStarted","Data":"8e10b519a711a0feb5153b454a4fb4c5f5dd8d87baaef013a2af6d31f287dbc6"} Oct 11 10:28:49.747881 master-2 kubenswrapper[4776]: I1011 10:28:49.747841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"18e94b730f9322c1d1497f21d03aa5ce221afb64bd7545bbd8eb547d8ca9d1f9"} Oct 11 10:28:49.753846 master-2 kubenswrapper[4776]: I1011 10:28:49.753782 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" podStartSLOduration=131.953448169 podStartE2EDuration="2m22.753765919s" podCreationTimestamp="2025-10-11 10:26:27 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.021004422 +0000 UTC m=+151.805431131" lastFinishedPulling="2025-10-11 10:28:47.821322132 +0000 UTC m=+162.605748881" observedRunningTime="2025-10-11 10:28:49.752519564 +0000 UTC m=+164.536946273" watchObservedRunningTime="2025-10-11 10:28:49.753765919 +0000 UTC m=+164.538192628" Oct 11 10:28:49.787234 master-2 kubenswrapper[4776]: I1011 10:28:49.787164 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" podStartSLOduration=112.381565983 podStartE2EDuration="1m57.78714373s" podCreationTimestamp="2025-10-11 10:26:52 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.870666813 +0000 UTC m=+151.655093522" lastFinishedPulling="2025-10-11 10:28:42.27624455 +0000 UTC m=+157.060671269" observedRunningTime="2025-10-11 10:28:49.771365046 +0000 UTC m=+164.555791755" watchObservedRunningTime="2025-10-11 10:28:49.78714373 +0000 UTC m=+164.571570439" Oct 11 10:28:49.789706 master-2 
kubenswrapper[4776]: I1011 10:28:49.789654 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podStartSLOduration=135.586553187 podStartE2EDuration="2m20.789645942s" podCreationTimestamp="2025-10-11 10:26:29 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.073109613 +0000 UTC m=+151.857536322" lastFinishedPulling="2025-10-11 10:28:42.276202368 +0000 UTC m=+157.060629077" observedRunningTime="2025-10-11 10:28:49.786491672 +0000 UTC m=+164.570918381" watchObservedRunningTime="2025-10-11 10:28:49.789645942 +0000 UTC m=+164.574072651" Oct 11 10:28:49.952480 master-1 kubenswrapper[4771]: I1011 10:28:49.951964 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-vhfgw"] Oct 11 10:28:49.953164 master-1 kubenswrapper[4771]: I1011 10:28:49.953112 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vhfgw" Oct 11 10:28:49.955723 master-1 kubenswrapper[4771]: I1011 10:28:49.955658 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Oct 11 10:28:49.956442 master-1 kubenswrapper[4771]: I1011 10:28:49.956382 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Oct 11 10:28:49.960759 master-2 kubenswrapper[4776]: I1011 10:28:49.960137 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-5tqrt"] Oct 11 10:28:49.961477 master-2 kubenswrapper[4776]: I1011 10:28:49.961439 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.074756 master-2 kubenswrapper[4776]: I1011 10:28:50.074711 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.074756 master-2 kubenswrapper[4776]: I1011 10:28:50.074764 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074878 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" 
(UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074943 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpwk\" (UniqueName: \"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074991 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075162 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075273 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075301 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075391 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.083654 master-1 kubenswrapper[4771]: I1011 10:28:50.083507 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-systemd\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.083654 master-1 kubenswrapper[4771]: I1011 10:28:50.083658 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-kubernetes\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084014 master-1 kubenswrapper[4771]: I1011 10:28:50.083711 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084014 master-1 kubenswrapper[4771]: I1011 10:28:50.083826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-tmp\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084014 master-1 kubenswrapper[4771]: I1011 10:28:50.083879 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-modprobe-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084014 master-1 kubenswrapper[4771]: I1011 10:28:50.083924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysconfig\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084292 master-1 kubenswrapper[4771]: I1011 10:28:50.084100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-lib-modules\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084292 master-1 kubenswrapper[4771]: I1011 10:28:50.084226 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-run\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084483 master-1 kubenswrapper[4771]: I1011 10:28:50.084316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-host\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084558 master-1 kubenswrapper[4771]: I1011 10:28:50.084534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clwnw\" (UniqueName: \"kubernetes.io/projected/f5a3f75a-c5b4-407a-b16a-5277aec051f7-kube-api-access-clwnw\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084623 master-1 kubenswrapper[4771]: I1011 10:28:50.084597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-conf\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084742 master-1 kubenswrapper[4771]: I1011 10:28:50.084627 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-tuned\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084810 master-1 kubenswrapper[4771]: I1011 10:28:50.084779 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-sys\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.084946 master-1 kubenswrapper[4771]: I1011 10:28:50.084896 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-var-lib-kubelet\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.107563 master-1 kubenswrapper[4771]: I1011 10:28:50.107479 4771 generic.go:334] "Generic (PLEG): container finished" podID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerID="e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346" exitCode=0
Oct 11 10:28:50.107563 master-1 kubenswrapper[4771]: I1011 10:28:50.107561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" event={"ID":"004ee387-d0e9-4582-ad14-f571832ebd6e","Type":"ContainerDied","Data":"e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346"}
Oct 11 10:28:50.110026 master-1 kubenswrapper[4771]: I1011 10:28:50.109961 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjwjw" event={"ID":"2919a957-a46f-4e96-b42e-3ba3c537e98e","Type":"ContainerStarted","Data":"0ed94446953dd34ae6187c546beb5e48bd72be4a6a0fb6994d62b9be96b82a01"}
Oct 11 10:28:50.110026 master-1 kubenswrapper[4771]: I1011 10:28:50.110026 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjwjw" event={"ID":"2919a957-a46f-4e96-b42e-3ba3c537e98e","Type":"ContainerStarted","Data":"8f6998e0a5e0c5234251889b81be2d102553dd2fe4f826390029300e639e331a"}
Oct 11 10:28:50.125933 master-1 kubenswrapper[4771]: I1011 10:28:50.125866 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857df878cf-tz7h4"]
Oct 11 10:28:50.126142 master-1 kubenswrapper[4771]: E1011 10:28:50.126095 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" podUID="9deef4a8-bf40-4a1f-bd3f-764b298245b2"
Oct 11 10:28:50.142464 master-1 kubenswrapper[4771]: I1011 10:28:50.142387 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"]
Oct 11 10:28:50.142598 master-1 kubenswrapper[4771]: E1011 10:28:50.142561 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" podUID="67be9a32-17b4-480c-98ba-caf9841bef6b"
Oct 11 10:28:50.169945 master-1 kubenswrapper[4771]: I1011 10:28:50.169809 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fjwjw" podStartSLOduration=2.169783123 podStartE2EDuration="2.169783123s" podCreationTimestamp="2025-10-11 10:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:50.167481332 +0000 UTC m=+162.141707813" watchObservedRunningTime="2025-10-11 10:28:50.169783123 +0000 UTC m=+162.144009604"
Oct 11 10:28:50.176398 master-2 kubenswrapper[4776]: I1011 10:28:50.176334 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176398 master-2 kubenswrapper[4776]: I1011 10:28:50.176381 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176468 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176543 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176559 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176616 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176566 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176750 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176789 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176805 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176906 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176925 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176969 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.177007 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177068 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpwk\" (UniqueName: \"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177092 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177114 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.177234 master-2 kubenswrapper[4776]: I1011 10:28:50.177180 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.180656 master-2 kubenswrapper[4776]: I1011 10:28:50.180621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.185858 master-1 kubenswrapper[4771]: I1011 10:28:50.185771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-systemd\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.185858 master-1 kubenswrapper[4771]: I1011 10:28:50.185861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-kubernetes\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.185881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.185909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-tmp\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.185925 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-modprobe-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.185918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-systemd\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.185939 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysconfig\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.186004 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysconfig\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.186027 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-lib-modules\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186065 master-1 kubenswrapper[4771]: I1011 10:28:50.186069 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-run\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186089 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-host\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186241 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-kubernetes\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186327 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-lib-modules\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-run\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186559 master-1 kubenswrapper[4771]: I1011 10:28:50.186467 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-host\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186930 master-1 kubenswrapper[4771]: I1011 10:28:50.186562 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-modprobe-d\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186930 master-1 kubenswrapper[4771]: I1011 10:28:50.186758 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clwnw\" (UniqueName: \"kubernetes.io/projected/f5a3f75a-c5b4-407a-b16a-5277aec051f7-kube-api-access-clwnw\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186930 master-1 kubenswrapper[4771]: I1011 10:28:50.186810 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-tuned\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186930 master-1 kubenswrapper[4771]: I1011 10:28:50.186873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-conf\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.186999 master-2 kubenswrapper[4776]: I1011 10:28:50.186950 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.187231 master-1 kubenswrapper[4771]: I1011 10:28:50.186969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-sys\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.187231 master-1 kubenswrapper[4771]: I1011 10:28:50.187027 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-var-lib-kubelet\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.187231 master-1 kubenswrapper[4771]: I1011 10:28:50.187128 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-var-lib-kubelet\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.187231 master-1 kubenswrapper[4771]: I1011 10:28:50.187203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-sys\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.187550 master-1 kubenswrapper[4771]: I1011 10:28:50.187278 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-sysctl-conf\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.190806 master-1 kubenswrapper[4771]: I1011 10:28:50.190749 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-etc-tuned\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.190955 master-1 kubenswrapper[4771]: I1011 10:28:50.190758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a3f75a-c5b4-407a-b16a-5277aec051f7-tmp\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.198166 master-2 kubenswrapper[4776]: I1011 10:28:50.198128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpwk\" (UniqueName: \"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.207388 master-1 kubenswrapper[4771]: I1011 10:28:50.207294 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clwnw\" (UniqueName: \"kubernetes.io/projected/f5a3f75a-c5b4-407a-b16a-5277aec051f7-kube-api-access-clwnw\") pod \"tuned-vhfgw\" (UID: \"f5a3f75a-c5b4-407a-b16a-5277aec051f7\") " pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.267507 master-1 kubenswrapper[4771]: I1011 10:28:50.267442 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vhfgw"
Oct 11 10:28:50.277785 master-1 kubenswrapper[4771]: W1011 10:28:50.277656 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5a3f75a_c5b4_407a_b16a_5277aec051f7.slice/crio-f54666226746504bded08b7b482b73574d5dbca7369c120b64bc771746344af4 WatchSource:0}: Error finding container f54666226746504bded08b7b482b73574d5dbca7369c120b64bc771746344af4: Status 404 returned error can't find the container with id f54666226746504bded08b7b482b73574d5dbca7369c120b64bc771746344af4
Oct 11 10:28:50.291232 master-2 kubenswrapper[4776]: I1011 10:28:50.291089 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt"
Oct 11 10:28:50.306649 master-2 kubenswrapper[4776]: W1011 10:28:50.306592 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4347a983_767e_44a3_92e8_74386c4e2e82.slice/crio-12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a WatchSource:0}: Error finding container 12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a: Status 404 returned error can't find the container with id 12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a
Oct 11 10:28:50.481358 master-2 kubenswrapper[4776]: I1011 10:28:50.481297 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"
Oct 11 10:28:50.481358 master-2 kubenswrapper[4776]: I1011 10:28:50.481367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"
Oct 11 10:28:50.481630 master-2 kubenswrapper[4776]: E1011 10:28:50.481472 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:50.481630 master-2 kubenswrapper[4776]: E1011 10:28:50.481530 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:06.481516535 +0000 UTC m=+181.265943244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found
Oct 11 10:28:50.484994 master-2 kubenswrapper[4776]: I1011 10:28:50.484946 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"
Oct 11 10:28:50.756210 master-2 kubenswrapper[4776]: I1011 10:28:50.754137 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerID="b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01" exitCode=0
Oct 11 10:28:50.756210 master-2 kubenswrapper[4776]: I1011 10:28:50.754266 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01"}
Oct 11 10:28:50.760014 master-2 kubenswrapper[4776]: I1011 10:28:50.758271 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"441bff0c1dbecd89cdf0753230b353069931a1e8819510d825274248cb28dd04"}
Oct 11 10:28:50.761304 master-2 kubenswrapper[4776]: I1011 10:28:50.761263 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"9490ddff809a74a126a8a8c9116d6770a13d848ba3beedd121f2f46f7a6331ef"}
Oct 11 10:28:50.762749 master-2 kubenswrapper[4776]: I1011 10:28:50.762703 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5"}
Oct 11 10:28:50.764094 master-2 kubenswrapper[4776]: I1011 10:28:50.764074 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" event={"ID":"b562963f-7112-411a-a64c-3b8eba909c59","Type":"ContainerStarted","Data":"1e1cfe199ccdaa68d15ec5334ceee0d8a37a2ac146702dfa36b53b722456e784"}
Oct 11 10:28:50.768062 master-2 kubenswrapper[4776]: I1011 10:28:50.768031 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerStarted","Data":"7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f"}
Oct 11 10:28:50.771125 master-2 kubenswrapper[4776]: I1011 10:28:50.771095 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"584cdc1a444916dd850d4de78dc9c815c71b05d47e34da7ccf47aad50644ba49"}
Oct 11 10:28:50.773138 master-2 kubenswrapper[4776]: I1011 10:28:50.773103 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" event={"ID":"4347a983-767e-44a3-92e8-74386c4e2e82","Type":"ContainerStarted","Data":"ee015f8808258d90664cc71bae7bb11bb6d0962b8e9697f8b806d4475fbe89c5"}
Oct 11 10:28:50.773138 master-2 kubenswrapper[4776]: I1011 10:28:50.773133 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" event={"ID":"4347a983-767e-44a3-92e8-74386c4e2e82","Type":"ContainerStarted","Data":"12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a"}
Oct 11 10:28:50.775772 master-2 kubenswrapper[4776]: I1011 10:28:50.775743 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z9trl" event={"ID":"0550ab10-d45d-4526-8551-c1ce0b232bbc","Type":"ContainerStarted","Data":"45a0d23a5a7c9e3b7dbcd399a6e078c4573d85741efbdb1c99131191da106db8"}
Oct 11 10:28:50.798583 master-2 kubenswrapper[4776]: I1011 10:28:50.798522 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" podStartSLOduration=129.035133116 podStartE2EDuration="2m19.798505063s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.057992957 +0000 UTC m=+151.842419666" lastFinishedPulling="2025-10-11 10:28:47.821364904 +0000 UTC m=+162.605791613" observedRunningTime="2025-10-11 10:28:50.798161363 +0000 UTC m=+165.582588072" watchObservedRunningTime="2025-10-11 10:28:50.798505063 +0000 UTC m=+165.582931772"
Oct 11 10:28:50.815708 master-2 kubenswrapper[4776]: I1011 10:28:50.815470 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z9trl" podStartSLOduration=2.815442391 podStartE2EDuration="2.815442391s" podCreationTimestamp="2025-10-11 10:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:50.812158646 +0000 UTC m=+165.596585355" watchObservedRunningTime="2025-10-11 10:28:50.815442391 +0000 UTC m=+165.599869100"
Oct 11 10:28:50.829161 master-2 kubenswrapper[4776]: I1011 10:28:50.828856 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" podStartSLOduration=108.551456274 podStartE2EDuration="2m0.828837497s" podCreationTimestamp="2025-10-11 10:26:50 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.236474537 +0000 UTC m=+152.020901286" lastFinishedPulling="2025-10-11 10:28:49.51385576 +0000 UTC m=+164.298282509" observedRunningTime="2025-10-11 10:28:50.828196038 +0000 UTC m=+165.612622747" watchObservedRunningTime="2025-10-11 10:28:50.828837497 +0000 UTC m=+165.613264206"
Oct 11 10:28:50.897130 master-2 kubenswrapper[4776]: I1011 10:28:50.896735 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" podStartSLOduration=94.485196283 podStartE2EDuration="1m39.896718931s" podCreationTimestamp="2025-10-11 10:27:11 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.870509918 +0000 UTC m=+151.654936627" lastFinishedPulling="2025-10-11 10:28:42.282032556 +0000 UTC m=+157.066459275" observedRunningTime="2025-10-11 10:28:50.896293189 +0000 UTC m=+165.680719888" watchObservedRunningTime="2025-10-11 10:28:50.896718931 +0000 UTC m=+165.681145640"
Oct 11 10:28:50.902849 master-2 kubenswrapper[4776]: I1011 10:28:50.902778 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" podStartSLOduration=108.549571228 podStartE2EDuration="1m53.902761375s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.923118314 +0000 UTC m=+151.707545023" lastFinishedPulling="2025-10-11 10:28:42.276308451 +0000 UTC m=+157.060735170" observedRunningTime="2025-10-11 10:28:50.849596714 +0000 UTC m=+165.634023423" watchObservedRunningTime="2025-10-11 10:28:50.902761375 +0000 UTC m=+165.687188084"
Oct 11 10:28:50.914697 master-2 kubenswrapper[4776]: I1011 10:28:50.914620 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" podStartSLOduration=1.914595515 podStartE2EDuration="1.914595515s" podCreationTimestamp="2025-10-11 10:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:50.913050691 +0000 UTC m=+165.697477400" watchObservedRunningTime="2025-10-11 10:28:50.914595515 +0000 UTC m=+165.699022224" Oct 11 10:28:51.093325 master-1 kubenswrapper[4771]: I1011 10:28:51.092919 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 11 10:28:51.093521 master-1 kubenswrapper[4771]: I1011 10:28:51.093344 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-1" podUID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" containerName="installer" containerID="cri-o://80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90" gracePeriod=30 Oct 11 10:28:51.116010 master-1 kubenswrapper[4771]: I1011 10:28:51.115910 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vhfgw" event={"ID":"f5a3f75a-c5b4-407a-b16a-5277aec051f7","Type":"ContainerStarted","Data":"f54666226746504bded08b7b482b73574d5dbca7369c120b64bc771746344af4"} Oct 11 10:28:51.118699 master-1 kubenswrapper[4771]: I1011 10:28:51.118641 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:51.119585 master-1 kubenswrapper[4771]: I1011 10:28:51.119510 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:51.119585 master-1 kubenswrapper[4771]: I1011 10:28:51.119562 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" event={"ID":"004ee387-d0e9-4582-ad14-f571832ebd6e","Type":"ContainerStarted","Data":"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"} Oct 11 10:28:51.128373 master-1 kubenswrapper[4771]: I1011 10:28:51.128294 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:51.133766 master-1 kubenswrapper[4771]: I1011 10:28:51.133721 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:51.139382 master-1 kubenswrapper[4771]: I1011 10:28:51.139288 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podStartSLOduration=6.100808295 podStartE2EDuration="8.13926696s" podCreationTimestamp="2025-10-11 10:28:43 +0000 UTC" firstStartedPulling="2025-10-11 10:28:47.569445427 +0000 UTC m=+159.543671858" lastFinishedPulling="2025-10-11 10:28:49.607904042 +0000 UTC m=+161.582130523" observedRunningTime="2025-10-11 10:28:51.138796377 +0000 UTC m=+163.113022848" watchObservedRunningTime="2025-10-11 10:28:51.13926696 +0000 UTC m=+163.113493441" Oct 11 10:28:51.199556 master-1 kubenswrapper[4771]: I1011 10:28:51.199448 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config\") pod \"67be9a32-17b4-480c-98ba-caf9841bef6b\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " Oct 11 10:28:51.199733 master-1 kubenswrapper[4771]: I1011 10:28:51.199603 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") pod \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " Oct 11 10:28:51.200337 master-1 kubenswrapper[4771]: I1011 10:28:51.200280 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config" (OuterVolumeSpecName: "config") pod "67be9a32-17b4-480c-98ba-caf9841bef6b" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:51.200945 master-1 kubenswrapper[4771]: I1011 10:28:51.199729 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") pod \"67be9a32-17b4-480c-98ba-caf9841bef6b\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " Oct 11 10:28:51.201103 master-1 kubenswrapper[4771]: I1011 10:28:51.201033 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles\") pod \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " Oct 11 10:28:51.202375 master-1 kubenswrapper[4771]: I1011 10:28:51.201928 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9deef4a8-bf40-4a1f-bd3f-764b298245b2" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:51.203324 master-1 kubenswrapper[4771]: I1011 10:28:51.203276 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktq9\" (UniqueName: \"kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9\") pod \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " Oct 11 10:28:51.203455 master-1 kubenswrapper[4771]: I1011 10:28:51.203421 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config\") pod \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\" (UID: \"9deef4a8-bf40-4a1f-bd3f-764b298245b2\") " Oct 11 10:28:51.203543 master-1 kubenswrapper[4771]: I1011 10:28:51.203512 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcj47\" (UniqueName: \"kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47\") pod \"67be9a32-17b4-480c-98ba-caf9841bef6b\" (UID: \"67be9a32-17b4-480c-98ba-caf9841bef6b\") " Oct 11 10:28:51.204313 master-1 kubenswrapper[4771]: I1011 10:28:51.204254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config" (OuterVolumeSpecName: "config") pod "9deef4a8-bf40-4a1f-bd3f-764b298245b2" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:51.204438 master-1 kubenswrapper[4771]: I1011 10:28:51.204391 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9deef4a8-bf40-4a1f-bd3f-764b298245b2" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:51.204948 master-1 kubenswrapper[4771]: I1011 10:28:51.204892 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.205242 master-1 kubenswrapper[4771]: I1011 10:28:51.205213 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9deef4a8-bf40-4a1f-bd3f-764b298245b2-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.205407 master-1 kubenswrapper[4771]: I1011 10:28:51.205384 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.205566 master-1 kubenswrapper[4771]: I1011 10:28:51.205545 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.205722 master-1 kubenswrapper[4771]: I1011 10:28:51.205214 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67be9a32-17b4-480c-98ba-caf9841bef6b" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:51.206403 master-1 kubenswrapper[4771]: I1011 10:28:51.206320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47" (OuterVolumeSpecName: "kube-api-access-fcj47") pod "67be9a32-17b4-480c-98ba-caf9841bef6b" (UID: "67be9a32-17b4-480c-98ba-caf9841bef6b"). 
InnerVolumeSpecName "kube-api-access-fcj47". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:51.206954 master-1 kubenswrapper[4771]: I1011 10:28:51.206889 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9" (OuterVolumeSpecName: "kube-api-access-hktq9") pod "9deef4a8-bf40-4a1f-bd3f-764b298245b2" (UID: "9deef4a8-bf40-4a1f-bd3f-764b298245b2"). InnerVolumeSpecName "kube-api-access-hktq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:51.306858 master-1 kubenswrapper[4771]: I1011 10:28:51.306792 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67be9a32-17b4-480c-98ba-caf9841bef6b-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.306858 master-1 kubenswrapper[4771]: I1011 10:28:51.306848 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktq9\" (UniqueName: \"kubernetes.io/projected/9deef4a8-bf40-4a1f-bd3f-764b298245b2-kube-api-access-hktq9\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.307061 master-1 kubenswrapper[4771]: I1011 10:28:51.306869 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcj47\" (UniqueName: \"kubernetes.io/projected/67be9a32-17b4-480c-98ba-caf9841bef6b-kube-api-access-fcj47\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:51.780902 master-2 kubenswrapper[4776]: I1011 10:28:51.780776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7"} Oct 11 10:28:52.065560 master-1 kubenswrapper[4771]: I1011 10:28:52.065103 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-1"] Oct 11 10:28:52.066508 master-1 
kubenswrapper[4771]: I1011 10:28:52.066473 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.069745 master-1 kubenswrapper[4771]: I1011 10:28:52.069693 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:28:52.073024 master-1 kubenswrapper[4771]: I1011 10:28:52.072964 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-1"] Oct 11 10:28:52.125614 master-1 kubenswrapper[4771]: I1011 10:28:52.125520 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rzjcf" event={"ID":"b3f49f37-a9e4-4acd-ae7e-d644e8475106","Type":"ContainerStarted","Data":"acafd6bd0153153d6b38fbdc6317872f00d21dba7abb8bb24592c18f1e0a2729"} Oct 11 10:28:52.125614 master-1 kubenswrapper[4771]: I1011 10:28:52.125607 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rzjcf" event={"ID":"b3f49f37-a9e4-4acd-ae7e-d644e8475106","Type":"ContainerStarted","Data":"4baab67efc4d26139295ca37e7c077eab06b8a73e0c12560e14c2c4b7a1656ca"} Oct 11 10:28:52.125614 master-1 kubenswrapper[4771]: I1011 10:28:52.125566 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857df878cf-tz7h4" Oct 11 10:28:52.126420 master-1 kubenswrapper[4771]: I1011 10:28:52.125718 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf" Oct 11 10:28:52.152609 master-1 kubenswrapper[4771]: I1011 10:28:52.152541 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rzjcf" podStartSLOduration=2.096689283 podStartE2EDuration="4.152511418s" podCreationTimestamp="2025-10-11 10:28:48 +0000 UTC" firstStartedPulling="2025-10-11 10:28:49.01319373 +0000 UTC m=+160.987420181" lastFinishedPulling="2025-10-11 10:28:51.069015835 +0000 UTC m=+163.043242316" observedRunningTime="2025-10-11 10:28:52.152310253 +0000 UTC m=+164.126536764" watchObservedRunningTime="2025-10-11 10:28:52.152511418 +0000 UTC m=+164.126737859" Oct 11 10:28:52.176386 master-1 kubenswrapper[4771]: I1011 10:28:52.176305 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"] Oct 11 10:28:52.177752 master-1 kubenswrapper[4771]: I1011 10:28:52.177684 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.180377 master-1 kubenswrapper[4771]: I1011 10:28:52.180306 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:28:52.181041 master-1 kubenswrapper[4771]: I1011 10:28:52.180984 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:28:52.181605 master-1 kubenswrapper[4771]: I1011 10:28:52.181498 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"] Oct 11 10:28:52.181605 master-1 kubenswrapper[4771]: I1011 10:28:52.181507 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:52.181820 master-1 kubenswrapper[4771]: I1011 10:28:52.181300 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:52.181820 master-1 kubenswrapper[4771]: I1011 10:28:52.181440 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:28:52.183755 master-1 kubenswrapper[4771]: I1011 10:28:52.183677 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf"] Oct 11 10:28:52.185469 master-1 kubenswrapper[4771]: I1011 10:28:52.185426 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"] Oct 11 10:28:52.201874 master-1 kubenswrapper[4771]: I1011 10:28:52.201823 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857df878cf-tz7h4"] Oct 11 10:28:52.204132 master-1 kubenswrapper[4771]: I1011 
10:28:52.204062 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-857df878cf-tz7h4"] Oct 11 10:28:52.217654 master-1 kubenswrapper[4771]: I1011 10:28:52.217603 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.217654 master-1 kubenswrapper[4771]: I1011 10:28:52.217660 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.217969 master-1 kubenswrapper[4771]: I1011 10:28:52.217679 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxw7\" (UniqueName: \"kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318747 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318794 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318842 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.318870 master-1 kubenswrapper[4771]: I1011 10:28:52.318864 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.319246 master-1 kubenswrapper[4771]: I1011 10:28:52.318924 
4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.319246 master-1 kubenswrapper[4771]: I1011 10:28:52.318964 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67be9a32-17b4-480c-98ba-caf9841bef6b-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:52.319246 master-1 kubenswrapper[4771]: I1011 10:28:52.318977 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9deef4a8-bf40-4a1f-bd3f-764b298245b2-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:52.319440 master-1 kubenswrapper[4771]: I1011 10:28:52.319416 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.319577 master-1 kubenswrapper[4771]: I1011 10:28:52.319463 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.340742 master-1 kubenswrapper[4771]: I1011 10:28:52.340684 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:52.340815 master-1 kubenswrapper[4771]: I1011 10:28:52.340760 4771 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:52.350547 master-1 kubenswrapper[4771]: I1011 10:28:52.350430 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.353003 master-1 kubenswrapper[4771]: I1011 10:28:52.352927 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:28:52.381790 master-1 kubenswrapper[4771]: I1011 10:28:52.381714 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: I1011 10:28:52.432572 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: I1011 10:28:52.432711 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwxw7\" (UniqueName: \"kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: I1011 10:28:52.432803 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: I1011 10:28:52.432893 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: E1011 10:28:52.435416 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: I1011 10:28:52.435501 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:28:52.446395 master-1 kubenswrapper[4771]: E1011 10:28:52.435575 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:52.935505058 +0000 UTC m=+164.909731529 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:28:52.454386 master-1 kubenswrapper[4771]: I1011 10:28:52.453003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:28:52.458796 master-1 kubenswrapper[4771]: I1011 10:28:52.458723 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67be9a32-17b4-480c-98ba-caf9841bef6b" path="/var/lib/kubelet/pods/67be9a32-17b4-480c-98ba-caf9841bef6b/volumes"
Oct 11 10:28:52.459443 master-1 kubenswrapper[4771]: I1011 10:28:52.459399 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9deef4a8-bf40-4a1f-bd3f-764b298245b2" path="/var/lib/kubelet/pods/9deef4a8-bf40-4a1f-bd3f-764b298245b2/volumes"
Oct 11 10:28:52.469640 master-1 kubenswrapper[4771]: I1011 10:28:52.469565 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwxw7\" (UniqueName: \"kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:28:52.659836 master-1 kubenswrapper[4771]: I1011 10:28:52.659772 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-1"]
Oct 11 10:28:52.670195 master-1 kubenswrapper[4771]: W1011 10:28:52.670144 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod826e1279_bc0d_426e_b6e0_5108268f340e.slice/crio-5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9 WatchSource:0}: Error finding container 5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9: Status 404 returned error can't find the container with id 5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9
Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786722 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"77e97cf5afe4c800f46be81a45fd1c5b7ad05b15de9779b61b57bc99ea5963db"}
Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"69cfa4bd3903110c0f93b93c280a5ba53c6b44fa6d9f0abdd2bdf1bd106527d9"}
Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786821 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sgvjd"
Oct 11 10:28:52.789203 master-2 kubenswrapper[4776]: I1011 10:28:52.789162 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652"}
Oct 11 10:28:52.792867 master-2 kubenswrapper[4776]: I1011 10:28:52.792828 4776 generic.go:334] "Generic (PLEG): container finished" podID="e350b624-6581-4982-96f3-cd5c37256e02" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" exitCode=0
Oct 11 10:28:52.793685 master-2 kubenswrapper[4776]: I1011 10:28:52.792948 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278"}
Oct 11 10:28:52.807210 master-2 kubenswrapper[4776]: I1011 10:28:52.807116 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sgvjd" podStartSLOduration=2.515632178 podStartE2EDuration="4.807093961s" podCreationTimestamp="2025-10-11 10:28:48 +0000 UTC" firstStartedPulling="2025-10-11 10:28:49.742776573 +0000 UTC m=+164.527203282" lastFinishedPulling="2025-10-11 10:28:52.034238366 +0000 UTC m=+166.818665065" observedRunningTime="2025-10-11 10:28:52.806628438 +0000 UTC m=+167.591055207" watchObservedRunningTime="2025-10-11 10:28:52.807093961 +0000 UTC m=+167.591520690"
Oct 11 10:28:52.861695 master-2 kubenswrapper[4776]: I1011 10:28:52.860108 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podStartSLOduration=8.990944144 podStartE2EDuration="19.860089667s" podCreationTimestamp="2025-10-11 10:28:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:38.424325372 +0000 UTC m=+153.208752091" lastFinishedPulling="2025-10-11 10:28:49.293470855 +0000 UTC m=+164.077897614" observedRunningTime="2025-10-11 10:28:52.857795321 +0000 UTC m=+167.642222050" watchObservedRunningTime="2025-10-11 10:28:52.860089667 +0000 UTC m=+167.644516376"
Oct 11 10:28:52.926995 master-2 kubenswrapper[4776]: I1011 10:28:52.926742 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt"
Oct 11 10:28:52.926995 master-2 kubenswrapper[4776]: I1011 10:28:52.926813 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt"
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: I1011 10:28:52.935667 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]etcd ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: livez check failed
Oct 11 10:28:52.936556 master-2 kubenswrapper[4776]: I1011 10:28:52.935768 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:28:52.938927 master-1 kubenswrapper[4771]: I1011 10:28:52.938668 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:28:52.938927 master-1 kubenswrapper[4771]: E1011 10:28:52.938806 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:52.938927 master-1 kubenswrapper[4771]: E1011 10:28:52.938876 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:53.938859926 +0000 UTC m=+165.913086377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:28:53.039956 master-1 kubenswrapper[4771]: I1011 10:28:53.039861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:28:53.039956 master-1 kubenswrapper[4771]: E1011 10:28:53.039988 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:29:01.039972209 +0000 UTC m=+173.014198650 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:53.131957 master-1 kubenswrapper[4771]: I1011 10:28:53.131876 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"826e1279-bc0d-426e-b6e0-5108268f340e","Type":"ContainerStarted","Data":"5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9"}
Oct 11 10:28:53.134207 master-1 kubenswrapper[4771]: I1011 10:28:53.132472 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rzjcf"
Oct 11 10:28:53.137959 master-1 kubenswrapper[4771]: I1011 10:28:53.137775 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:28:53.140834 master-1 kubenswrapper[4771]: I1011 10:28:53.140785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:28:53.141070 master-1 kubenswrapper[4771]: E1011 10:28:53.141032 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:01.1410091 +0000 UTC m=+173.115235561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:28:53.688506 master-1 kubenswrapper[4771]: I1011 10:28:53.688449 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"]
Oct 11 10:28:53.688865 master-1 kubenswrapper[4771]: I1011 10:28:53.688838 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.695212 master-1 kubenswrapper[4771]: I1011 10:28:53.694900 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"]
Oct 11 10:28:53.800590 master-2 kubenswrapper[4776]: I1011 10:28:53.800490 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerStarted","Data":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"}
Oct 11 10:28:53.820729 master-2 kubenswrapper[4776]: I1011 10:28:53.820607 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podStartSLOduration=8.539214492 podStartE2EDuration="10.820576185s" podCreationTimestamp="2025-10-11 10:28:43 +0000 UTC" firstStartedPulling="2025-10-11 10:28:49.737162331 +0000 UTC m=+164.521589040" lastFinishedPulling="2025-10-11 10:28:52.018524024 +0000 UTC m=+166.802950733" observedRunningTime="2025-10-11 10:28:53.817471646 +0000 UTC m=+168.601898385" watchObservedRunningTime="2025-10-11 10:28:53.820576185 +0000 UTC m=+168.605002894"
Oct 11 10:28:53.847208 master-1 kubenswrapper[4771]: I1011 10:28:53.847140 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.847535 master-1 kubenswrapper[4771]: I1011 10:28:53.847236 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.847535 master-1 kubenswrapper[4771]: I1011 10:28:53.847295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949014 master-1 kubenswrapper[4771]: I1011 10:28:53.948837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949014 master-1 kubenswrapper[4771]: I1011 10:28:53.948933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949014 master-1 kubenswrapper[4771]: I1011 10:28:53.948976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949014 master-1 kubenswrapper[4771]: I1011 10:28:53.948997 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:28:53.949014 master-1 kubenswrapper[4771]: I1011 10:28:53.949005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949429 master-1 kubenswrapper[4771]: E1011 10:28:53.949098 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:53.949429 master-1 kubenswrapper[4771]: I1011 10:28:53.949123 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:53.949429 master-1 kubenswrapper[4771]: E1011 10:28:53.949153 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:55.949139496 +0000 UTC m=+167.923365937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:28:53.974804 master-1 kubenswrapper[4771]: I1011 10:28:53.974713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access\") pod \"installer-2-master-1\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:54.005417 master-1 kubenswrapper[4771]: I1011 10:28:54.005344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1"
Oct 11 10:28:54.445451 master-2 kubenswrapper[4776]: I1011 10:28:54.445339 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:28:54.450321 master-2 kubenswrapper[4776]: I1011 10:28:54.450271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"
Oct 11 10:28:54.614671 master-1 kubenswrapper[4771]: I1011 10:28:54.614601 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-565f857764-nhm4g"]
Oct 11 10:28:54.615343 master-1 kubenswrapper[4771]: I1011 10:28:54.615308 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.618089 master-1 kubenswrapper[4771]: I1011 10:28:54.618049 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 11 10:28:54.619428 master-1 kubenswrapper[4771]: I1011 10:28:54.619391 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 11 10:28:54.619478 master-1 kubenswrapper[4771]: I1011 10:28:54.619464 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 11 10:28:54.619826 master-1 kubenswrapper[4771]: I1011 10:28:54.619744 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 11 10:28:54.620839 master-1 kubenswrapper[4771]: I1011 10:28:54.620808 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 11 10:28:54.628523 master-1 kubenswrapper[4771]: I1011 10:28:54.628461 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-565f857764-nhm4g"]
Oct 11 10:28:54.631782 master-1 kubenswrapper[4771]: I1011 10:28:54.631713 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 11 10:28:54.757422 master-1 kubenswrapper[4771]: I1011 10:28:54.757334 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.757422 master-1 kubenswrapper[4771]: I1011 10:28:54.757426 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.757669 master-1 kubenswrapper[4771]: I1011 10:28:54.757482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.757669 master-1 kubenswrapper[4771]: I1011 10:28:54.757520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wp47\" (UniqueName: \"kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.757669 master-1 kubenswrapper[4771]: I1011 10:28:54.757558 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.858189 master-1 kubenswrapper[4771]: I1011 10:28:54.858115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.858189 master-1 kubenswrapper[4771]: I1011 10:28:54.858171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.858189 master-1 kubenswrapper[4771]: I1011 10:28:54.858198 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.858541 master-1 kubenswrapper[4771]: I1011 10:28:54.858221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wp47\" (UniqueName: \"kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.858541 master-1 kubenswrapper[4771]: I1011 10:28:54.858245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.860666 master-1 kubenswrapper[4771]: E1011 10:28:54.859600 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:54.860666 master-1 kubenswrapper[4771]: E1011 10:28:54.859761 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:55.359705869 +0000 UTC m=+167.333932350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:28:54.863432 master-1 kubenswrapper[4771]: I1011 10:28:54.862122 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.866529 master-1 kubenswrapper[4771]: I1011 10:28:54.864615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.866529 master-1 kubenswrapper[4771]: I1011 10:28:54.865914 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:54.877715 master-1 kubenswrapper[4771]: I1011 10:28:54.877639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wp47\" (UniqueName: \"kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:55.366554 master-1 kubenswrapper[4771]: I1011 10:28:55.366459 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:55.366755 master-1 kubenswrapper[4771]: E1011 10:28:55.366715 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:55.366883 master-1 kubenswrapper[4771]: E1011 10:28:55.366853 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:56.366825207 +0000 UTC m=+168.341051688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:28:55.973802 master-1 kubenswrapper[4771]: I1011 10:28:55.973702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:28:55.974613 master-1 kubenswrapper[4771]: E1011 10:28:55.973916 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:55.974613 master-1 kubenswrapper[4771]: E1011 10:28:55.974012 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:59.97398837 +0000 UTC m=+171.948214841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:28:56.347663 master-1 kubenswrapper[4771]: I1011 10:28:56.347617 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"]
Oct 11 10:28:56.362316 master-1 kubenswrapper[4771]: W1011 10:28:56.362270 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod750e7efe_07f7_4280_85c7_78250178f965.slice/crio-23a2788a32f3ff82c23ee63b5bf39839725cd21b8f03a9a69d479fa53764ca28 WatchSource:0}: Error finding container 23a2788a32f3ff82c23ee63b5bf39839725cd21b8f03a9a69d479fa53764ca28: Status 404 returned error can't find the container with id 23a2788a32f3ff82c23ee63b5bf39839725cd21b8f03a9a69d479fa53764ca28
Oct 11 10:28:56.378810 master-1 kubenswrapper[4771]: I1011 10:28:56.378749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:28:56.378949 master-1 kubenswrapper[4771]: E1011 10:28:56.378882 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:28:56.379015 master-1 kubenswrapper[4771]: E1011 10:28:56.378986 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:58.378957656 +0000 UTC m=+170.353184117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:28:56.656433 master-1 kubenswrapper[4771]: I1011 10:28:56.655876 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"]
Oct 11 10:28:56.657788 master-1 kubenswrapper[4771]: I1011 10:28:56.657749 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.661131 master-1 kubenswrapper[4771]: I1011 10:28:56.661053 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Oct 11 10:28:56.662005 master-1 kubenswrapper[4771]: I1011 10:28:56.661941 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Oct 11 10:28:56.662137 master-1 kubenswrapper[4771]: I1011 10:28:56.662006 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Oct 11 10:28:56.667238 master-1 kubenswrapper[4771]: I1011 10:28:56.667165 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"]
Oct 11 10:28:56.783045 master-1 kubenswrapper[4771]: I1011 10:28:56.782860 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-samples-operator-tls\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.783045 master-1 kubenswrapper[4771]: I1011 10:28:56.783006 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrmg\" (UniqueName: \"kubernetes.io/projected/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-kube-api-access-ffrmg\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.884718 master-1 kubenswrapper[4771]: I1011 10:28:56.884642 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-samples-operator-tls\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.884973 master-1 kubenswrapper[4771]: I1011 10:28:56.884806 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrmg\" (UniqueName: \"kubernetes.io/projected/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-kube-api-access-ffrmg\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.892511 master-1 kubenswrapper[4771]: I1011 10:28:56.892436 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-samples-operator-tls\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.908100 master-1 kubenswrapper[4771]: I1011 10:28:56.908015 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrmg\" (UniqueName: \"kubernetes.io/projected/b763cbe4-f035-45f2-9f70-4bbb8d5cac87-kube-api-access-ffrmg\") pod \"cluster-samples-operator-75f9c7d795-2zgv4\" (UID: \"b763cbe4-f035-45f2-9f70-4bbb8d5cac87\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:56.979240 master-1 kubenswrapper[4771]: I1011 10:28:56.979157 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"
Oct 11 10:28:57.150850 master-1 kubenswrapper[4771]: I1011 10:28:57.150768 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vhfgw" event={"ID":"f5a3f75a-c5b4-407a-b16a-5277aec051f7","Type":"ContainerStarted","Data":"f9a8c29368a9534369890a79857333416edb6b50b525db38508428bb0d6a4590"}
Oct 11 10:28:57.153883 master-1 kubenswrapper[4771]: I1011 10:28:57.153816 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"826e1279-bc0d-426e-b6e0-5108268f340e","Type":"ContainerStarted","Data":"9a616ae6ac6ffcbc27ae54a54aec1c65046926d3773ee73ab8bfdedb75371f06"}
Oct 11 10:28:57.155931 master-1 kubenswrapper[4771]: I1011 10:28:57.155869 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"750e7efe-07f7-4280-85c7-78250178f965","Type":"ContainerStarted","Data":"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2"}
Oct 11 10:28:57.156018 master-1 kubenswrapper[4771]: I1011 10:28:57.155934 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"750e7efe-07f7-4280-85c7-78250178f965","Type":"ContainerStarted","Data":"23a2788a32f3ff82c23ee63b5bf39839725cd21b8f03a9a69d479fa53764ca28"}
Oct 11 10:28:57.170791 master-1 kubenswrapper[4771]: I1011 10:28:57.169653 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-vhfgw" podStartSLOduration=2.324017334 podStartE2EDuration="8.169629989s" podCreationTimestamp="2025-10-11 10:28:49 +0000 UTC" firstStartedPulling="2025-10-11 10:28:50.27930851 +0000 UTC m=+162.253534991" lastFinishedPulling="2025-10-11 10:28:56.124921205 +0000 UTC m=+168.099147646" observedRunningTime="2025-10-11 10:28:57.16891912 +0000 UTC m=+169.143145591" watchObservedRunningTime="2025-10-11 10:28:57.169629989 +0000 UTC m=+169.143856470"
Oct 11 10:28:57.184117 master-1 kubenswrapper[4771]: I1011 10:28:57.184042 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-1" podStartSLOduration=4.184021791 podStartE2EDuration="4.184021791s" podCreationTimestamp="2025-10-11 10:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:57.182475979 +0000 UTC m=+169.156702480" watchObservedRunningTime="2025-10-11 10:28:57.184021791 +0000 UTC m=+169.158248272"
Oct 11 10:28:57.215931 master-1 kubenswrapper[4771]: I1011 10:28:57.215793 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-1" podStartSLOduration=1.740002676 podStartE2EDuration="5.215765223s" podCreationTimestamp="2025-10-11 10:28:52 +0000 UTC" firstStartedPulling="2025-10-11 10:28:52.67410995 +0000 UTC m=+164.648336391" lastFinishedPulling="2025-10-11 10:28:56.149872487 +0000 UTC m=+168.124098938" observedRunningTime="2025-10-11 10:28:57.199578853 +0000 UTC m=+169.173805314" watchObservedRunningTime="2025-10-11 10:28:57.215765223 +0000 UTC m=+169.189991694"
Oct 11 10:28:57.216662 master-1 kubenswrapper[4771]: I1011 10:28:57.216241 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4"] Oct 11 10:28:57.366630 master-2 kubenswrapper[4776]: I1011 10:28:57.366538 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.367258 master-2 kubenswrapper[4776]: I1011 10:28:57.366652 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.375742 master-2 kubenswrapper[4776]: I1011 10:28:57.375695 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.836138 master-2 kubenswrapper[4776]: I1011 10:28:57.836078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.935587 master-2 kubenswrapper[4776]: I1011 10:28:57.935491 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:57.943366 master-2 kubenswrapper[4776]: I1011 10:28:57.943285 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:58.029813 master-1 kubenswrapper[4771]: I1011 10:28:58.029750 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-9b677"] Oct 11 10:28:58.030549 master-1 kubenswrapper[4771]: E1011 10:28:58.030020 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-796c687c6d-9b677" podUID="94d811f4-4ac9-46b0-b937-d3370b1b4305" Oct 11 10:28:58.164222 master-1 kubenswrapper[4771]: I1011 10:28:58.163767 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4" event={"ID":"b763cbe4-f035-45f2-9f70-4bbb8d5cac87","Type":"ContainerStarted","Data":"19901557922a89a89ad56c45f63b913979e9a7ab1f0d9d02378b0fbc33650102"} Oct 11 10:28:58.164469 master-1 kubenswrapper[4771]: I1011 10:28:58.164238 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:58.171696 master-1 kubenswrapper[4771]: I1011 10:28:58.171662 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:58.301592 master-1 kubenswrapper[4771]: I1011 10:28:58.301531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301604 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301653 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkqb\" (UniqueName: \"kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301716 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301756 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301785 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.301857 master-1 kubenswrapper[4771]: I1011 10:28:58.301853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.302265 master-1 kubenswrapper[4771]: I1011 10:28:58.301887 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 
10:28:58.302265 master-1 kubenswrapper[4771]: I1011 10:28:58.301929 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config\") pod \"94d811f4-4ac9-46b0-b937-d3370b1b4305\" (UID: \"94d811f4-4ac9-46b0-b937-d3370b1b4305\") " Oct 11 10:28:58.304509 master-1 kubenswrapper[4771]: I1011 10:28:58.304324 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:58.304992 master-1 kubenswrapper[4771]: I1011 10:28:58.304935 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:58.305090 master-1 kubenswrapper[4771]: I1011 10:28:58.305007 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config" (OuterVolumeSpecName: "config") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:58.305090 master-1 kubenswrapper[4771]: I1011 10:28:58.305042 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:58.305952 master-1 kubenswrapper[4771]: I1011 10:28:58.305686 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:58.306102 master-1 kubenswrapper[4771]: I1011 10:28:58.306050 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:58.308606 master-1 kubenswrapper[4771]: I1011 10:28:58.308512 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:58.309237 master-1 kubenswrapper[4771]: I1011 10:28:58.309195 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb" (OuterVolumeSpecName: "kube-api-access-dbkqb") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "kube-api-access-dbkqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:58.309433 master-1 kubenswrapper[4771]: I1011 10:28:58.309345 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:58.310348 master-1 kubenswrapper[4771]: I1011 10:28:58.310279 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "94d811f4-4ac9-46b0-b937-d3370b1b4305" (UID: "94d811f4-4ac9-46b0-b937-d3370b1b4305"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:58.403588 master-1 kubenswrapper[4771]: I1011 10:28:58.403514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403613 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403630 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403644 4771 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-node-pullsecrets\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403657 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkqb\" (UniqueName: \"kubernetes.io/projected/94d811f4-4ac9-46b0-b937-d3370b1b4305-kube-api-access-dbkqb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403672 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403684 4771 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d811f4-4ac9-46b0-b937-d3370b1b4305-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403696 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-image-import-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403708 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403720 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.403756 master-1 kubenswrapper[4771]: I1011 10:28:58.403734 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:58.404297 master-1 kubenswrapper[4771]: E1011 10:28:58.403769 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:58.404297 master-1 kubenswrapper[4771]: E1011 10:28:58.403885 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:02.40385068 +0000 UTC m=+174.378077151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found Oct 11 10:28:59.167506 master-1 kubenswrapper[4771]: I1011 10:28:59.167462 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-9b677" Oct 11 10:28:59.198909 master-1 kubenswrapper[4771]: I1011 10:28:59.198842 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:28:59.199741 master-1 kubenswrapper[4771]: I1011 10:28:59.199703 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.200301 master-1 kubenswrapper[4771]: I1011 10:28:59.200258 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-9b677"] Oct 11 10:28:59.203709 master-1 kubenswrapper[4771]: I1011 10:28:59.203660 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-9b677"] Oct 11 10:28:59.203850 master-1 kubenswrapper[4771]: I1011 10:28:59.203824 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:28:59.203931 master-1 kubenswrapper[4771]: I1011 10:28:59.203847 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:28:59.203931 master-1 kubenswrapper[4771]: I1011 10:28:59.203853 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:28:59.204324 master-1 kubenswrapper[4771]: I1011 10:28:59.204293 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:28:59.204324 master-1 
kubenswrapper[4771]: I1011 10:28:59.204324 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:28:59.204547 master-1 kubenswrapper[4771]: I1011 10:28:59.204510 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:28:59.204779 master-1 kubenswrapper[4771]: I1011 10:28:59.204749 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:28:59.204870 master-1 kubenswrapper[4771]: I1011 10:28:59.204847 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:28:59.205574 master-1 kubenswrapper[4771]: I1011 10:28:59.205532 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:59.211127 master-1 kubenswrapper[4771]: I1011 10:28:59.211086 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:28:59.212296 master-1 kubenswrapper[4771]: I1011 10:28:59.212255 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:28:59.312269 master-1 kubenswrapper[4771]: I1011 10:28:59.312171 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312269 master-1 kubenswrapper[4771]: I1011 10:28:59.312268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: 
\"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312515 master-1 kubenswrapper[4771]: I1011 10:28:59.312391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312515 master-1 kubenswrapper[4771]: I1011 10:28:59.312429 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312515 master-1 kubenswrapper[4771]: I1011 10:28:59.312461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312630 master-1 kubenswrapper[4771]: I1011 10:28:59.312588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312728 master-1 kubenswrapper[4771]: I1011 10:28:59.312701 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312772 master-1 kubenswrapper[4771]: I1011 10:28:59.312732 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312804 master-1 kubenswrapper[4771]: I1011 10:28:59.312797 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312840 master-1 kubenswrapper[4771]: I1011 10:28:59.312829 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s269\" (UniqueName: \"kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312882 master-1 kubenswrapper[4771]: I1011 10:28:59.312866 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.312929 master-1 kubenswrapper[4771]: I1011 
10:28:59.312918 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94d811f4-4ac9-46b0-b937-d3370b1b4305-audit\") on node \"master-1\" DevicePath \"\"" Oct 11 10:28:59.414195 master-1 kubenswrapper[4771]: I1011 10:28:59.414030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414195 master-1 kubenswrapper[4771]: I1011 10:28:59.414088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s269\" (UniqueName: \"kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414195 master-1 kubenswrapper[4771]: I1011 10:28:59.414123 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414195 master-1 kubenswrapper[4771]: I1011 10:28:59.414154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414195 master-1 kubenswrapper[4771]: I1011 10:28:59.414177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414209 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414225 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414241 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.414531 master-1 kubenswrapper[4771]: I1011 10:28:59.414524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.415209 master-1 kubenswrapper[4771]: I1011 10:28:59.414600 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.415209 master-1 kubenswrapper[4771]: I1011 10:28:59.414634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.415209 master-1 kubenswrapper[4771]: I1011 10:28:59.414971 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.415584 master-1 kubenswrapper[4771]: I1011 10:28:59.415517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.416021 master-1 kubenswrapper[4771]: I1011 10:28:59.415975 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.416465 master-1 kubenswrapper[4771]: I1011 10:28:59.416405 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.416572 master-1 kubenswrapper[4771]: I1011 10:28:59.416408 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.420072 master-1 kubenswrapper[4771]: I1011 10:28:59.420009 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.420269 master-1 kubenswrapper[4771]: I1011 10:28:59.420201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.421165 master-1 kubenswrapper[4771]: I1011 10:28:59.421096 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.442321 master-1 kubenswrapper[4771]: I1011 10:28:59.441926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s269\" (UniqueName: \"kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269\") pod \"apiserver-555f658fd6-n5n6g\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.516217 master-1 kubenswrapper[4771]: I1011 10:28:59.516149 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:28:59.995985 master-1 kubenswrapper[4771]: I1011 10:28:59.995809 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:29:00.005573 master-1 kubenswrapper[4771]: W1011 10:29:00.005503 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod027736d1_f3d3_490e_9ee1_d08bad7a25b7.slice/crio-0245a7fd6940eab125c14495c22d9aa4a273c8034b951fafcde945d3497b7a29 WatchSource:0}: Error finding container 0245a7fd6940eab125c14495c22d9aa4a273c8034b951fafcde945d3497b7a29: Status 404 returned error can't find the container with id 0245a7fd6940eab125c14495c22d9aa4a273c8034b951fafcde945d3497b7a29 Oct 11 10:29:00.022499 master-1 kubenswrapper[4771]: I1011 10:29:00.022434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:29:00.022716 master-1 kubenswrapper[4771]: E1011 10:29:00.022635 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:00.022838 master-1 kubenswrapper[4771]: E1011 10:29:00.022805 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.022764781 +0000 UTC m=+179.996991262 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found Oct 11 10:29:00.162111 master-1 kubenswrapper[4771]: I1011 10:29:00.162034 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:00.162970 master-1 kubenswrapper[4771]: I1011 10:29:00.162918 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.166432 master-1 kubenswrapper[4771]: I1011 10:29:00.166347 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:29:00.170025 master-1 kubenswrapper[4771]: I1011 10:29:00.169944 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:00.172909 master-1 kubenswrapper[4771]: I1011 10:29:00.172840 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerStarted","Data":"0245a7fd6940eab125c14495c22d9aa4a273c8034b951fafcde945d3497b7a29"} Oct 11 10:29:00.176114 master-1 kubenswrapper[4771]: I1011 10:29:00.176049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4" event={"ID":"b763cbe4-f035-45f2-9f70-4bbb8d5cac87","Type":"ContainerStarted","Data":"9cc8b420bb3cfff536ac575cf0f6beae66dfd3d8bbf199b252ad774b529ae70b"} Oct 11 10:29:00.176114 master-1 kubenswrapper[4771]: I1011 10:29:00.176098 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4" 
event={"ID":"b763cbe4-f035-45f2-9f70-4bbb8d5cac87","Type":"ContainerStarted","Data":"b2ffb3a26e0ef89ca4434fa7ed83e3ba7017b0a577bbe8882a5dee941aa1a52d"} Oct 11 10:29:00.197391 master-1 kubenswrapper[4771]: I1011 10:29:00.197283 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4" podStartSLOduration=2.185435645 podStartE2EDuration="4.197262832s" podCreationTimestamp="2025-10-11 10:28:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:57.314660307 +0000 UTC m=+169.288886778" lastFinishedPulling="2025-10-11 10:28:59.326487524 +0000 UTC m=+171.300713965" observedRunningTime="2025-10-11 10:29:00.195517595 +0000 UTC m=+172.169744076" watchObservedRunningTime="2025-10-11 10:29:00.197262832 +0000 UTC m=+172.171489303" Oct 11 10:29:00.327116 master-1 kubenswrapper[4771]: I1011 10:29:00.326997 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.327507 master-1 kubenswrapper[4771]: I1011 10:29:00.327149 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.327507 master-1 kubenswrapper[4771]: I1011 10:29:00.327236 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir\") pod \"installer-1-master-1\" (UID: 
\"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.429095 master-1 kubenswrapper[4771]: I1011 10:29:00.429013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.429309 master-1 kubenswrapper[4771]: I1011 10:29:00.429114 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.429309 master-1 kubenswrapper[4771]: I1011 10:29:00.429207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.429309 master-1 kubenswrapper[4771]: I1011 10:29:00.429299 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.429587 master-1 kubenswrapper[4771]: I1011 10:29:00.429320 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir\") pod \"installer-1-master-1\" (UID: 
\"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.445067 master-1 kubenswrapper[4771]: I1011 10:29:00.444960 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d811f4-4ac9-46b0-b937-d3370b1b4305" path="/var/lib/kubelet/pods/94d811f4-4ac9-46b0-b937-d3370b1b4305/volumes" Oct 11 10:29:00.462002 master-1 kubenswrapper[4771]: I1011 10:29:00.461918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access\") pod \"installer-1-master-1\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.482855 master-1 kubenswrapper[4771]: I1011 10:29:00.482765 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:00.945475 master-1 kubenswrapper[4771]: I1011 10:29:00.945377 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:01.138919 master-1 kubenswrapper[4771]: I1011 10:29:01.138830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:29:01.139214 master-1 kubenswrapper[4771]: E1011 10:29:01.138994 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. 
No retries permitted until 2025-10-11 10:29:17.138959671 +0000 UTC m=+189.113186152 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:29:01.180238 master-1 kubenswrapper[4771]: I1011 10:29:01.180129 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"6534d9db-a553-4c39-bf4a-014a359ee336","Type":"ContainerStarted","Data":"b5b289645c8dafc708db0dfb37bf1e6882fdc062aac0a46f6f992e36cadc5dc7"} Oct 11 10:29:01.240296 master-1 kubenswrapper[4771]: I1011 10:29:01.240135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:29:01.240568 master-1 kubenswrapper[4771]: E1011 10:29:01.240392 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:17.24034625 +0000 UTC m=+189.214572741 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:29:02.092943 master-1 kubenswrapper[4771]: I1011 10:29:02.092895 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 11 10:29:02.093233 master-1 kubenswrapper[4771]: I1011 10:29:02.093116 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-1" podUID="750e7efe-07f7-4280-85c7-78250178f965" containerName="installer" containerID="cri-o://2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2" gracePeriod=30 Oct 11 10:29:02.453006 master-1 kubenswrapper[4771]: I1011 10:29:02.452918 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:29:02.454030 master-1 kubenswrapper[4771]: E1011 10:29:02.453046 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:02.454030 master-1 kubenswrapper[4771]: E1011 10:29:02.453107 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:10.453090763 +0000 UTC m=+182.427317304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found Oct 11 10:29:02.972720 master-1 kubenswrapper[4771]: I1011 10:29:02.972573 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-1_750e7efe-07f7-4280-85c7-78250178f965/installer/0.log" Oct 11 10:29:02.972720 master-1 kubenswrapper[4771]: I1011 10:29:02.972649 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1" Oct 11 10:29:03.057813 master-2 kubenswrapper[4776]: I1011 10:29:03.057728 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:29:03.058523 master-2 kubenswrapper[4776]: I1011 10:29:03.058299 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:29:03.059646 master-1 kubenswrapper[4771]: I1011 10:29:03.059574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock\") pod \"750e7efe-07f7-4280-85c7-78250178f965\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " Oct 11 10:29:03.059773 master-1 kubenswrapper[4771]: I1011 10:29:03.059727 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access\") pod \"750e7efe-07f7-4280-85c7-78250178f965\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " Oct 11 10:29:03.059773 master-1 kubenswrapper[4771]: I1011 10:29:03.059736 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock" (OuterVolumeSpecName: "var-lock") pod "750e7efe-07f7-4280-85c7-78250178f965" (UID: "750e7efe-07f7-4280-85c7-78250178f965"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:03.059841 master-1 kubenswrapper[4771]: I1011 10:29:03.059798 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir\") pod \"750e7efe-07f7-4280-85c7-78250178f965\" (UID: \"750e7efe-07f7-4280-85c7-78250178f965\") " Oct 11 10:29:03.059934 master-1 kubenswrapper[4771]: I1011 10:29:03.059902 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "750e7efe-07f7-4280-85c7-78250178f965" (UID: "750e7efe-07f7-4280-85c7-78250178f965"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:03.060234 master-1 kubenswrapper[4771]: I1011 10:29:03.060206 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:03.060270 master-1 kubenswrapper[4771]: I1011 10:29:03.060241 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/750e7efe-07f7-4280-85c7-78250178f965-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:03.066601 master-1 kubenswrapper[4771]: I1011 10:29:03.066556 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "750e7efe-07f7-4280-85c7-78250178f965" (UID: "750e7efe-07f7-4280-85c7-78250178f965"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:29:03.085946 master-2 kubenswrapper[4776]: W1011 10:29:03.085882 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b07707_84bd_43a6_a43d_6680decaa210.slice/crio-0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624 WatchSource:0}: Error finding container 0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624: Status 404 returned error can't find the container with id 0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624 Oct 11 10:29:03.161611 master-1 kubenswrapper[4771]: I1011 10:29:03.161564 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/750e7efe-07f7-4280-85c7-78250178f965-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:03.188267 master-1 kubenswrapper[4771]: I1011 10:29:03.188224 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-1_750e7efe-07f7-4280-85c7-78250178f965/installer/0.log" Oct 11 10:29:03.188393 master-1 kubenswrapper[4771]: I1011 10:29:03.188293 4771 generic.go:334] "Generic (PLEG): container finished" podID="750e7efe-07f7-4280-85c7-78250178f965" containerID="2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2" exitCode=1 Oct 11 10:29:03.188393 master-1 kubenswrapper[4771]: I1011 10:29:03.188330 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"750e7efe-07f7-4280-85c7-78250178f965","Type":"ContainerDied","Data":"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2"} Oct 11 10:29:03.188476 master-1 kubenswrapper[4771]: I1011 10:29:03.188408 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" 
event={"ID":"750e7efe-07f7-4280-85c7-78250178f965","Type":"ContainerDied","Data":"23a2788a32f3ff82c23ee63b5bf39839725cd21b8f03a9a69d479fa53764ca28"} Oct 11 10:29:03.188476 master-1 kubenswrapper[4771]: I1011 10:29:03.188454 4771 scope.go:117] "RemoveContainer" containerID="2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2" Oct 11 10:29:03.188531 master-1 kubenswrapper[4771]: I1011 10:29:03.188468 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1" Oct 11 10:29:03.201175 master-1 kubenswrapper[4771]: I1011 10:29:03.201142 4771 scope.go:117] "RemoveContainer" containerID="2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2" Oct 11 10:29:03.201666 master-1 kubenswrapper[4771]: E1011 10:29:03.201605 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2\": container with ID starting with 2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2 not found: ID does not exist" containerID="2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2" Oct 11 10:29:03.201666 master-1 kubenswrapper[4771]: I1011 10:29:03.201639 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2"} err="failed to get container status \"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2\": rpc error: code = NotFound desc = could not find container \"2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2\": container with ID starting with 2bb08e74f6a03f94e752e699929c9aed89e09c0d951507eccf30a241344f6bd2 not found: ID does not exist" Oct 11 10:29:03.219106 master-1 kubenswrapper[4771]: I1011 10:29:03.219070 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 11 10:29:03.223398 master-1 kubenswrapper[4771]: I1011 10:29:03.223346 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 11 10:29:03.436923 master-1 kubenswrapper[4771]: I1011 10:29:03.436821 4771 scope.go:117] "RemoveContainer" containerID="793c72629ffb5d64763cce906980f11774530f02d707e0389b69155b33560c5d" Oct 11 10:29:03.577611 master-2 kubenswrapper[4776]: I1011 10:29:03.577532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:29:03.577896 master-2 kubenswrapper[4776]: E1011 10:29:03.577842 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:03.578015 master-2 kubenswrapper[4776]: E1011 10:29:03.577980 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:35.57794556 +0000 UTC m=+210.362372299 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:29:03.793129 master-2 kubenswrapper[4776]: I1011 10:29:03.793061 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:29:03.809864 master-1 kubenswrapper[4771]: I1011 10:29:03.809778 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rzjcf" Oct 11 10:29:03.858620 master-2 kubenswrapper[4776]: I1011 10:29:03.858492 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" event={"ID":"b7b07707-84bd-43a6-a43d-6680decaa210","Type":"ContainerStarted","Data":"0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624"} Oct 11 10:29:04.197756 master-1 kubenswrapper[4771]: I1011 10:29:04.197617 4771 generic.go:334] "Generic (PLEG): container finished" podID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerID="41af63c058a1e7b90357082e0adac794e0e1b2996f71cfa6b9c3a91b7079c8d7" exitCode=0 Oct 11 10:29:04.198097 master-1 kubenswrapper[4771]: I1011 10:29:04.197702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerDied","Data":"41af63c058a1e7b90357082e0adac794e0e1b2996f71cfa6b9c3a91b7079c8d7"} Oct 11 10:29:04.202081 master-1 kubenswrapper[4771]: I1011 10:29:04.201982 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"6534d9db-a553-4c39-bf4a-014a359ee336","Type":"ContainerStarted","Data":"c9e465db2f016eeb1b9eb6a1701316ad91386e0556613224875082e886221894"} Oct 11 10:29:04.213999 master-1 kubenswrapper[4771]: 
I1011 10:29:04.213941 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-779749f859-5xxzp_e115f8be-9e65-4407-8111-568e5ea8ac1b/kube-rbac-proxy/3.log" Oct 11 10:29:04.216030 master-1 kubenswrapper[4771]: I1011 10:29:04.215953 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp" event={"ID":"e115f8be-9e65-4407-8111-568e5ea8ac1b","Type":"ContainerStarted","Data":"903a925fbd464397f1aac6d43f29ca7e35957aff84f8e3ba36189e56cf222199"} Oct 11 10:29:04.255246 master-1 kubenswrapper[4771]: I1011 10:29:04.255124 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-1" podStartSLOduration=2.228838693 podStartE2EDuration="4.255098543s" podCreationTimestamp="2025-10-11 10:29:00 +0000 UTC" firstStartedPulling="2025-10-11 10:29:00.95580279 +0000 UTC m=+172.930029231" lastFinishedPulling="2025-10-11 10:29:02.98206264 +0000 UTC m=+174.956289081" observedRunningTime="2025-10-11 10:29:04.253290695 +0000 UTC m=+176.227517196" watchObservedRunningTime="2025-10-11 10:29:04.255098543 +0000 UTC m=+176.229325014" Oct 11 10:29:04.446552 master-1 kubenswrapper[4771]: I1011 10:29:04.446458 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750e7efe-07f7-4280-85c7-78250178f965" path="/var/lib/kubelet/pods/750e7efe-07f7-4280-85c7-78250178f965/volumes" Oct 11 10:29:04.692604 master-1 kubenswrapper[4771]: I1011 10:29:04.692532 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:04.692806 master-1 kubenswrapper[4771]: E1011 10:29:04.692725 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750e7efe-07f7-4280-85c7-78250178f965" containerName="installer" Oct 11 10:29:04.692806 master-1 kubenswrapper[4771]: I1011 
10:29:04.692744 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="750e7efe-07f7-4280-85c7-78250178f965" containerName="installer" Oct 11 10:29:04.692866 master-1 kubenswrapper[4771]: I1011 10:29:04.692837 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="750e7efe-07f7-4280-85c7-78250178f965" containerName="installer" Oct 11 10:29:04.693260 master-1 kubenswrapper[4771]: I1011 10:29:04.693233 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.700691 master-1 kubenswrapper[4771]: I1011 10:29:04.700602 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:04.779388 master-1 kubenswrapper[4771]: I1011 10:29:04.779297 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.779656 master-1 kubenswrapper[4771]: I1011 10:29:04.779403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.779656 master-1 kubenswrapper[4771]: I1011 10:29:04.779475 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.880387 
master-1 kubenswrapper[4771]: I1011 10:29:04.880307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.880987 master-1 kubenswrapper[4771]: I1011 10:29:04.880434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.880987 master-1 kubenswrapper[4771]: I1011 10:29:04.880513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.880987 master-1 kubenswrapper[4771]: I1011 10:29:04.880537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.880987 master-1 kubenswrapper[4771]: I1011 10:29:04.880599 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:04.904500 master-1 kubenswrapper[4771]: I1011 
10:29:04.903991 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access\") pod \"installer-3-master-1\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:05.051693 master-1 kubenswrapper[4771]: I1011 10:29:05.051174 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:05.227388 master-1 kubenswrapper[4771]: I1011 10:29:05.221464 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerStarted","Data":"5ee744232b5a66fa90e18d0677b90fd7ff50cae1f9e1afc9158b036b712f32da"} Oct 11 10:29:05.247734 master-1 kubenswrapper[4771]: I1011 10:29:05.247676 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:05.253618 master-1 kubenswrapper[4771]: W1011 10:29:05.253560 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod49766d35_174a_4677_8b2d_e3ed195d0a26.slice/crio-c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430 WatchSource:0}: Error finding container c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430: Status 404 returned error can't find the container with id c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430 Oct 11 10:29:05.870844 master-2 kubenswrapper[4776]: I1011 10:29:05.870717 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" event={"ID":"b7b07707-84bd-43a6-a43d-6680decaa210","Type":"ContainerStarted","Data":"173f2aafa4e9f75815282d30aaf59a9c91879c49ed9a0dc06484b03a065a2298"} Oct 11 10:29:05.889615 master-2 kubenswrapper[4776]: I1011 
10:29:05.889431 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" podStartSLOduration=137.825768576 podStartE2EDuration="2m19.889389399s" podCreationTimestamp="2025-10-11 10:26:46 +0000 UTC" firstStartedPulling="2025-10-11 10:29:03.088469855 +0000 UTC m=+177.872896614" lastFinishedPulling="2025-10-11 10:29:05.152090728 +0000 UTC m=+179.936517437" observedRunningTime="2025-10-11 10:29:05.885728294 +0000 UTC m=+180.670155043" watchObservedRunningTime="2025-10-11 10:29:05.889389399 +0000 UTC m=+180.673816148" Oct 11 10:29:06.226242 master-1 kubenswrapper[4771]: I1011 10:29:06.226164 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"49766d35-174a-4677-8b2d-e3ed195d0a26","Type":"ContainerStarted","Data":"7f8db2473bcbc14ad35cb8dd456f940c8050a9882fcf9aa519950777d8bb0fc0"} Oct 11 10:29:06.226242 master-1 kubenswrapper[4771]: I1011 10:29:06.226218 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"49766d35-174a-4677-8b2d-e3ed195d0a26","Type":"ContainerStarted","Data":"c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430"} Oct 11 10:29:06.517653 master-2 kubenswrapper[4776]: I1011 10:29:06.517593 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:29:06.518129 master-2 kubenswrapper[4776]: E1011 10:29:06.517803 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:06.518230 master-2 kubenswrapper[4776]: E1011 10:29:06.518210 4776 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:38.518177545 +0000 UTC m=+213.302604284 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:29:07.233736 master-1 kubenswrapper[4771]: I1011 10:29:07.233526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerStarted","Data":"893d86a98f61447fa7f11deae879fe95aeccf34e5a1d5e59961a43c4a181ec43"} Oct 11 10:29:07.261146 master-1 kubenswrapper[4771]: I1011 10:29:07.261029 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-1" podStartSLOduration=3.261005871 podStartE2EDuration="3.261005871s" podCreationTimestamp="2025-10-11 10:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:06.246433927 +0000 UTC m=+178.220660378" watchObservedRunningTime="2025-10-11 10:29:07.261005871 +0000 UTC m=+179.235232352" Oct 11 10:29:07.262296 master-1 kubenswrapper[4771]: I1011 10:29:07.262229 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podStartSLOduration=2.610306823 podStartE2EDuration="9.262221854s" podCreationTimestamp="2025-10-11 10:28:58 +0000 UTC" firstStartedPulling="2025-10-11 10:29:00.009722205 +0000 UTC m=+171.983948676" lastFinishedPulling="2025-10-11 10:29:06.661637226 +0000 UTC m=+178.635863707" 
observedRunningTime="2025-10-11 10:29:07.25982775 +0000 UTC m=+179.234054251" watchObservedRunningTime="2025-10-11 10:29:07.262221854 +0000 UTC m=+179.236448325" Oct 11 10:29:08.120313 master-1 kubenswrapper[4771]: I1011 10:29:08.120225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:29:08.120647 master-1 kubenswrapper[4771]: E1011 10:29:08.120528 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:08.120683 master-1 kubenswrapper[4771]: E1011 10:29:08.120661 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:24.120633123 +0000 UTC m=+196.094859594 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found Oct 11 10:29:08.548237 master-2 kubenswrapper[4776]: I1011 10:29:08.548059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.548237 master-2 kubenswrapper[4776]: I1011 10:29:08.548209 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.555202 master-2 kubenswrapper[4776]: I1011 10:29:08.555147 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.555891 master-2 kubenswrapper[4776]: I1011 10:29:08.555838 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649827 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.649943 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.649983 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650033 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650069 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod 
\"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650235 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.655588 master-2 kubenswrapper[4776]: I1011 10:29:08.655504 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:08.655740 master-2 kubenswrapper[4776]: I1011 10:29:08.655506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.656166 master-2 kubenswrapper[4776]: I1011 10:29:08.656100 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.656737 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.656827 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.657276 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.657289 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.657719 master-2 kubenswrapper[4776]: I1011 10:29:08.657605 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.659125 master-2 kubenswrapper[4776]: I1011 10:29:08.659069 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.659468 master-2 kubenswrapper[4776]: I1011 10:29:08.659352 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.678842 master-2 kubenswrapper[4776]: I1011 10:29:08.678792 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.697773 master-2 kubenswrapper[4776]: I1011 10:29:08.697692 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.728524 master-2 kubenswrapper[4776]: I1011 10:29:08.728438 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.797018 master-2 kubenswrapper[4776]: I1011 10:29:08.796959 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.804917 master-2 kubenswrapper[4776]: I1011 10:29:08.804444 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.838282 master-2 kubenswrapper[4776]: I1011 10:29:08.838221 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.877367 master-2 kubenswrapper[4776]: I1011 10:29:08.875435 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.886552 master-2 kubenswrapper[4776]: I1011 10:29:08.886495 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.901146 master-2 kubenswrapper[4776]: I1011 10:29:08.901093 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.927423 master-2 kubenswrapper[4776]: I1011 10:29:08.927298 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.948641 master-2 kubenswrapper[4776]: I1011 10:29:08.947998 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:09.109453 master-2 kubenswrapper[4776]: I1011 10:29:09.107353 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"] Oct 11 10:29:09.162305 master-2 kubenswrapper[4776]: I1011 10:29:09.162254 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"] Oct 11 10:29:09.173501 master-2 kubenswrapper[4776]: I1011 10:29:09.173135 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:29:09.249244 master-2 kubenswrapper[4776]: I1011 10:29:09.249201 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"] Oct 11 10:29:09.250598 master-2 kubenswrapper[4776]: I1011 10:29:09.250501 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:29:09.257021 master-2 kubenswrapper[4776]: W1011 10:29:09.256971 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbaa6ca7_9865_42f6_8030_2decf702caa1.slice/crio-942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa WatchSource:0}: Error finding container 942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa: Status 404 returned error can't find the container with id 942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa Oct 11 10:29:09.257882 master-2 kubenswrapper[4776]: W1011 10:29:09.257855 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8 WatchSource:0}: 
Error finding container d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8: Status 404 returned error can't find the container with id d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8 Oct 11 10:29:09.393556 master-2 kubenswrapper[4776]: I1011 10:29:09.393503 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"] Oct 11 10:29:09.400134 master-2 kubenswrapper[4776]: W1011 10:29:09.400094 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66dee5be_e631_462d_8a2c_51a2031a83a2.slice/crio-2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027 WatchSource:0}: Error finding container 2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027: Status 404 returned error can't find the container with id 2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027 Oct 11 10:29:09.432369 master-2 kubenswrapper[4776]: I1011 10:29:09.432316 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"] Oct 11 10:29:09.437969 master-2 kubenswrapper[4776]: I1011 10:29:09.437922 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"] Oct 11 10:29:09.439641 master-2 kubenswrapper[4776]: W1011 10:29:09.439609 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4536c84_d8f3_4808_bf8b_9b40695f46de.slice/crio-d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0 WatchSource:0}: Error finding container d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0: Status 404 returned error can't find the container with id d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0 Oct 11 10:29:09.441444 master-2 kubenswrapper[4776]: W1011 
10:29:09.441400 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4354488_1b32_422d_bb06_767a952192a5.slice/crio-a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b WatchSource:0}: Error finding container a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b: Status 404 returned error can't find the container with id a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b Oct 11 10:29:09.446621 master-2 kubenswrapper[4776]: I1011 10:29:09.446590 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"] Oct 11 10:29:09.448269 master-2 kubenswrapper[4776]: I1011 10:29:09.448179 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"] Oct 11 10:29:09.453745 master-2 kubenswrapper[4776]: W1011 10:29:09.453662 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3281eb7_fb96_4bae_8c55_b79728d426b0.slice/crio-8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff WatchSource:0}: Error finding container 8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff: Status 404 returned error can't find the container with id 8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff Oct 11 10:29:09.455537 master-2 kubenswrapper[4776]: W1011 10:29:09.455504 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548333d7_2374_4c38_b4fd_45c2bee2ac4e.slice/crio-724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778 WatchSource:0}: Error finding container 724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778: Status 404 returned error can't find the container with id 724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778 Oct 
11 10:29:09.517085 master-1 kubenswrapper[4771]: I1011 10:29:09.516973 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:29:09.517085 master-1 kubenswrapper[4771]: I1011 10:29:09.517071 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:29:09.527819 master-1 kubenswrapper[4771]: I1011 10:29:09.527761 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:29:09.594853 master-2 kubenswrapper[4776]: I1011 10:29:09.594728 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"] Oct 11 10:29:09.900707 master-2 kubenswrapper[4776]: I1011 10:29:09.900557 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027"} Oct 11 10:29:09.902151 master-2 kubenswrapper[4776]: I1011 10:29:09.902100 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"e72cc89f7bb8839ad3fcaec89df9b0ae1c41473603f0bffc6a5201981557d826"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"174f4a4d112f231ff625c542cc912a8e0a801f8c86c1b8c10689aa8a9d412a99"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904637 4776 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"873a86c033a5133f32e69aa7992e031067b942254453e75c7b18b231e747b156"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904663 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0"} Oct 11 10:29:09.906428 master-2 kubenswrapper[4776]: I1011 10:29:09.906399 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"1f4d3a48a71555ffdcdf1c1073fbe86ebf6d442fb70386b341facb9625835980"} Oct 11 10:29:09.911699 master-2 kubenswrapper[4776]: I1011 10:29:09.911617 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"4bc4056a907ac0ec224d8bd696e843da4318af4a567956c2044bab87181c045c"} Oct 11 10:29:09.911699 master-2 kubenswrapper[4776]: I1011 10:29:09.911667 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"30e3aec7445b067ba5a72f4ede367eb6434e3a5b3933f665a386dce066bcbfaa"} Oct 11 10:29:09.914355 master-2 kubenswrapper[4776]: I1011 10:29:09.914138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" 
event={"ID":"d4354488-1b32-422d-bb06-767a952192a5","Type":"ContainerStarted","Data":"a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b"} Oct 11 10:29:09.916037 master-2 kubenswrapper[4776]: I1011 10:29:09.915705 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"5926a997226e15274953a94c6e3df1ecbe8d31dc4836d2e8edaaefd2851bd608"} Oct 11 10:29:09.916037 master-2 kubenswrapper[4776]: I1011 10:29:09.916019 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778"} Oct 11 10:29:09.916604 master-2 kubenswrapper[4776]: I1011 10:29:09.916561 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8"} Oct 11 10:29:09.917791 master-2 kubenswrapper[4776]: I1011 10:29:09.917748 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" event={"ID":"e3281eb7-fb96-4bae-8c55-b79728d426b0","Type":"ContainerStarted","Data":"8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff"} Oct 11 10:29:09.918983 master-2 kubenswrapper[4776]: I1011 10:29:09.918893 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" event={"ID":"dbaa6ca7-9865-42f6-8030-2decf702caa1","Type":"ContainerStarted","Data":"942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa"} Oct 11 10:29:09.919942 master-2 kubenswrapper[4776]: I1011 10:29:09.919914 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" event={"ID":"7e860f23-9dae-4606-9426-0edec38a332f","Type":"ContainerStarted","Data":"b805db0d0bd2ed8118b82e487667e217574535ac24dd585e38e0f7c1717a52dd"} Oct 11 10:29:09.923324 master-2 kubenswrapper[4776]: I1011 10:29:09.923285 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" podStartSLOduration=157.923276427 podStartE2EDuration="2m37.923276427s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:09.921290649 +0000 UTC m=+184.705717358" watchObservedRunningTime="2025-10-11 10:29:09.923276427 +0000 UTC m=+184.707703136" Oct 11 10:29:10.254840 master-1 kubenswrapper[4771]: I1011 10:29:10.254755 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:29:10.547686 master-1 kubenswrapper[4771]: I1011 10:29:10.547534 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:29:10.548478 master-1 kubenswrapper[4771]: E1011 10:29:10.547698 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:10.548478 master-1 kubenswrapper[4771]: E1011 10:29:10.547806 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:29:26.547775572 +0000 UTC m=+198.522002043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found Oct 11 10:29:12.092532 master-1 kubenswrapper[4771]: I1011 10:29:12.092436 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:12.093550 master-1 kubenswrapper[4771]: I1011 10:29:12.092682 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-1" podUID="49766d35-174a-4677-8b2d-e3ed195d0a26" containerName="installer" containerID="cri-o://7f8db2473bcbc14ad35cb8dd456f940c8050a9882fcf9aa519950777d8bb0fc0" gracePeriod=30 Oct 11 10:29:12.272423 master-1 kubenswrapper[4771]: I1011 10:29:12.272323 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_49766d35-174a-4677-8b2d-e3ed195d0a26/installer/0.log" Oct 11 10:29:12.272687 master-1 kubenswrapper[4771]: I1011 10:29:12.272431 4771 generic.go:334] "Generic (PLEG): container finished" podID="49766d35-174a-4677-8b2d-e3ed195d0a26" containerID="7f8db2473bcbc14ad35cb8dd456f940c8050a9882fcf9aa519950777d8bb0fc0" exitCode=1 Oct 11 10:29:12.272687 master-1 kubenswrapper[4771]: I1011 10:29:12.272488 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"49766d35-174a-4677-8b2d-e3ed195d0a26","Type":"ContainerDied","Data":"7f8db2473bcbc14ad35cb8dd456f940c8050a9882fcf9aa519950777d8bb0fc0"} Oct 11 10:29:12.436134 master-1 kubenswrapper[4771]: I1011 10:29:12.435983 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9nzpz"] Oct 11 10:29:12.437118 master-1 
kubenswrapper[4771]: I1011 10:29:12.437068 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.442257 master-1 kubenswrapper[4771]: I1011 10:29:12.442203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 10:29:12.442799 master-1 kubenswrapper[4771]: I1011 10:29:12.442769 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 11 10:29:12.442922 master-1 kubenswrapper[4771]: I1011 10:29:12.442883 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 11 10:29:12.443099 master-1 kubenswrapper[4771]: I1011 10:29:12.443064 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 11 10:29:12.448804 master-2 kubenswrapper[4776]: I1011 10:29:12.448738 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xmz7m"] Oct 11 10:29:12.449416 master-2 kubenswrapper[4776]: I1011 10:29:12.449387 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.454004 master-2 kubenswrapper[4776]: I1011 10:29:12.453903 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 10:29:12.499373 master-2 kubenswrapper[4776]: I1011 10:29:12.499301 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499373 master-2 kubenswrapper[4776]: I1011 10:29:12.499371 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499682 master-2 kubenswrapper[4776]: I1011 10:29:12.499448 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499797 master-2 kubenswrapper[4776]: I1011 10:29:12.499706 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.501883 master-1 kubenswrapper[4771]: I1011 10:29:12.501829 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_49766d35-174a-4677-8b2d-e3ed195d0a26/installer/0.log" Oct 11 10:29:12.502012 master-1 kubenswrapper[4771]: I1011 10:29:12.501908 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:12.569486 master-1 kubenswrapper[4771]: I1011 10:29:12.569410 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.569486 master-1 kubenswrapper[4771]: I1011 10:29:12.569456 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-proxy-tls\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.569486 master-1 kubenswrapper[4771]: I1011 10:29:12.569496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-rootfs\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.569879 master-1 kubenswrapper[4771]: I1011 10:29:12.569516 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kzt9h\" (UniqueName: \"kubernetes.io/projected/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-kube-api-access-kzt9h\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600688 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600729 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 
10:29:12.601189 master-2 kubenswrapper[4776]: I1011 10:29:12.600852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.601757 master-2 kubenswrapper[4776]: I1011 10:29:12.601727 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.607139 master-2 kubenswrapper[4776]: I1011 10:29:12.607103 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.625082 master-2 kubenswrapper[4776]: I1011 10:29:12.625028 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.670191 master-1 kubenswrapper[4771]: I1011 10:29:12.670059 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock\") pod \"49766d35-174a-4677-8b2d-e3ed195d0a26\" (UID: 
\"49766d35-174a-4677-8b2d-e3ed195d0a26\") " Oct 11 10:29:12.670191 master-1 kubenswrapper[4771]: I1011 10:29:12.670186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock" (OuterVolumeSpecName: "var-lock") pod "49766d35-174a-4677-8b2d-e3ed195d0a26" (UID: "49766d35-174a-4677-8b2d-e3ed195d0a26"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:12.670552 master-1 kubenswrapper[4771]: I1011 10:29:12.670212 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir\") pod \"49766d35-174a-4677-8b2d-e3ed195d0a26\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " Oct 11 10:29:12.670552 master-1 kubenswrapper[4771]: I1011 10:29:12.670238 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49766d35-174a-4677-8b2d-e3ed195d0a26" (UID: "49766d35-174a-4677-8b2d-e3ed195d0a26"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:12.670552 master-1 kubenswrapper[4771]: I1011 10:29:12.670322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access\") pod \"49766d35-174a-4677-8b2d-e3ed195d0a26\" (UID: \"49766d35-174a-4677-8b2d-e3ed195d0a26\") " Oct 11 10:29:12.670800 master-1 kubenswrapper[4771]: I1011 10:29:12.670608 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzt9h\" (UniqueName: \"kubernetes.io/projected/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-kube-api-access-kzt9h\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.670897 master-1 kubenswrapper[4771]: I1011 10:29:12.670875 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.670988 master-1 kubenswrapper[4771]: I1011 10:29:12.670916 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-proxy-tls\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.670988 master-1 kubenswrapper[4771]: I1011 10:29:12.670972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-rootfs\") pod \"machine-config-daemon-9nzpz\" 
(UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.671205 master-1 kubenswrapper[4771]: I1011 10:29:12.671074 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:12.671205 master-1 kubenswrapper[4771]: I1011 10:29:12.671101 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49766d35-174a-4677-8b2d-e3ed195d0a26-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:12.671205 master-1 kubenswrapper[4771]: I1011 10:29:12.671192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-rootfs\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.672102 master-1 kubenswrapper[4771]: I1011 10:29:12.672028 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.674835 master-1 kubenswrapper[4771]: I1011 10:29:12.674765 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49766d35-174a-4677-8b2d-e3ed195d0a26" (UID: "49766d35-174a-4677-8b2d-e3ed195d0a26"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:29:12.675764 master-1 kubenswrapper[4771]: I1011 10:29:12.675719 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-proxy-tls\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.682010 master-2 kubenswrapper[4776]: I1011 10:29:12.681959 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:29:12.694842 master-1 kubenswrapper[4771]: I1011 10:29:12.694636 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzt9h\" (UniqueName: \"kubernetes.io/projected/ebb73d72-cbb7-4736-870e-79e86c9fa7f5-kube-api-access-kzt9h\") pod \"machine-config-daemon-9nzpz\" (UID: \"ebb73d72-cbb7-4736-870e-79e86c9fa7f5\") " pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.756272 master-1 kubenswrapper[4771]: I1011 10:29:12.756124 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" Oct 11 10:29:12.767365 master-2 kubenswrapper[4776]: I1011 10:29:12.767224 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.771940 master-1 kubenswrapper[4771]: I1011 10:29:12.771870 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49766d35-174a-4677-8b2d-e3ed195d0a26-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:12.883333 master-1 kubenswrapper[4771]: I1011 10:29:12.883225 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-4pm7x" Oct 11 10:29:13.277499 master-1 kubenswrapper[4771]: I1011 10:29:13.277460 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_49766d35-174a-4677-8b2d-e3ed195d0a26/installer/0.log" Oct 11 10:29:13.278215 master-1 kubenswrapper[4771]: I1011 10:29:13.277593 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 11 10:29:13.278847 master-1 kubenswrapper[4771]: I1011 10:29:13.278427 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"49766d35-174a-4677-8b2d-e3ed195d0a26","Type":"ContainerDied","Data":"c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430"} Oct 11 10:29:13.278847 master-1 kubenswrapper[4771]: I1011 10:29:13.278568 4771 scope.go:117] "RemoveContainer" containerID="7f8db2473bcbc14ad35cb8dd456f940c8050a9882fcf9aa519950777d8bb0fc0" Oct 11 10:29:13.285710 master-1 kubenswrapper[4771]: I1011 10:29:13.284556 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" event={"ID":"ebb73d72-cbb7-4736-870e-79e86c9fa7f5","Type":"ContainerStarted","Data":"101752d726c211a4147854bc821564d59ba9692b10a61e8a2aebecd7573d5028"} Oct 11 10:29:13.285710 master-1 kubenswrapper[4771]: I1011 10:29:13.284614 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" event={"ID":"ebb73d72-cbb7-4736-870e-79e86c9fa7f5","Type":"ContainerStarted","Data":"bbebfd3c58947a78314370999f26a02214de1f1c409cbccc2d5da4102c7788b3"} Oct 11 10:29:13.285710 master-1 kubenswrapper[4771]: I1011 10:29:13.284640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" event={"ID":"ebb73d72-cbb7-4736-870e-79e86c9fa7f5","Type":"ContainerStarted","Data":"461a26a3bdce412f46dca35747cd452487253f3acb4cd435bdee26610015c724"} Oct 11 10:29:13.302454 master-1 kubenswrapper[4771]: I1011 10:29:13.302339 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" podStartSLOduration=1.302311408 podStartE2EDuration="1.302311408s" podCreationTimestamp="2025-10-11 10:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:13.300503579 +0000 UTC m=+185.274730080" watchObservedRunningTime="2025-10-11 10:29:13.302311408 +0000 UTC m=+185.276537849" Oct 11 10:29:13.345812 master-1 kubenswrapper[4771]: I1011 10:29:13.345722 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:13.345924 master-1 kubenswrapper[4771]: I1011 10:29:13.345830 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 11 10:29:14.441608 master-1 kubenswrapper[4771]: I1011 10:29:14.441560 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49766d35-174a-4677-8b2d-e3ed195d0a26" path="/var/lib/kubelet/pods/49766d35-174a-4677-8b2d-e3ed195d0a26/volumes" Oct 11 10:29:14.497896 master-1 kubenswrapper[4771]: I1011 10:29:14.497786 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-4-master-1"] Oct 11 10:29:14.498209 master-1 kubenswrapper[4771]: E1011 10:29:14.498013 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49766d35-174a-4677-8b2d-e3ed195d0a26" containerName="installer" Oct 11 10:29:14.498209 master-1 kubenswrapper[4771]: I1011 10:29:14.498039 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49766d35-174a-4677-8b2d-e3ed195d0a26" containerName="installer" Oct 11 10:29:14.498209 master-1 kubenswrapper[4771]: I1011 10:29:14.498151 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49766d35-174a-4677-8b2d-e3ed195d0a26" containerName="installer" Oct 11 10:29:14.498759 master-1 kubenswrapper[4771]: I1011 10:29:14.498703 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.506500 master-1 kubenswrapper[4771]: I1011 10:29:14.506442 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-1"] Oct 11 10:29:14.591925 master-1 kubenswrapper[4771]: I1011 10:29:14.591808 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.591925 master-1 kubenswrapper[4771]: I1011 10:29:14.591888 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.592333 master-1 kubenswrapper[4771]: I1011 10:29:14.591978 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.592600 master-1 kubenswrapper[4771]: E1011 10:29:14.592521 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod49766d35_174a_4677_8b2d_e3ed195d0a26.slice/crio-c590919bb57c8eb20558e4148e5af322c480e4ab020811da46e7b9c3247ad430\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod007dcbab_9e3e_4dcd_9ad9_0ea8dd07dfc7.slice/crio-conmon-80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod49766d35_174a_4677_8b2d_e3ed195d0a26.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod007dcbab_9e3e_4dcd_9ad9_0ea8dd07dfc7.slice/crio-80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:29:14.693444 master-1 kubenswrapper[4771]: I1011 10:29:14.693224 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.693444 master-1 kubenswrapper[4771]: I1011 10:29:14.693377 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " 
pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.693444 master-1 kubenswrapper[4771]: I1011 10:29:14.693400 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.693755 master-1 kubenswrapper[4771]: I1011 10:29:14.693517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.693755 master-1 kubenswrapper[4771]: I1011 10:29:14.693526 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.717819 master-1 kubenswrapper[4771]: I1011 10:29:14.717757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 11 10:29:14.757536 master-1 kubenswrapper[4771]: I1011 10:29:14.757425 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:14.757849 master-1 kubenswrapper[4771]: I1011 10:29:14.757768 4771 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/installer-1-master-1" podUID="6534d9db-a553-4c39-bf4a-014a359ee336" containerName="installer" containerID="cri-o://c9e465db2f016eeb1b9eb6a1701316ad91386e0556613224875082e886221894" gracePeriod=30
Oct 11 10:29:14.831688 master-1 kubenswrapper[4771]: I1011 10:29:14.831642 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7/installer/0.log"
Oct 11 10:29:14.831798 master-1 kubenswrapper[4771]: I1011 10:29:14.831729 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1"
Oct 11 10:29:14.856425 master-1 kubenswrapper[4771]: I1011 10:29:14.853518 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1"
Oct 11 10:29:14.997236 master-1 kubenswrapper[4771]: I1011 10:29:14.997172 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access\") pod \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") "
Oct 11 10:29:14.997471 master-1 kubenswrapper[4771]: I1011 10:29:14.997281 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock\") pod \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") "
Oct 11 10:29:14.997471 master-1 kubenswrapper[4771]: I1011 10:29:14.997311 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir\") pod \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\" (UID: \"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7\") "
Oct 11 10:29:14.997562 master-1 kubenswrapper[4771]: I1011 10:29:14.997447 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock" (OuterVolumeSpecName: "var-lock") pod "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" (UID: "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:29:14.997562 master-1 kubenswrapper[4771]: I1011 10:29:14.997509 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" (UID: "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:29:15.003160 master-1 kubenswrapper[4771]: I1011 10:29:15.003094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" (UID: "007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:29:15.099164 master-1 kubenswrapper[4771]: I1011 10:29:15.099018 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:15.099164 master-1 kubenswrapper[4771]: I1011 10:29:15.099119 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:15.099164 master-1 kubenswrapper[4771]: I1011 10:29:15.099143 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:15.278829 master-1 kubenswrapper[4771]: I1011 10:29:15.278639 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-1"]
Oct 11 10:29:15.287461 master-1 kubenswrapper[4771]: W1011 10:29:15.287347 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7662f87a_13ba_439c_b386_05e68284803c.slice/crio-ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc WatchSource:0}: Error finding container ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc: Status 404 returned error can't find the container with id ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc
Oct 11 10:29:15.294300 master-1 kubenswrapper[4771]: I1011 10:29:15.294255 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"7662f87a-13ba-439c-b386-05e68284803c","Type":"ContainerStarted","Data":"ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc"}
Oct 11 10:29:15.296004 master-1 kubenswrapper[4771]: I1011 10:29:15.295941 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7/installer/0.log"
Oct 11 10:29:15.296082 master-1 kubenswrapper[4771]: I1011 10:29:15.296012 4771 generic.go:334] "Generic (PLEG): container finished" podID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" containerID="80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90" exitCode=1
Oct 11 10:29:15.296082 master-1 kubenswrapper[4771]: I1011 10:29:15.296057 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7","Type":"ContainerDied","Data":"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"}
Oct 11 10:29:15.296169 master-1 kubenswrapper[4771]: I1011 10:29:15.296091 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7","Type":"ContainerDied","Data":"862c7a0762d99806cbf395198a8a115efc48c49b1a93e1cd22e9f82545990f2e"}
Oct 11 10:29:15.296169 master-1 kubenswrapper[4771]: I1011 10:29:15.296116 4771 scope.go:117] "RemoveContainer" containerID="80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"
Oct 11 10:29:15.296290 master-1 kubenswrapper[4771]: I1011 10:29:15.296250 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1"
Oct 11 10:29:15.322948 master-1 kubenswrapper[4771]: I1011 10:29:15.322885 4771 scope.go:117] "RemoveContainer" containerID="80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"
Oct 11 10:29:15.323648 master-1 kubenswrapper[4771]: E1011 10:29:15.323587 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90\": container with ID starting with 80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90 not found: ID does not exist" containerID="80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"
Oct 11 10:29:15.323717 master-1 kubenswrapper[4771]: I1011 10:29:15.323661 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90"} err="failed to get container status \"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90\": rpc error: code = NotFound desc = could not find container \"80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90\": container with ID starting with 80fbdcaea7022dfed31b23d1e5cd04123cb13507148681a1c855ca79f442ec90 not found: ID does not exist"
Oct 11 10:29:15.334271 master-1 kubenswrapper[4771]: I1011 10:29:15.334217 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"]
Oct 11 10:29:15.337812 master-1 kubenswrapper[4771]: I1011 10:29:15.337766 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"]
Oct 11 10:29:16.303213 master-1 kubenswrapper[4771]: I1011 10:29:16.303094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"7662f87a-13ba-439c-b386-05e68284803c","Type":"ContainerStarted","Data":"6597ee1a813020ee9e9d9c3bc4ac9547370cdcefee548bc443d67590ef76026d"}
Oct 11 10:29:16.444524 master-1 kubenswrapper[4771]: I1011 10:29:16.444437 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" path="/var/lib/kubelet/pods/007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7/volumes"
Oct 11 10:29:17.230010 master-1 kubenswrapper[4771]: I1011 10:29:17.229898 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:29:17.230301 master-1 kubenswrapper[4771]: E1011 10:29:17.230180 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:29:49.230137499 +0000 UTC m=+221.204363980 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:29:17.330730 master-1 kubenswrapper[4771]: I1011 10:29:17.330635 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:29:17.331618 master-1 kubenswrapper[4771]: E1011 10:29:17.330891 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:49.330861222 +0000 UTC m=+221.305087663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:29:17.556131 master-1 kubenswrapper[4771]: I1011 10:29:17.556006 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-1" podStartSLOduration=3.555982437 podStartE2EDuration="3.555982437s" podCreationTimestamp="2025-10-11 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:16.319624578 +0000 UTC m=+188.293851119" watchObservedRunningTime="2025-10-11 10:29:17.555982437 +0000 UTC m=+189.530208878"
Oct 11 10:29:17.556753 master-1 kubenswrapper[4771]: I1011 10:29:17.556707 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"]
Oct 11 10:29:17.556877 master-1 kubenswrapper[4771]: E1011 10:29:17.556850 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" containerName="installer"
Oct 11 10:29:17.556877 master-1 kubenswrapper[4771]: I1011 10:29:17.556864 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" containerName="installer"
Oct 11 10:29:17.557067 master-1 kubenswrapper[4771]: I1011 10:29:17.556924 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="007dcbab-9e3e-4dcd-9ad9-0ea8dd07dfc7" containerName="installer"
Oct 11 10:29:17.557266 master-1 kubenswrapper[4771]: I1011 10:29:17.557222 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.567800 master-1 kubenswrapper[4771]: I1011 10:29:17.567728 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"]
Oct 11 10:29:17.735184 master-1 kubenswrapper[4771]: I1011 10:29:17.735075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.735519 master-1 kubenswrapper[4771]: I1011 10:29:17.735400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.735519 master-1 kubenswrapper[4771]: I1011 10:29:17.735483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.837215 master-1 kubenswrapper[4771]: I1011 10:29:17.837011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.837215 master-1 kubenswrapper[4771]: I1011 10:29:17.837109 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.837215 master-1 kubenswrapper[4771]: I1011 10:29:17.837183 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.837649 master-1 kubenswrapper[4771]: I1011 10:29:17.837220 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.837649 master-1 kubenswrapper[4771]: I1011 10:29:17.837339 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.869203 master-1 kubenswrapper[4771]: I1011 10:29:17.869115 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access\") pod \"installer-2-master-1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:17.871842 master-1 kubenswrapper[4771]: I1011 10:29:17.871797 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 11 10:29:18.136998 master-1 kubenswrapper[4771]: I1011 10:29:18.136826 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"]
Oct 11 10:29:18.145260 master-1 kubenswrapper[4771]: W1011 10:29:18.145198 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod792389a1_400d_4a07_a0a5_e80b2edfd8f1.slice/crio-278d42f198fc93ee50b135376d28ae4eb2fe4bcf6f5f1c9223b4e9e7ffd7be30 WatchSource:0}: Error finding container 278d42f198fc93ee50b135376d28ae4eb2fe4bcf6f5f1c9223b4e9e7ffd7be30: Status 404 returned error can't find the container with id 278d42f198fc93ee50b135376d28ae4eb2fe4bcf6f5f1c9223b4e9e7ffd7be30
Oct 11 10:29:18.315016 master-1 kubenswrapper[4771]: I1011 10:29:18.314535 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"792389a1-400d-4a07-a0a5-e80b2edfd8f1","Type":"ContainerStarted","Data":"278d42f198fc93ee50b135376d28ae4eb2fe4bcf6f5f1c9223b4e9e7ffd7be30"}
Oct 11 10:29:19.322701 master-1 kubenswrapper[4771]: I1011 10:29:19.322613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"792389a1-400d-4a07-a0a5-e80b2edfd8f1","Type":"ContainerStarted","Data":"1847b9a9f31d4cf6b7fede3d6231e62c7c7aec1680e7c800a880c6ba363a8798"}
Oct 11 10:29:19.342978 master-1 kubenswrapper[4771]: I1011 10:29:19.342824 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-1" podStartSLOduration=2.342799813 podStartE2EDuration="2.342799813s" podCreationTimestamp="2025-10-11 10:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:19.342788622 +0000 UTC m=+191.317015063" watchObservedRunningTime="2025-10-11 10:29:19.342799813 +0000 UTC m=+191.317026274"
Oct 11 10:29:20.978195 master-2 kubenswrapper[4776]: I1011 10:29:20.977498 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" event={"ID":"e3281eb7-fb96-4bae-8c55-b79728d426b0","Type":"ContainerStarted","Data":"10b004bcf8fd1ef0733b195df6589766b1519ee70424b80772e6e7e1bc36c75e"}
Oct 11 10:29:20.978195 master-2 kubenswrapper[4776]: I1011 10:29:20.977915 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:29:20.988001 master-2 kubenswrapper[4776]: I1011 10:29:20.987076 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" event={"ID":"dbaa6ca7-9865-42f6-8030-2decf702caa1","Type":"ContainerStarted","Data":"44ceb896cc8343bbb3f15f6ce236e68c97335c0859c31751e10b1cff6a07681c"}
Oct 11 10:29:20.988001 master-2 kubenswrapper[4776]: I1011 10:29:20.987208 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"
Oct 11 10:29:20.989484 master-2 kubenswrapper[4776]: I1011 10:29:20.988978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"ebb38a29026c752699221fdf069077ff027321233818c7fd1baeae0ce79ca4c1"}
Oct 11 10:29:20.990361 master-2 kubenswrapper[4776]: I1011 10:29:20.989857 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"
Oct 11 10:29:20.991760 master-2 kubenswrapper[4776]: I1011 10:29:20.991430
4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" event={"ID":"7e860f23-9dae-4606-9426-0edec38a332f","Type":"ContainerStarted","Data":"abcfaa1bb3973d38dfde3d5e4981f116ced123c94a2a75e51dd75e4997f3fd4d"}
Oct 11 10:29:20.997173 master-2 kubenswrapper[4776]: I1011 10:29:20.997121 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" event={"ID":"d4354488-1b32-422d-bb06-767a952192a5","Type":"ContainerStarted","Data":"d79633d40a0d1afd1ab3529abe17263cbee7f6776b1af5edf3ba2ba654773573"}
Oct 11 10:29:20.997431 master-2 kubenswrapper[4776]: I1011 10:29:20.997372 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" podStartSLOduration=157.908370068 podStartE2EDuration="2m48.9973565s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.455216249 +0000 UTC m=+184.239642958" lastFinishedPulling="2025-10-11 10:29:20.544202681 +0000 UTC m=+195.328629390" observedRunningTime="2025-10-11 10:29:20.995850536 +0000 UTC m=+195.780277245" watchObservedRunningTime="2025-10-11 10:29:20.9973565 +0000 UTC m=+195.781783209"
Oct 11 10:29:20.997962 master-2 kubenswrapper[4776]: I1011 10:29:20.997904 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:29:21.004884 master-2 kubenswrapper[4776]: I1011 10:29:21.003145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd"}
Oct 11 10:29:21.006785 master-2 kubenswrapper[4776]: I1011 10:29:21.006762 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3"}
Oct 11 10:29:21.007309 master-2 kubenswrapper[4776]: I1011 10:29:21.007289 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"
Oct 11 10:29:21.008519 master-2 kubenswrapper[4776]: I1011 10:29:21.008455 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"c085d473f14ad61623a8d88060e27a54a61d086f47065481d8611834348b20db"}
Oct 11 10:29:21.008607 master-2 kubenswrapper[4776]: I1011 10:29:21.008532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"db89193149f068c7107a0d2d00501a02dbd4f1b90fa7b15d8d5b29eb670e0e82"}
Oct 11 10:29:21.008607 master-2 kubenswrapper[4776]: I1011 10:29:21.008547 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"044d1990d5b903504defaa32786c34d22ae4d3d293b3dec6b8a098517966fc1c"}
Oct 11 10:29:21.011414 master-2 kubenswrapper[4776]: I1011 10:29:21.011383 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"5993ee4c50ac66f983c7275e415dab008a25f4d7f1725733f6cd0c4bfccdb402"}
Oct 11 10:29:21.011414 master-2 kubenswrapper[4776]: I1011 10:29:21.011420 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"d441e6d043ca6a5f4f9c4d53fc9f4517672d8d7ce53e4d5876332aa0dff6a002"}
Oct 11 10:29:21.018135 master-2 kubenswrapper[4776]: I1011 10:29:21.017884 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" podStartSLOduration=159.77363488700001 podStartE2EDuration="2m51.01786218s" podCreationTimestamp="2025-10-11 10:26:30 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.316475923 +0000 UTC m=+184.100902632" lastFinishedPulling="2025-10-11 10:29:20.560703216 +0000 UTC m=+195.345129925" observedRunningTime="2025-10-11 10:29:21.016355637 +0000 UTC m=+195.800782366" watchObservedRunningTime="2025-10-11 10:29:21.01786218 +0000 UTC m=+195.802288889"
Oct 11 10:29:21.019123 master-2 kubenswrapper[4776]: I1011 10:29:21.019007 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"ba8f18fdcf52199cdff7e52a954cf2889c4e32a293bd45bea24bae811f7ed5c9"}
Oct 11 10:29:21.023473 master-2 kubenswrapper[4776]: I1011 10:29:21.023428 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d"}
Oct 11 10:29:21.023866 master-2 kubenswrapper[4776]: I1011 10:29:21.023807 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:29:21.025297 master-2 kubenswrapper[4776]: I1011 10:29:21.025256 4776 patch_prober.go:28] interesting pod/marketplace-operator-c4f798dd4-wsmdd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Oct 11 10:29:21.025378 master-2 kubenswrapper[4776]: I1011 10:29:21.025310 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" podUID="7652e0ca-2d18-48c7-80e0-f4a936038377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Oct 11 10:29:21.036183 master-2 kubenswrapper[4776]: I1011 10:29:21.036128 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" podStartSLOduration=144.161916778 podStartE2EDuration="2m35.036109326s" podCreationTimestamp="2025-10-11 10:26:46 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.610040097 +0000 UTC m=+184.394466806" lastFinishedPulling="2025-10-11 10:29:20.484232645 +0000 UTC m=+195.268659354" observedRunningTime="2025-10-11 10:29:21.034312724 +0000 UTC m=+195.818739433" watchObservedRunningTime="2025-10-11 10:29:21.036109326 +0000 UTC m=+195.820536035"
Oct 11 10:29:21.051548 master-2 kubenswrapper[4776]: I1011 10:29:21.050855 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" podStartSLOduration=161.835189561 podStartE2EDuration="2m53.05083739s" podCreationTimestamp="2025-10-11 10:26:28 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.258795923 +0000 UTC m=+184.043222632" lastFinishedPulling="2025-10-11 10:29:20.474443752 +0000 UTC m=+195.258870461" observedRunningTime="2025-10-11 10:29:21.048140892 +0000 UTC m=+195.832567611" watchObservedRunningTime="2025-10-11 10:29:21.05083739 +0000 UTC m=+195.835264099"
Oct 11 10:29:21.078936 master-2 kubenswrapper[4776]: I1011 10:29:21.078858 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" podStartSLOduration=158.005946976 podStartE2EDuration="2m49.078840506s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.402102549 +0000 UTC m=+184.186529258" lastFinishedPulling="2025-10-11 10:29:20.474996079 +0000 UTC m=+195.259422788" observedRunningTime="2025-10-11 10:29:21.077292012 +0000 UTC m=+195.861718731" watchObservedRunningTime="2025-10-11 10:29:21.078840506 +0000 UTC m=+195.863267205"
Oct 11 10:29:21.093886 master-2 kubenswrapper[4776]: I1011 10:29:21.093162 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" podStartSLOduration=131.805381682 podStartE2EDuration="2m23.093148578s" podCreationTimestamp="2025-10-11 10:26:58 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.132860137 +0000 UTC m=+183.917286846" lastFinishedPulling="2025-10-11 10:29:20.420627033 +0000 UTC m=+195.205053742" observedRunningTime="2025-10-11 10:29:21.091174281 +0000 UTC m=+195.875600990" watchObservedRunningTime="2025-10-11 10:29:21.093148578 +0000 UTC m=+195.877575287"
Oct 11 10:29:21.108152 master-2 kubenswrapper[4776]: I1011 10:29:21.107744 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" podStartSLOduration=156.098349308 podStartE2EDuration="2m47.107670426s" podCreationTimestamp="2025-10-11 10:26:34 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.536646834 +0000 UTC m=+184.321073553" lastFinishedPulling="2025-10-11 10:29:20.545967962 +0000 UTC m=+195.330394671" observedRunningTime="2025-10-11 10:29:21.105107132 +0000 UTC m=+195.889533841" watchObservedRunningTime="2025-10-11 10:29:21.107670426 +0000 UTC m=+195.892097135"
Oct 11 10:29:21.120424 master-2 kubenswrapper[4776]: I1011 10:29:21.118176
4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" podStartSLOduration=159.014957197 podStartE2EDuration="2m50.118157659s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.442731219 +0000 UTC m=+184.227157928" lastFinishedPulling="2025-10-11 10:29:20.545931681 +0000 UTC m=+195.330358390" observedRunningTime="2025-10-11 10:29:21.1171598 +0000 UTC m=+195.901586509" watchObservedRunningTime="2025-10-11 10:29:21.118157659 +0000 UTC m=+195.902584368"
Oct 11 10:29:21.152704 master-2 kubenswrapper[4776]: I1011 10:29:21.151892 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" podStartSLOduration=9.151870309 podStartE2EDuration="9.151870309s" podCreationTimestamp="2025-10-11 10:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:21.130143973 +0000 UTC m=+195.914570682" watchObservedRunningTime="2025-10-11 10:29:21.151870309 +0000 UTC m=+195.936297058"
Oct 11 10:29:21.183953 master-2 kubenswrapper[4776]: I1011 10:29:21.183899 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/0.log"
Oct 11 10:29:21.333886 master-1 kubenswrapper[4771]: I1011 10:29:21.333795 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gwwz9"]
Oct 11 10:29:21.334994 master-1 kubenswrapper[4771]: I1011 10:29:21.334814 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.338742 master-1 kubenswrapper[4771]: I1011 10:29:21.338684 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 11 10:29:21.338844 master-1 kubenswrapper[4771]: I1011 10:29:21.338791 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 11 10:29:21.340900 master-1 kubenswrapper[4771]: I1011 10:29:21.340846 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwwz9"]
Oct 11 10:29:21.478954 master-1 kubenswrapper[4771]: I1011 10:29:21.478814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.478954 master-1 kubenswrapper[4771]: I1011 10:29:21.478952 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ptz\" (UniqueName: \"kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.479337 master-1 kubenswrapper[4771]: I1011 10:29:21.479078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.580488 master-1 kubenswrapper[4771]: I1011 10:29:21.580387 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ptz\" (UniqueName: \"kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.580723 master-1 kubenswrapper[4771]: I1011 10:29:21.580522 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.580723 master-1 kubenswrapper[4771]: I1011 10:29:21.580588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.581131 master-1 kubenswrapper[4771]: I1011 10:29:21.581102 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.581499 master-1 kubenswrapper[4771]: I1011 10:29:21.581414 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.610901 master-1 kubenswrapper[4771]: I1011 10:29:21.610754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ptz\" (UniqueName: \"kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz\") pod \"community-operators-gwwz9\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") " pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.659761 master-1 kubenswrapper[4771]: I1011 10:29:21.659664 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:29:21.782536 master-2 kubenswrapper[4776]: I1011 10:29:21.782469 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/fix-audit-permissions/0.log"
Oct 11 10:29:21.939699 master-2 kubenswrapper[4776]: I1011 10:29:21.939627 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:29:21.944064 master-2 kubenswrapper[4776]: I1011 10:29:21.944024 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn"
Oct 11 10:29:21.983389 master-2 kubenswrapper[4776]: I1011 10:29:21.983309 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/oauth-apiserver/0.log"
Oct 11 10:29:22.031441 master-2 kubenswrapper[4776]: I1011 10:29:22.031379 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829"}
Oct 11 10:29:22.032844 master-1 kubenswrapper[4771]: I1011 10:29:22.032639 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:29:22.034169 master-2 kubenswrapper[4776]: I1011 10:29:22.034043 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615"}
Oct 11 10:29:22.036199 master-1 kubenswrapper[4771]: I1011 10:29:22.036151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c084572-a5c9-4787-8a14-b7d6b0810a1b-metrics-certs\") pod \"network-metrics-daemon-fgjvw\" (UID: \"2c084572-a5c9-4787-8a14-b7d6b0810a1b\") " pod="openshift-multus/network-metrics-daemon-fgjvw"
Oct 11 10:29:22.037294 master-2 kubenswrapper[4776]: I1011 10:29:22.037238 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"
Oct 11 10:29:22.051349 master-2 kubenswrapper[4776]: I1011 10:29:22.051271 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podStartSLOduration=115.810472134 podStartE2EDuration="2m7.051258337s" podCreationTimestamp="2025-10-11 10:27:15 +0000 UTC" firstStartedPulling="2025-10-11
10:29:09.179916432 +0000 UTC m=+183.964343141" lastFinishedPulling="2025-10-11 10:29:20.420702635 +0000 UTC m=+195.205129344" observedRunningTime="2025-10-11 10:29:22.050043683 +0000 UTC m=+196.834470382" watchObservedRunningTime="2025-10-11 10:29:22.051258337 +0000 UTC m=+196.835685046" Oct 11 10:29:22.063131 master-1 kubenswrapper[4771]: I1011 10:29:22.063040 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fgjvw" Oct 11 10:29:22.072146 master-2 kubenswrapper[4776]: I1011 10:29:22.072071 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podStartSLOduration=115.912465572 podStartE2EDuration="2m7.072052167s" podCreationTimestamp="2025-10-11 10:27:15 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.261005227 +0000 UTC m=+184.045431936" lastFinishedPulling="2025-10-11 10:29:20.420591832 +0000 UTC m=+195.205018531" observedRunningTime="2025-10-11 10:29:22.06903315 +0000 UTC m=+196.853459869" watchObservedRunningTime="2025-10-11 10:29:22.072052167 +0000 UTC m=+196.856478876" Oct 11 10:29:22.107956 master-1 kubenswrapper[4771]: I1011 10:29:22.107913 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwwz9"] Oct 11 10:29:22.172475 master-2 kubenswrapper[4776]: I1011 10:29:22.172405 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:29:22.179573 master-1 kubenswrapper[4771]: I1011 10:29:22.178739 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-skwvw_004ee387-d0e9-4582-ad14-f571832ebd6e/fix-audit-permissions/0.log" Oct 11 10:29:22.278555 master-1 kubenswrapper[4771]: I1011 10:29:22.278468 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fgjvw"] Oct 11 10:29:22.289927 master-1 kubenswrapper[4771]: W1011 10:29:22.289733 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c084572_a5c9_4787_8a14_b7d6b0810a1b.slice/crio-a3c2aac5746fe934f86d97bbbc78969442131582cbfeb761d1f7e839717b9136 WatchSource:0}: Error finding container a3c2aac5746fe934f86d97bbbc78969442131582cbfeb761d1f7e839717b9136: Status 404 returned error can't find the container with id a3c2aac5746fe934f86d97bbbc78969442131582cbfeb761d1f7e839717b9136 Oct 11 10:29:22.314437 master-1 kubenswrapper[4771]: I1011 10:29:22.314167 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:29:22.315607 master-1 kubenswrapper[4771]: I1011 10:29:22.315568 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.321530 master-1 kubenswrapper[4771]: I1011 10:29:22.321465 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:29:22.335654 master-1 kubenswrapper[4771]: I1011 10:29:22.335578 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerStarted","Data":"94df55f9d42e35f3eb12d9d840811113835d067c33b17a8f7670d61e212cd7f3"} Oct 11 10:29:22.338010 master-1 kubenswrapper[4771]: I1011 10:29:22.337960 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgjvw" event={"ID":"2c084572-a5c9-4787-8a14-b7d6b0810a1b","Type":"ContainerStarted","Data":"a3c2aac5746fe934f86d97bbbc78969442131582cbfeb761d1f7e839717b9136"} Oct 11 10:29:22.385457 master-1 kubenswrapper[4771]: I1011 10:29:22.385376 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-skwvw_004ee387-d0e9-4582-ad14-f571832ebd6e/oauth-apiserver/0.log" Oct 11 10:29:22.436672 master-1 kubenswrapper[4771]: I1011 10:29:22.436587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnhk\" (UniqueName: \"kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.436960 master-1 kubenswrapper[4771]: I1011 10:29:22.436721 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " 
pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.436960 master-1 kubenswrapper[4771]: I1011 10:29:22.436739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.537494 master-1 kubenswrapper[4771]: I1011 10:29:22.537398 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.537494 master-1 kubenswrapper[4771]: I1011 10:29:22.537479 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.537826 master-1 kubenswrapper[4771]: I1011 10:29:22.537597 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnhk\" (UniqueName: \"kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.538563 master-1 kubenswrapper[4771]: I1011 10:29:22.538498 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content\") pod \"redhat-marketplace-xkrc6\" (UID: 
\"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.539094 master-1 kubenswrapper[4771]: I1011 10:29:22.538996 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.569665 master-1 kubenswrapper[4771]: I1011 10:29:22.569620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnhk\" (UniqueName: \"kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk\") pod \"redhat-marketplace-xkrc6\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.585342 master-2 kubenswrapper[4776]: I1011 10:29:22.585307 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/kube-rbac-proxy/0.log" Oct 11 10:29:22.585533 master-2 kubenswrapper[4776]: I1011 10:29:22.585498 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:29:22.589353 master-2 kubenswrapper[4776]: W1011 10:29:22.589306 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b21a7b_2a5a_4511_a2d5_d950752b4bda.slice/crio-5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b WatchSource:0}: Error finding container 5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b: Status 404 returned error can't find the container with id 5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b Oct 11 10:29:22.640411 master-1 kubenswrapper[4771]: I1011 10:29:22.640291 4771 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:22.783216 master-2 kubenswrapper[4776]: I1011 10:29:22.783182 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/cluster-autoscaler-operator/0.log" Oct 11 10:29:22.918491 master-1 kubenswrapper[4771]: I1011 10:29:22.918296 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm"] Oct 11 10:29:22.919105 master-1 kubenswrapper[4771]: I1011 10:29:22.919065 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:22.922843 master-1 kubenswrapper[4771]: I1011 10:29:22.922783 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 10:29:22.922910 master-1 kubenswrapper[4771]: I1011 10:29:22.922888 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 11 10:29:22.923037 master-1 kubenswrapper[4771]: I1011 10:29:22.922787 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 11 10:29:22.929275 master-1 kubenswrapper[4771]: I1011 10:29:22.929235 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm"] Oct 11 10:29:22.937495 master-2 kubenswrapper[4776]: I1011 10:29:22.937429 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:22.938210 master-2 kubenswrapper[4776]: I1011 10:29:22.938174 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:22.943345 master-2 kubenswrapper[4776]: I1011 10:29:22.942895 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 10:29:22.944102 master-2 kubenswrapper[4776]: I1011 10:29:22.944023 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:22.982617 master-2 kubenswrapper[4776]: I1011 10:29:22.982568 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/cluster-baremetal-operator/0.log" Oct 11 10:29:23.039042 master-2 kubenswrapper[4776]: I1011 10:29:23.038994 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b"} Oct 11 10:29:23.043169 master-1 kubenswrapper[4771]: I1011 10:29:23.043051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68bdaf37-fa14-4c86-a697-881df7c9c7f1-tmpfs\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.043169 master-1 kubenswrapper[4771]: I1011 10:29:23.043179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-apiservice-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " 
pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.043694 master-1 kubenswrapper[4771]: I1011 10:29:23.043284 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhgl\" (UniqueName: \"kubernetes.io/projected/68bdaf37-fa14-4c86-a697-881df7c9c7f1-kube-api-access-sdhgl\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.043694 master-1 kubenswrapper[4771]: I1011 10:29:23.043467 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-webhook-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.054604 master-2 kubenswrapper[4776]: I1011 10:29:23.054536 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054628 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054651 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054713 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.089979 master-1 kubenswrapper[4771]: I1011 10:29:23.089898 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:29:23.101195 master-1 kubenswrapper[4771]: W1011 10:29:23.101115 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26005893_ecd8_4acb_8417_71a97ed97cbe.slice/crio-42678277150d23882615afd583505d1ee80fbc936870ab20c76affe3a676bd4c WatchSource:0}: Error finding container 42678277150d23882615afd583505d1ee80fbc936870ab20c76affe3a676bd4c: Status 404 returned error can't find the container with id 42678277150d23882615afd583505d1ee80fbc936870ab20c76affe3a676bd4c Oct 11 10:29:23.144379 master-1 kubenswrapper[4771]: I1011 10:29:23.144293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-apiservice-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.144379 master-1 kubenswrapper[4771]: I1011 
10:29:23.144370 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhgl\" (UniqueName: \"kubernetes.io/projected/68bdaf37-fa14-4c86-a697-881df7c9c7f1-kube-api-access-sdhgl\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.144699 master-1 kubenswrapper[4771]: I1011 10:29:23.144431 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-webhook-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.144699 master-1 kubenswrapper[4771]: I1011 10:29:23.144516 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68bdaf37-fa14-4c86-a697-881df7c9c7f1-tmpfs\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.145305 master-1 kubenswrapper[4771]: I1011 10:29:23.145264 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68bdaf37-fa14-4c86-a697-881df7c9c7f1-tmpfs\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.148307 master-1 kubenswrapper[4771]: I1011 10:29:23.148262 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-apiservice-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " 
pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.148902 master-1 kubenswrapper[4771]: I1011 10:29:23.148859 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68bdaf37-fa14-4c86-a697-881df7c9c7f1-webhook-cert\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.155917 master-2 kubenswrapper[4776]: I1011 10:29:23.155832 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.155983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.156004 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.156148 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.157340 master-2 kubenswrapper[4776]: I1011 10:29:23.157086 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.159638 master-2 kubenswrapper[4776]: I1011 10:29:23.159589 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.160790 master-2 kubenswrapper[4776]: I1011 10:29:23.160728 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.165713 master-1 kubenswrapper[4771]: I1011 10:29:23.165177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhgl\" (UniqueName: \"kubernetes.io/projected/68bdaf37-fa14-4c86-a697-881df7c9c7f1-kube-api-access-sdhgl\") pod \"packageserver-77c85f5c6-6zxmm\" (UID: \"68bdaf37-fa14-4c86-a697-881df7c9c7f1\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.173448 master-2 kubenswrapper[4776]: I1011 10:29:23.173394 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.182764 master-2 kubenswrapper[4776]: I1011 10:29:23.182726 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/baremetal-kube-rbac-proxy/0.log" Oct 11 10:29:23.234304 master-1 kubenswrapper[4771]: I1011 10:29:23.234164 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:23.254268 master-2 kubenswrapper[4776]: I1011 10:29:23.254130 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.345839 master-1 kubenswrapper[4771]: I1011 10:29:23.345039 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerStarted","Data":"42678277150d23882615afd583505d1ee80fbc936870ab20c76affe3a676bd4c"} Oct 11 10:29:23.381340 master-2 kubenswrapper[4776]: I1011 10:29:23.381285 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-84f9cbd5d9-bjntd_7e860f23-9dae-4606-9426-0edec38a332f/control-plane-machine-set-operator/0.log" Oct 11 10:29:23.520929 master-1 kubenswrapper[4771]: I1011 10:29:23.520762 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:29:23.521518 master-1 kubenswrapper[4771]: I1011 10:29:23.521483 4771 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.531988 master-1 kubenswrapper[4771]: I1011 10:29:23.530927 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:29:23.584496 master-2 kubenswrapper[4776]: I1011 10:29:23.584315 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/kube-rbac-proxy/0.log" Oct 11 10:29:23.649335 master-1 kubenswrapper[4771]: I1011 10:29:23.649276 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.649623 master-1 kubenswrapper[4771]: I1011 10:29:23.649350 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmjh\" (UniqueName: \"kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.649623 master-1 kubenswrapper[4771]: I1011 10:29:23.649448 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.650421 master-1 kubenswrapper[4771]: I1011 10:29:23.650374 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm"] 
Oct 11 10:29:23.657428 master-1 kubenswrapper[4771]: W1011 10:29:23.657339 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68bdaf37_fa14_4c86_a697_881df7c9c7f1.slice/crio-4a2c543ad15bd096a76d71db493ea6b2ffca6a47d70401e74955a5901737d9c3 WatchSource:0}: Error finding container 4a2c543ad15bd096a76d71db493ea6b2ffca6a47d70401e74955a5901737d9c3: Status 404 returned error can't find the container with id 4a2c543ad15bd096a76d71db493ea6b2ffca6a47d70401e74955a5901737d9c3 Oct 11 10:29:23.664612 master-2 kubenswrapper[4776]: I1011 10:29:23.664544 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:23.670309 master-2 kubenswrapper[4776]: W1011 10:29:23.670241 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e35cfca_8883_465b_b952_cc91f7f5dd81.slice/crio-42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7 WatchSource:0}: Error finding container 42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7: Status 404 returned error can't find the container with id 42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7 Oct 11 10:29:23.750502 master-1 kubenswrapper[4771]: I1011 10:29:23.750438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.750502 master-1 kubenswrapper[4771]: I1011 10:29:23.750501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmjh\" (UniqueName: \"kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh\") pod 
\"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.750757 master-1 kubenswrapper[4771]: I1011 10:29:23.750529 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.751099 master-1 kubenswrapper[4771]: I1011 10:29:23.751071 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.751466 master-1 kubenswrapper[4771]: I1011 10:29:23.751412 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.773022 master-1 kubenswrapper[4771]: I1011 10:29:23.772923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmjh\" (UniqueName: \"kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh\") pod \"redhat-operators-g8tm6\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.779570 master-2 kubenswrapper[4776]: I1011 10:29:23.779534 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/machine-api-operator/0.log" Oct 11 
10:29:23.839927 master-1 kubenswrapper[4771]: I1011 10:29:23.839855 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:23.980276 master-2 kubenswrapper[4776]: I1011 10:29:23.980214 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/dns-operator/0.log" Oct 11 10:29:24.045577 master-2 kubenswrapper[4776]: I1011 10:29:24.045502 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" event={"ID":"4e35cfca-8883-465b-b952-cc91f7f5dd81","Type":"ContainerStarted","Data":"d331c322f9894436a43d3dc3344c299da66b85b01c2f7d860c8463bca15e8045"} Oct 11 10:29:24.045577 master-2 kubenswrapper[4776]: I1011 10:29:24.045569 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" event={"ID":"4e35cfca-8883-465b-b952-cc91f7f5dd81","Type":"ContainerStarted","Data":"42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7"} Oct 11 10:29:24.046422 master-2 kubenswrapper[4776]: I1011 10:29:24.046388 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:24.047650 master-2 kubenswrapper[4776]: I1011 10:29:24.047583 4776 generic.go:334] "Generic (PLEG): container finished" podID="58aef476-6586-47bb-bf45-dbeccac6271a" containerID="4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c" exitCode=0 Oct 11 10:29:24.047650 master-2 kubenswrapper[4776]: I1011 10:29:24.047634 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" 
event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerDied","Data":"4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c"} Oct 11 10:29:24.048005 master-2 kubenswrapper[4776]: I1011 10:29:24.047969 4776 scope.go:117] "RemoveContainer" containerID="4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c" Oct 11 10:29:24.067336 master-2 kubenswrapper[4776]: I1011 10:29:24.067261 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" podStartSLOduration=2.067242659 podStartE2EDuration="2.067242659s" podCreationTimestamp="2025-10-11 10:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:24.065523409 +0000 UTC m=+198.849950118" watchObservedRunningTime="2025-10-11 10:29:24.067242659 +0000 UTC m=+198.851669368" Oct 11 10:29:24.076808 master-2 kubenswrapper[4776]: I1011 10:29:24.076758 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 10:29:24.077514 master-2 kubenswrapper[4776]: I1011 10:29:24.077485 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.081395 master-2 kubenswrapper[4776]: I1011 10:29:24.081341 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 11 10:29:24.086091 master-2 kubenswrapper[4776]: I1011 10:29:24.086032 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 10:29:24.153809 master-1 kubenswrapper[4771]: I1011 10:29:24.153746 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:29:24.154085 master-1 kubenswrapper[4771]: E1011 10:29:24.153945 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:24.154085 master-1 kubenswrapper[4771]: E1011 10:29:24.154082 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:56.154057698 +0000 UTC m=+228.128284149 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found Oct 11 10:29:24.167991 master-2 kubenswrapper[4776]: I1011 10:29:24.167923 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.168209 master-2 kubenswrapper[4776]: I1011 10:29:24.168021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.168209 master-2 kubenswrapper[4776]: I1011 10:29:24.168163 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.180077 master-2 kubenswrapper[4776]: I1011 10:29:24.180039 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/kube-rbac-proxy/0.log" Oct 11 10:29:24.224098 master-1 
kubenswrapper[4771]: I1011 10:29:24.224017 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:29:24.265962 master-1 kubenswrapper[4771]: W1011 10:29:24.265887 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38131fcf_d407_4ba3_b7bf_471586bab887.slice/crio-fd83c4d331d341ca058f07884e0c753dec2509d54999da528657ce66ee47354c WatchSource:0}: Error finding container fd83c4d331d341ca058f07884e0c753dec2509d54999da528657ce66ee47354c: Status 404 returned error can't find the container with id fd83c4d331d341ca058f07884e0c753dec2509d54999da528657ce66ee47354c Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269106 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269166 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269196 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " 
pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.270575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.351323 master-1 kubenswrapper[4771]: I1011 10:29:24.351239 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" event={"ID":"68bdaf37-fa14-4c86-a697-881df7c9c7f1","Type":"ContainerStarted","Data":"4a2c543ad15bd096a76d71db493ea6b2ffca6a47d70401e74955a5901737d9c3"} Oct 11 10:29:24.353056 master-1 kubenswrapper[4771]: I1011 10:29:24.353007 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerStarted","Data":"fd83c4d331d341ca058f07884e0c753dec2509d54999da528657ce66ee47354c"} Oct 11 10:29:24.367658 master-2 kubenswrapper[4776]: I1011 10:29:24.367615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.371981 master-2 kubenswrapper[4776]: I1011 10:29:24.371948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: 
\"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.378689 master-1 kubenswrapper[4771]: I1011 10:29:24.378555 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rzjcf_b3f49f37-a9e4-4acd-ae7e-d644e8475106/dns/0.log" Oct 11 10:29:24.401922 master-2 kubenswrapper[4776]: I1011 10:29:24.401857 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.429257 master-2 kubenswrapper[4776]: I1011 10:29:24.428980 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:24.579521 master-1 kubenswrapper[4771]: I1011 10:29:24.579470 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rzjcf_b3f49f37-a9e4-4acd-ae7e-d644e8475106/kube-rbac-proxy/0.log" Oct 11 10:29:24.720755 master-2 kubenswrapper[4776]: I1011 10:29:24.720704 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:24.721959 master-2 kubenswrapper[4776]: I1011 10:29:24.721926 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.727990 master-2 kubenswrapper[4776]: I1011 10:29:24.727932 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:24.775319 master-2 kubenswrapper[4776]: I1011 10:29:24.775235 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.775573 master-2 kubenswrapper[4776]: I1011 10:29:24.775400 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.775573 master-2 kubenswrapper[4776]: I1011 10:29:24.775464 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.779219 master-2 kubenswrapper[4776]: I1011 10:29:24.779170 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/dns/0.log" Oct 11 10:29:24.859195 master-2 kubenswrapper[4776]: I1011 10:29:24.859149 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 
10:29:24.865318 master-2 kubenswrapper[4776]: W1011 10:29:24.865268 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1029b995_20ca_45f4_bccb_e83ccee2075f.slice/crio-28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67 WatchSource:0}: Error finding container 28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67: Status 404 returned error can't find the container with id 28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67 Oct 11 10:29:24.876273 master-2 kubenswrapper[4776]: I1011 10:29:24.876214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876392 master-2 kubenswrapper[4776]: I1011 10:29:24.876317 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876392 master-2 kubenswrapper[4776]: I1011 10:29:24.876376 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876765 master-2 kubenswrapper[4776]: I1011 10:29:24.876733 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876814 master-2 kubenswrapper[4776]: I1011 10:29:24.876767 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.893458 master-2 kubenswrapper[4776]: I1011 10:29:24.893390 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.984819 master-2 kubenswrapper[4776]: I1011 10:29:24.984738 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/kube-rbac-proxy/0.log" Oct 11 10:29:25.038743 master-2 kubenswrapper[4776]: I1011 10:29:25.038433 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:25.066107 master-2 kubenswrapper[4776]: I1011 10:29:25.066042 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"aa646dbd8ebc87224fe643fd67ac6d06da9999c61bb42a2952179eca90d79b2f"} Oct 11 10:29:25.066107 master-2 kubenswrapper[4776]: I1011 10:29:25.066101 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"f85a0c2578f74dc1df3cf78c79f321e17f6d94c3d1623ee8f96a6043898f3a8e"} Oct 11 10:29:25.069057 master-2 kubenswrapper[4776]: I1011 10:29:25.068988 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"689bf3206b70fe46ed0e99643b190eb90caf56759cb4bc40e6c7a2e98ecadb6a"} Oct 11 10:29:25.069283 master-2 kubenswrapper[4776]: I1011 10:29:25.069077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67"} Oct 11 10:29:25.079776 master-2 kubenswrapper[4776]: I1011 10:29:25.079655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"da8404df46b28e243f3a617ea5f5889d8f632ca65d9dfb6ee6a8ca2df35f5786"} Oct 11 10:29:25.085833 master-2 kubenswrapper[4776]: I1011 10:29:25.085755 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-w52cn" podStartSLOduration=130.197507344 podStartE2EDuration="2m12.085737086s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:29:22.591491494 +0000 UTC m=+197.375918203" lastFinishedPulling="2025-10-11 10:29:24.479721246 +0000 UTC m=+199.264147945" observedRunningTime="2025-10-11 10:29:25.082979977 +0000 UTC m=+199.867406686" watchObservedRunningTime="2025-10-11 10:29:25.085737086 +0000 UTC m=+199.870163795" Oct 11 10:29:25.175179 master-1 kubenswrapper[4771]: I1011 10:29:25.174703 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5ddb89f76-z5t6x"] Oct 11 10:29:25.177590 master-1 kubenswrapper[4771]: I1011 10:29:25.175578 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.177590 master-1 kubenswrapper[4771]: I1011 10:29:25.175758 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps"] Oct 11 10:29:25.177590 master-1 kubenswrapper[4771]: I1011 10:29:25.176331 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:25.178427 master-1 kubenswrapper[4771]: I1011 10:29:25.178380 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 11 10:29:25.179427 master-1 kubenswrapper[4771]: I1011 10:29:25.179349 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-fjwjw_2919a957-a46f-4e96-b42e-3ba3c537e98e/dns-node-resolver/0.log" Oct 11 10:29:25.179534 master-1 kubenswrapper[4771]: I1011 10:29:25.179496 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 11 10:29:25.179582 master-1 kubenswrapper[4771]: I1011 10:29:25.179494 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 11 10:29:25.179582 master-1 kubenswrapper[4771]: I1011 10:29:25.179548 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 11 10:29:25.183473 master-1 kubenswrapper[4771]: I1011 10:29:25.179680 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 11 10:29:25.183473 master-1 kubenswrapper[4771]: I1011 10:29:25.179719 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 11 10:29:25.183473 master-1 kubenswrapper[4771]: I1011 10:29:25.179775 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 11 10:29:25.183473 master-1 kubenswrapper[4771]: I1011 10:29:25.183261 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-967c7bb47-djx82"] Oct 11 10:29:25.189015 master-1 kubenswrapper[4771]: I1011 10:29:25.184110 4771 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" Oct 11 10:29:25.189015 master-1 kubenswrapper[4771]: I1011 10:29:25.186320 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps"] Oct 11 10:29:25.189270 master-1 kubenswrapper[4771]: I1011 10:29:25.189115 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-967c7bb47-djx82"] Oct 11 10:29:25.270155 master-1 kubenswrapper[4771]: I1011 10:29:25.270069 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-metrics-certs\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.270349 master-1 kubenswrapper[4771]: I1011 10:29:25.270199 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xh9q\" (UniqueName: \"kubernetes.io/projected/abd3363b-056a-4468-b13b-3e353929307d-kube-api-access-5xh9q\") pod \"network-check-source-967c7bb47-djx82\" (UID: \"abd3363b-056a-4468-b13b-3e353929307d\") " pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" Oct 11 10:29:25.270349 master-1 kubenswrapper[4771]: I1011 10:29:25.270244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2631acfc-dace-435d-8ea9-65d023c13ab6-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-67qps\" (UID: \"2631acfc-dace-435d-8ea9-65d023c13ab6\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:25.270349 master-1 kubenswrapper[4771]: I1011 10:29:25.270284 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-default-certificate\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.270490 master-1 kubenswrapper[4771]: I1011 10:29:25.270385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04cd4a19-2532-43d1-9144-1f59d9e52d19-service-ca-bundle\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.270521 master-1 kubenswrapper[4771]: I1011 10:29:25.270484 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gdbm\" (UniqueName: \"kubernetes.io/projected/04cd4a19-2532-43d1-9144-1f59d9e52d19-kube-api-access-7gdbm\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.270548 master-1 kubenswrapper[4771]: I1011 10:29:25.270523 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-stats-auth\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.360413 master-1 kubenswrapper[4771]: I1011 10:29:25.360283 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgjvw" event={"ID":"2c084572-a5c9-4787-8a14-b7d6b0810a1b","Type":"ContainerStarted","Data":"fccb120465597209818f91c776fbabe2d81f28a21944709ed07033cb4785774c"} 
Oct 11 10:29:25.360413 master-1 kubenswrapper[4771]: I1011 10:29:25.360425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fgjvw" event={"ID":"2c084572-a5c9-4787-8a14-b7d6b0810a1b","Type":"ContainerStarted","Data":"4ad7978bcb64b587f8b7811ba06ddea362e2663fe0567625b02089ee562de4a3"} Oct 11 10:29:25.372139 master-1 kubenswrapper[4771]: I1011 10:29:25.372048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-stats-auth\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.372331 master-1 kubenswrapper[4771]: I1011 10:29:25.372179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-metrics-certs\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.372331 master-1 kubenswrapper[4771]: I1011 10:29:25.372297 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh9q\" (UniqueName: \"kubernetes.io/projected/abd3363b-056a-4468-b13b-3e353929307d-kube-api-access-5xh9q\") pod \"network-check-source-967c7bb47-djx82\" (UID: \"abd3363b-056a-4468-b13b-3e353929307d\") " pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" Oct 11 10:29:25.372476 master-1 kubenswrapper[4771]: I1011 10:29:25.372348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2631acfc-dace-435d-8ea9-65d023c13ab6-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-67qps\" (UID: \"2631acfc-dace-435d-8ea9-65d023c13ab6\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:25.372528 master-1 kubenswrapper[4771]: I1011 10:29:25.372503 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-default-certificate\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.372643 master-1 kubenswrapper[4771]: I1011 10:29:25.372597 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04cd4a19-2532-43d1-9144-1f59d9e52d19-service-ca-bundle\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.372742 master-1 kubenswrapper[4771]: I1011 10:29:25.372705 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gdbm\" (UniqueName: \"kubernetes.io/projected/04cd4a19-2532-43d1-9144-1f59d9e52d19-kube-api-access-7gdbm\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.373776 master-1 kubenswrapper[4771]: I1011 10:29:25.373734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04cd4a19-2532-43d1-9144-1f59d9e52d19-service-ca-bundle\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.376750 master-1 kubenswrapper[4771]: I1011 10:29:25.376708 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-default-certificate\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.377155 master-1 kubenswrapper[4771]: I1011 10:29:25.377111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2631acfc-dace-435d-8ea9-65d023c13ab6-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-67qps\" (UID: \"2631acfc-dace-435d-8ea9-65d023c13ab6\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:25.378023 master-1 kubenswrapper[4771]: I1011 10:29:25.377974 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fgjvw" podStartSLOduration=130.370612377 podStartE2EDuration="2m12.377962316s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:29:22.293522286 +0000 UTC m=+194.267748727" lastFinishedPulling="2025-10-11 10:29:24.300872205 +0000 UTC m=+196.275098666" observedRunningTime="2025-10-11 10:29:25.376387962 +0000 UTC m=+197.350614483" watchObservedRunningTime="2025-10-11 10:29:25.377962316 +0000 UTC m=+197.352188757" Oct 11 10:29:25.379347 master-1 kubenswrapper[4771]: I1011 10:29:25.379308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-stats-auth\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.380727 master-2 kubenswrapper[4776]: I1011 10:29:25.380546 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z9trl_0550ab10-d45d-4526-8551-c1ce0b232bbc/dns-node-resolver/0.log" Oct 11 10:29:25.383410 master-1 
kubenswrapper[4771]: I1011 10:29:25.383369 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04cd4a19-2532-43d1-9144-1f59d9e52d19-metrics-certs\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.403479 master-1 kubenswrapper[4771]: I1011 10:29:25.403410 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh9q\" (UniqueName: \"kubernetes.io/projected/abd3363b-056a-4468-b13b-3e353929307d-kube-api-access-5xh9q\") pod \"network-check-source-967c7bb47-djx82\" (UID: \"abd3363b-056a-4468-b13b-3e353929307d\") " pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" Oct 11 10:29:25.407416 master-1 kubenswrapper[4771]: I1011 10:29:25.407334 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gdbm\" (UniqueName: \"kubernetes.io/projected/04cd4a19-2532-43d1-9144-1f59d9e52d19-kube-api-access-7gdbm\") pod \"router-default-5ddb89f76-z5t6x\" (UID: \"04cd4a19-2532-43d1-9144-1f59d9e52d19\") " pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.473731 master-2 kubenswrapper[4776]: I1011 10:29:25.473634 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:25.477433 master-2 kubenswrapper[4776]: W1011 10:29:25.477392 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444ea5b2_c9dc_4685_9f66_2273b30d9045.slice/crio-3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5 WatchSource:0}: Error finding container 3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5: Status 404 returned error can't find the container with id 3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5 Oct 11 10:29:25.493967 master-1 
kubenswrapper[4771]: I1011 10:29:25.493771 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:25.506480 master-1 kubenswrapper[4771]: I1011 10:29:25.506411 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:25.513386 master-1 kubenswrapper[4771]: I1011 10:29:25.513300 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" Oct 11 10:29:25.541096 master-1 kubenswrapper[4771]: W1011 10:29:25.541034 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04cd4a19_2532_43d1_9144_1f59d9e52d19.slice/crio-ffb197fb6a875de889d24d5f079ca17b067a86a2db866b6b6480cc1cd47ddea6 WatchSource:0}: Error finding container ffb197fb6a875de889d24d5f079ca17b067a86a2db866b6b6480cc1cd47ddea6: Status 404 returned error can't find the container with id ffb197fb6a875de889d24d5f079ca17b067a86a2db866b6b6480cc1cd47ddea6 Oct 11 10:29:25.585586 master-2 kubenswrapper[4776]: I1011 10:29:25.585539 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:29:25.780033 master-1 kubenswrapper[4771]: I1011 10:29:25.779908 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-1_826e1279-bc0d-426e-b6e0-5108268f340e/installer/0.log" Oct 11 10:29:25.900290 master-1 kubenswrapper[4771]: I1011 10:29:25.900238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps"] Oct 11 10:29:25.906644 master-1 kubenswrapper[4771]: W1011 10:29:25.906597 4771 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2631acfc_dace_435d_8ea9_65d023c13ab6.slice/crio-7903dcdfa0035252353a54ffe91b977075a9669460965531b40fe22deec61a78 WatchSource:0}: Error finding container 7903dcdfa0035252353a54ffe91b977075a9669460965531b40fe22deec61a78: Status 404 returned error can't find the container with id 7903dcdfa0035252353a54ffe91b977075a9669460965531b40fe22deec61a78 Oct 11 10:29:25.962607 master-1 kubenswrapper[4771]: I1011 10:29:25.962544 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-967c7bb47-djx82"] Oct 11 10:29:25.968952 master-1 kubenswrapper[4771]: W1011 10:29:25.968906 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd3363b_056a_4468_b13b_3e353929307d.slice/crio-ca29caeaf1e6bc39a022ff46244647d6149803f865e5c9c560c81008d170c904 WatchSource:0}: Error finding container ca29caeaf1e6bc39a022ff46244647d6149803f865e5c9c560c81008d170c904: Status 404 returned error can't find the container with id ca29caeaf1e6bc39a022ff46244647d6149803f865e5c9c560c81008d170c904 Oct 11 10:29:25.982344 master-2 kubenswrapper[4776]: I1011 10:29:25.982277 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088122 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" exitCode=0 Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088182 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" 
event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82"} Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088206 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5"} Oct 11 10:29:26.091651 master-2 kubenswrapper[4776]: I1011 10:29:26.091596 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"fd0627f99e898fedf28d7abd206c16618bb59f65133b2c9236f2d54f3cb2f4c6"} Oct 11 10:29:26.134391 master-2 kubenswrapper[4776]: I1011 10:29:26.134301 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" podStartSLOduration=2.134282595 podStartE2EDuration="2.134282595s" podCreationTimestamp="2025-10-11 10:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:26.131644393 +0000 UTC m=+200.916071122" watchObservedRunningTime="2025-10-11 10:29:26.134282595 +0000 UTC m=+200.918709314" Oct 11 10:29:26.179588 master-2 kubenswrapper[4776]: I1011 10:29:26.179491 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/kube-rbac-proxy/0.log" Oct 11 10:29:26.372533 master-1 kubenswrapper[4771]: I1011 10:29:26.372472 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" 
event={"ID":"abd3363b-056a-4468-b13b-3e353929307d","Type":"ContainerStarted","Data":"dd0f391c28ea252aea31150e5c7ca0630e5b84d86fe5527ce669a76f5c8eb413"} Oct 11 10:29:26.373117 master-1 kubenswrapper[4771]: I1011 10:29:26.372544 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" event={"ID":"abd3363b-056a-4468-b13b-3e353929307d","Type":"ContainerStarted","Data":"ca29caeaf1e6bc39a022ff46244647d6149803f865e5c9c560c81008d170c904"} Oct 11 10:29:26.374406 master-1 kubenswrapper[4771]: I1011 10:29:26.374377 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" event={"ID":"2631acfc-dace-435d-8ea9-65d023c13ab6","Type":"ContainerStarted","Data":"7903dcdfa0035252353a54ffe91b977075a9669460965531b40fe22deec61a78"} Oct 11 10:29:26.375878 master-1 kubenswrapper[4771]: I1011 10:29:26.375807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerStarted","Data":"ffb197fb6a875de889d24d5f079ca17b067a86a2db866b6b6480cc1cd47ddea6"} Oct 11 10:29:26.397130 master-1 kubenswrapper[4771]: I1011 10:29:26.397052 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-967c7bb47-djx82" podStartSLOduration=119.397033138 podStartE2EDuration="1m59.397033138s" podCreationTimestamp="2025-10-11 10:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:26.395812574 +0000 UTC m=+198.370039035" watchObservedRunningTime="2025-10-11 10:29:26.397033138 +0000 UTC m=+198.371259579" Oct 11 10:29:26.583459 master-1 kubenswrapper[4771]: I1011 10:29:26.583378 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:29:26.583739 master-1 kubenswrapper[4771]: E1011 10:29:26.583474 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:26.583739 master-1 kubenswrapper[4771]: E1011 10:29:26.583589 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:58.58356456 +0000 UTC m=+230.557791041 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found Oct 11 10:29:26.785207 master-2 kubenswrapper[4776]: I1011 10:29:26.785150 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/0.log" Oct 11 10:29:26.979870 master-1 kubenswrapper[4771]: I1011 10:29:26.979819 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_6534d9db-a553-4c39-bf4a-014a359ee336/installer/0.log" Oct 11 10:29:27.184329 master-2 kubenswrapper[4776]: I1011 10:29:27.184272 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/0.log" Oct 11 10:29:27.381382 master-1 kubenswrapper[4771]: I1011 
10:29:27.381313 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-1_7662f87a-13ba-439c-b386-05e68284803c/installer/0.log" Oct 11 10:29:27.584060 master-2 kubenswrapper[4776]: I1011 10:29:27.583913 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/1.log" Oct 11 10:29:27.637476 master-2 kubenswrapper[4776]: I1011 10:29:27.637389 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:27.638160 master-2 kubenswrapper[4776]: I1011 10:29:27.638138 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.641053 master-2 kubenswrapper[4776]: I1011 10:29:27.641007 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 11 10:29:27.644535 master-2 kubenswrapper[4776]: I1011 10:29:27.644256 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5ddb89f76-57kcw"] Oct 11 10:29:27.645877 master-2 kubenswrapper[4776]: I1011 10:29:27.645244 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.648201 master-2 kubenswrapper[4776]: I1011 10:29:27.647955 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 11 10:29:27.648455 master-2 kubenswrapper[4776]: I1011 10:29:27.648395 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 11 10:29:27.648455 master-2 kubenswrapper[4776]: I1011 10:29:27.648442 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.648576 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.648612 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.649001 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.649014 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 11 10:29:27.720604 master-2 kubenswrapper[4776]: I1011 10:29:27.720538 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720604 master-2 kubenswrapper[4776]: I1011 10:29:27.720598 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720659 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720732 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720788 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720839 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod 
\"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821528 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821571 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.822973 master-2 kubenswrapper[4776]: I1011 10:29:27.822913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.825755 master-2 kubenswrapper[4776]: I1011 10:29:27.825718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.826107 master-2 kubenswrapper[4776]: I1011 10:29:27.826075 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.826760 master-2 
kubenswrapper[4776]: I1011 10:29:27.826722 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.833054 master-2 kubenswrapper[4776]: I1011 10:29:27.833012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.845320 master-2 kubenswrapper[4776]: I1011 10:29:27.845256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.934645 master-1 kubenswrapper[4771]: I1011 10:29:27.934560 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:29:27.936646 master-1 kubenswrapper[4771]: I1011 10:29:27.936373 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:29:27.957078 master-2 kubenswrapper[4776]: I1011 10:29:27.957016 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.966211 master-2 kubenswrapper[4776]: I1011 10:29:27.966173 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.973581 master-1 kubenswrapper[4771]: I1011 10:29:27.973547 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:29:28.006936 master-2 kubenswrapper[4776]: W1011 10:29:28.006900 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8cd90ff_e70c_4837_82c4_0fec67a8a51b.slice/crio-7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9 WatchSource:0}: Error finding container 7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9: Status 404 returned error can't find the container with id 7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9 Oct 11 10:29:28.098220 master-1 kubenswrapper[4771]: I1011 10:29:28.098098 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.098220 master-1 kubenswrapper[4771]: I1011 10:29:28.098163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.098220 master-1 kubenswrapper[4771]: I1011 10:29:28.098184 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.098721 master-1 kubenswrapper[4771]: I1011 
10:29:28.098259 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.098721 master-1 kubenswrapper[4771]: I1011 10:29:28.098287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.098721 master-1 kubenswrapper[4771]: I1011 10:29:28.098345 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.103223 master-2 kubenswrapper[4776]: I1011 10:29:28.103102 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9"} Oct 11 10:29:28.199346 master-1 kubenswrapper[4771]: I1011 10:29:28.199157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199346 master-1 kubenswrapper[4771]: I1011 10:29:28.199255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199346 master-1 kubenswrapper[4771]: I1011 10:29:28.199286 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199346 master-1 kubenswrapper[4771]: I1011 10:29:28.199336 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 master-1 kubenswrapper[4771]: I1011 10:29:28.199384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 master-1 kubenswrapper[4771]: I1011 10:29:28.199415 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 master-1 kubenswrapper[4771]: I1011 10:29:28.199449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 
master-1 kubenswrapper[4771]: I1011 10:29:28.199517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 master-1 kubenswrapper[4771]: I1011 10:29:28.199594 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199661 master-1 kubenswrapper[4771]: I1011 10:29:28.199644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199823 master-1 kubenswrapper[4771]: I1011 10:29:28.199679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.199823 master-1 kubenswrapper[4771]: I1011 10:29:28.199746 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.271754 master-1 kubenswrapper[4771]: I1011 10:29:28.271658 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:29:28.384749 master-1 kubenswrapper[4771]: I1011 10:29:28.384693 4771 generic.go:334] "Generic (PLEG): container finished" podID="826e1279-bc0d-426e-b6e0-5108268f340e" containerID="9a616ae6ac6ffcbc27ae54a54aec1c65046926d3773ee73ab8bfdedb75371f06" exitCode=0 Oct 11 10:29:28.385406 master-1 kubenswrapper[4771]: I1011 10:29:28.384749 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"826e1279-bc0d-426e-b6e0-5108268f340e","Type":"ContainerDied","Data":"9a616ae6ac6ffcbc27ae54a54aec1c65046926d3773ee73ab8bfdedb75371f06"} Oct 11 10:29:28.386769 master-2 kubenswrapper[4776]: I1011 10:29:28.381055 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:28.387442 master-2 kubenswrapper[4776]: W1011 10:29:28.387187 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec2ac05_04f0_4170_9423_b405676995ee.slice/crio-7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358 WatchSource:0}: Error finding container 7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358: Status 404 returned error can't find the container with id 7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358 Oct 11 10:29:28.983060 master-2 kubenswrapper[4776]: I1011 10:29:28.983018 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/0.log" Oct 11 10:29:29.109400 master-2 kubenswrapper[4776]: I1011 10:29:29.109339 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" 
event={"ID":"2ec2ac05-04f0-4170-9423-b405676995ee","Type":"ContainerStarted","Data":"7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358"} Oct 11 10:29:29.178204 master-2 kubenswrapper[4776]: I1011 10:29:29.178150 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-catalogd-manifests/0.log" Oct 11 10:29:29.379770 master-2 kubenswrapper[4776]: I1011 10:29:29.379656 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-operator-controller-manifests/0.log" Oct 11 10:29:29.519313 master-2 kubenswrapper[4776]: I1011 10:29:29.518362 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tpjwk"] Oct 11 10:29:29.519863 master-2 kubenswrapper[4776]: I1011 10:29:29.519427 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.522682 master-2 kubenswrapper[4776]: I1011 10:29:29.522637 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 11 10:29:29.522915 master-2 kubenswrapper[4776]: I1011 10:29:29.522853 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:29:29.529431 master-1 kubenswrapper[4771]: I1011 10:29:29.526779 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h7gnk"] Oct 11 10:29:29.529431 master-1 kubenswrapper[4771]: I1011 10:29:29.527324 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.532250 master-1 kubenswrapper[4771]: I1011 10:29:29.532169 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:29:29.532729 master-1 kubenswrapper[4771]: I1011 10:29:29.532679 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 11 10:29:29.584930 master-2 kubenswrapper[4776]: I1011 10:29:29.584894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/1.log" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645133 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645157 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: 
\"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.716347 master-1 kubenswrapper[4771]: I1011 10:29:29.716176 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpg7\" (UniqueName: \"kubernetes.io/projected/6d20faa4-e5eb-4766-b4f5-30e491d1820c-kube-api-access-sgpg7\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.716347 master-1 kubenswrapper[4771]: I1011 10:29:29.716290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-node-bootstrap-token\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.716622 master-1 kubenswrapper[4771]: I1011 10:29:29.716400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-certs\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.745922 master-2 kubenswrapper[4776]: I1011 10:29:29.745870 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.745922 master-2 kubenswrapper[4776]: I1011 10:29:29.745922 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.746185 master-2 kubenswrapper[4776]: I1011 10:29:29.745993 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.749790 master-2 kubenswrapper[4776]: I1011 10:29:29.749766 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.749872 master-2 kubenswrapper[4776]: I1011 10:29:29.749806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.767238 master-2 kubenswrapper[4776]: I1011 10:29:29.767185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.787195 
master-2 kubenswrapper[4776]: I1011 10:29:29.787152 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/0.log" Oct 11 10:29:29.817570 master-1 kubenswrapper[4771]: I1011 10:29:29.817470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-certs\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.817695 master-1 kubenswrapper[4771]: I1011 10:29:29.817570 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpg7\" (UniqueName: \"kubernetes.io/projected/6d20faa4-e5eb-4766-b4f5-30e491d1820c-kube-api-access-sgpg7\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.817695 master-1 kubenswrapper[4771]: I1011 10:29:29.817644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-node-bootstrap-token\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.822445 master-1 kubenswrapper[4771]: I1011 10:29:29.822381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-node-bootstrap-token\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.822579 master-1 
kubenswrapper[4771]: I1011 10:29:29.822463 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d20faa4-e5eb-4766-b4f5-30e491d1820c-certs\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.834275 master-2 kubenswrapper[4776]: I1011 10:29:29.834210 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.834844 master-1 kubenswrapper[4771]: I1011 10:29:29.834757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpg7\" (UniqueName: \"kubernetes.io/projected/6d20faa4-e5eb-4766-b4f5-30e491d1820c-kube-api-access-sgpg7\") pod \"machine-config-server-h7gnk\" (UID: \"6d20faa4-e5eb-4766-b4f5-30e491d1820c\") " pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.846658 master-1 kubenswrapper[4771]: I1011 10:29:29.846585 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h7gnk" Oct 11 10:29:29.877883 master-2 kubenswrapper[4776]: W1011 10:29:29.877839 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3594e65d_a9cb_4d12_b4cd_88229b18abdc.slice/crio-a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864 WatchSource:0}: Error finding container a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864: Status 404 returned error can't find the container with id a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864 Oct 11 10:29:29.943990 master-1 kubenswrapper[4771]: W1011 10:29:29.943916 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5268b2f2ae2aef0c7f2e7a6e651ed702.slice/crio-da72a8df71c43223b153f8d9058eb065866400eec698b95e777f1a50e9811194 WatchSource:0}: Error finding container da72a8df71c43223b153f8d9058eb065866400eec698b95e777f1a50e9811194: Status 404 returned error can't find the container with id da72a8df71c43223b153f8d9058eb065866400eec698b95e777f1a50e9811194 Oct 11 10:29:29.951115 master-1 kubenswrapper[4771]: W1011 10:29:29.951056 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d20faa4_e5eb_4766_b4f5_30e491d1820c.slice/crio-d9139183c94d28fcf1a4d7ca8fca07b3415c016dbb4dcf57c4b77203784c50aa WatchSource:0}: Error finding container d9139183c94d28fcf1a4d7ca8fca07b3415c016dbb4dcf57c4b77203784c50aa: Status 404 returned error can't find the container with id d9139183c94d28fcf1a4d7ca8fca07b3415c016dbb4dcf57c4b77203784c50aa Oct 11 10:29:29.969079 master-1 kubenswrapper[4771]: I1011 10:29:29.968963 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 11 10:29:29.978286 master-1 kubenswrapper[4771]: I1011 10:29:29.978225 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/fix-audit-permissions/0.log" Oct 11 10:29:30.117158 master-2 kubenswrapper[4776]: I1011 10:29:30.117108 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} Oct 11 10:29:30.120028 master-2 kubenswrapper[4776]: I1011 10:29:30.119179 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tpjwk" event={"ID":"3594e65d-a9cb-4d12-b4cd-88229b18abdc","Type":"ContainerStarted","Data":"1df183bde159ebd53360d83dbc6dec8b3f26092aec2e036570774307ae38d932"} Oct 11 10:29:30.120028 master-2 kubenswrapper[4776]: I1011 10:29:30.119235 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tpjwk" event={"ID":"3594e65d-a9cb-4d12-b4cd-88229b18abdc","Type":"ContainerStarted","Data":"a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864"} Oct 11 10:29:30.121265 master-1 kubenswrapper[4771]: I1011 10:29:30.121115 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access\") pod \"826e1279-bc0d-426e-b6e0-5108268f340e\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " Oct 11 10:29:30.121265 master-1 kubenswrapper[4771]: I1011 10:29:30.121192 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock\") pod 
\"826e1279-bc0d-426e-b6e0-5108268f340e\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " Oct 11 10:29:30.121265 master-1 kubenswrapper[4771]: I1011 10:29:30.121230 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir\") pod \"826e1279-bc0d-426e-b6e0-5108268f340e\" (UID: \"826e1279-bc0d-426e-b6e0-5108268f340e\") " Oct 11 10:29:30.121588 master-1 kubenswrapper[4771]: I1011 10:29:30.121368 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock" (OuterVolumeSpecName: "var-lock") pod "826e1279-bc0d-426e-b6e0-5108268f340e" (UID: "826e1279-bc0d-426e-b6e0-5108268f340e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:30.121588 master-1 kubenswrapper[4771]: I1011 10:29:30.121453 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "826e1279-bc0d-426e-b6e0-5108268f340e" (UID: "826e1279-bc0d-426e-b6e0-5108268f340e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:30.123761 master-2 kubenswrapper[4776]: I1011 10:29:30.120865 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" event={"ID":"2ec2ac05-04f0-4170-9423-b405676995ee","Type":"ContainerStarted","Data":"f930b9c6be9d1302318e869655554571eb051f5a4e26ab4627da1ce4a1e858d8"} Oct 11 10:29:30.123761 master-2 kubenswrapper[4776]: I1011 10:29:30.121164 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:30.126016 master-1 kubenswrapper[4771]: I1011 10:29:30.125941 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "826e1279-bc0d-426e-b6e0-5108268f340e" (UID: "826e1279-bc0d-426e-b6e0-5108268f340e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:29:30.126360 master-2 kubenswrapper[4776]: I1011 10:29:30.126224 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:30.138720 master-2 kubenswrapper[4776]: I1011 10:29:30.138609 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podStartSLOduration=41.278663674 podStartE2EDuration="43.138590567s" podCreationTimestamp="2025-10-11 10:28:47 +0000 UTC" firstStartedPulling="2025-10-11 10:29:28.008420155 +0000 UTC m=+202.792846864" lastFinishedPulling="2025-10-11 10:29:29.868347048 +0000 UTC m=+204.652773757" observedRunningTime="2025-10-11 10:29:30.137854387 +0000 UTC m=+204.922281136" watchObservedRunningTime="2025-10-11 10:29:30.138590567 +0000 UTC m=+204.923017276" Oct 11 10:29:30.160032 master-2 kubenswrapper[4776]: I1011 10:29:30.159397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" podStartSLOduration=7.677467215 podStartE2EDuration="9.159365621s" podCreationTimestamp="2025-10-11 10:29:21 +0000 UTC" firstStartedPulling="2025-10-11 10:29:28.390374108 +0000 UTC m=+203.174800827" lastFinishedPulling="2025-10-11 10:29:29.872272524 +0000 UTC m=+204.656699233" observedRunningTime="2025-10-11 10:29:30.159178055 +0000 UTC m=+204.943604814" watchObservedRunningTime="2025-10-11 10:29:30.159365621 +0000 UTC m=+204.943792380" Oct 11 10:29:30.183791 master-1 kubenswrapper[4771]: I1011 10:29:30.183665 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/openshift-apiserver/0.log" Oct 11 10:29:30.195663 master-2 kubenswrapper[4776]: I1011 10:29:30.195593 4776 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-server-tpjwk" podStartSLOduration=1.195574384 podStartE2EDuration="1.195574384s" podCreationTimestamp="2025-10-11 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:30.194248368 +0000 UTC m=+204.978675077" watchObservedRunningTime="2025-10-11 10:29:30.195574384 +0000 UTC m=+204.980001093" Oct 11 10:29:30.223183 master-1 kubenswrapper[4771]: I1011 10:29:30.222970 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/826e1279-bc0d-426e-b6e0-5108268f340e-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:30.223183 master-1 kubenswrapper[4771]: I1011 10:29:30.223018 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:30.223183 master-1 kubenswrapper[4771]: I1011 10:29:30.223032 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/826e1279-bc0d-426e-b6e0-5108268f340e-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:30.381064 master-1 kubenswrapper[4771]: I1011 10:29:30.381006 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/openshift-apiserver-check-endpoints/0.log" Oct 11 10:29:30.393240 master-1 kubenswrapper[4771]: I1011 10:29:30.393204 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 11 10:29:30.393423 master-1 kubenswrapper[4771]: I1011 10:29:30.393188 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"826e1279-bc0d-426e-b6e0-5108268f340e","Type":"ContainerDied","Data":"5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9"} Oct 11 10:29:30.393488 master-1 kubenswrapper[4771]: I1011 10:29:30.393436 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5363fbd5b12a9230e5f6b1dd57b8fb070fa12eb536a76d7bdfd11f7b2167cad9" Oct 11 10:29:30.395586 master-1 kubenswrapper[4771]: I1011 10:29:30.395547 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"da72a8df71c43223b153f8d9058eb065866400eec698b95e777f1a50e9811194"} Oct 11 10:29:30.397334 master-1 kubenswrapper[4771]: I1011 10:29:30.397291 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h7gnk" event={"ID":"6d20faa4-e5eb-4766-b4f5-30e491d1820c","Type":"ContainerStarted","Data":"faf96c0ce971191959bb91c959b4ff7b528d62bb5ffc85249401a032257b3608"} Oct 11 10:29:30.397425 master-1 kubenswrapper[4771]: I1011 10:29:30.397373 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h7gnk" event={"ID":"6d20faa4-e5eb-4766-b4f5-30e491d1820c","Type":"ContainerStarted","Data":"d9139183c94d28fcf1a4d7ca8fca07b3415c016dbb4dcf57c4b77203784c50aa"} Oct 11 10:29:30.399053 master-1 kubenswrapper[4771]: I1011 10:29:30.399010 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b7d1d62-0062-47cd-a963-63893777198e" containerID="87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865" exitCode=0 Oct 11 10:29:30.399127 master-1 kubenswrapper[4771]: I1011 10:29:30.399082 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerDied","Data":"87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865"} Oct 11 10:29:30.400833 master-1 kubenswrapper[4771]: I1011 10:29:30.400587 4771 generic.go:334] "Generic (PLEG): container finished" podID="38131fcf-d407-4ba3-b7bf-471586bab887" containerID="46478dfa370c61d5e583543ca4a34b66afd1e95ecf434515eb16283cfe8a52de" exitCode=0 Oct 11 10:29:30.400833 master-1 kubenswrapper[4771]: I1011 10:29:30.400611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerDied","Data":"46478dfa370c61d5e583543ca4a34b66afd1e95ecf434515eb16283cfe8a52de"} Oct 11 10:29:30.414396 master-1 kubenswrapper[4771]: I1011 10:29:30.414300 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h7gnk" podStartSLOduration=1.41427744 podStartE2EDuration="1.41427744s" podCreationTimestamp="2025-10-11 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:30.413294153 +0000 UTC m=+202.387520594" watchObservedRunningTime="2025-10-11 10:29:30.41427744 +0000 UTC m=+202.388503891" Oct 11 10:29:30.578440 master-2 kubenswrapper[4776]: I1011 10:29:30.578332 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/fix-audit-permissions/0.log" Oct 11 10:29:30.781849 master-2 kubenswrapper[4776]: I1011 10:29:30.781780 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver/0.log" Oct 11 10:29:30.967833 master-2 kubenswrapper[4776]: I1011 
10:29:30.967788 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: I1011 10:29:30.970336 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:30.970510 master-2 kubenswrapper[4776]: I1011 10:29:30.970383 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:30.981634 master-2 kubenswrapper[4776]: I1011 10:29:30.980715 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver-check-endpoints/0.log" Oct 11 10:29:31.180512 master-2 kubenswrapper[4776]: I1011 10:29:31.180467 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:29:31.385614 master-2 kubenswrapper[4776]: I1011 10:29:31.385496 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:31.406198 master-1 kubenswrapper[4771]: I1011 10:29:31.406139 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" event={"ID":"68bdaf37-fa14-4c86-a697-881df7c9c7f1","Type":"ContainerStarted","Data":"a4abbedd79e929c2bd4cf2fc94f8f962fc6824e374c69e8840181904d932e12c"} Oct 11 10:29:31.406792 master-1 kubenswrapper[4771]: I1011 10:29:31.406416 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:31.408934 master-1 kubenswrapper[4771]: I1011 10:29:31.408850 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerStarted","Data":"d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604"} Oct 11 10:29:31.410794 master-1 kubenswrapper[4771]: I1011 10:29:31.410775 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" Oct 11 10:29:31.412384 master-1 kubenswrapper[4771]: I1011 10:29:31.412343 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" event={"ID":"2631acfc-dace-435d-8ea9-65d023c13ab6","Type":"ContainerStarted","Data":"cdd4374bd67def2eb7eddbd9f2e5b7b6ce62ae68a06fcef83eba60f54d368831"} Oct 11 10:29:31.412612 master-1 kubenswrapper[4771]: I1011 10:29:31.412576 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:31.414166 master-1 kubenswrapper[4771]: I1011 10:29:31.414103 4771 generic.go:334] "Generic (PLEG): container finished" podID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerID="142add763393fde94b8ed6a34c3ef572a32e34909b409ad71cf3570c801fa30d" exitCode=0 Oct 11 10:29:31.414234 master-1 kubenswrapper[4771]: I1011 10:29:31.414176 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerDied","Data":"142add763393fde94b8ed6a34c3ef572a32e34909b409ad71cf3570c801fa30d"} Oct 11 10:29:31.416867 master-1 kubenswrapper[4771]: I1011 10:29:31.416817 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" Oct 11 10:29:31.425937 master-1 kubenswrapper[4771]: I1011 10:29:31.425853 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm" podStartSLOduration=2.7429406309999997 podStartE2EDuration="9.425831845s" podCreationTimestamp="2025-10-11 10:29:22 +0000 UTC" firstStartedPulling="2025-10-11 10:29:23.659928926 +0000 UTC m=+195.634155367" lastFinishedPulling="2025-10-11 10:29:30.34282014 +0000 UTC m=+202.317046581" observedRunningTime="2025-10-11 10:29:31.423623764 +0000 UTC m=+203.397850215" watchObservedRunningTime="2025-10-11 10:29:31.425831845 +0000 UTC m=+203.400058326" Oct 11 10:29:31.443991 master-1 kubenswrapper[4771]: I1011 10:29:31.443919 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps" podStartSLOduration=5.959374821 podStartE2EDuration="10.443899573s" podCreationTimestamp="2025-10-11 10:29:21 +0000 UTC" firstStartedPulling="2025-10-11 10:29:25.909389896 +0000 UTC m=+197.883616337" lastFinishedPulling="2025-10-11 10:29:30.393914648 +0000 UTC m=+202.368141089" observedRunningTime="2025-10-11 10:29:31.441813906 +0000 UTC m=+203.416040347" watchObservedRunningTime="2025-10-11 10:29:31.443899573 +0000 UTC m=+203.418126014" Oct 11 10:29:31.494734 master-1 kubenswrapper[4771]: I1011 10:29:31.494402 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 
10:29:31.497924 master-1 kubenswrapper[4771]: I1011 10:29:31.497775 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:31.497924 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:31.497924 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:31.497924 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:31.498132 master-1 kubenswrapper[4771]: I1011 10:29:31.497909 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:31.533040 master-1 kubenswrapper[4771]: I1011 10:29:31.532976 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podStartSLOduration=40.100956873 podStartE2EDuration="44.532960908s" podCreationTimestamp="2025-10-11 10:28:47 +0000 UTC" firstStartedPulling="2025-10-11 10:29:25.544419654 +0000 UTC m=+197.518646105" lastFinishedPulling="2025-10-11 10:29:29.976423689 +0000 UTC m=+201.950650140" observedRunningTime="2025-10-11 10:29:31.530615714 +0000 UTC m=+203.504842205" watchObservedRunningTime="2025-10-11 10:29:31.532960908 +0000 UTC m=+203.507187349" Oct 11 10:29:31.565644 master-1 kubenswrapper[4771]: I1011 10:29:31.565602 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 11 10:29:31.566175 master-1 kubenswrapper[4771]: I1011 10:29:31.565919 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-1" podUID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" 
containerName="installer" containerID="cri-o://1847b9a9f31d4cf6b7fede3d6231e62c7c7aec1680e7c800a880c6ba363a8798" gracePeriod=30 Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: I1011 10:29:31.968912 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: I1011 10:29:31.968960 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:32.236360 master-2 kubenswrapper[4776]: I1011 10:29:32.236260 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:32.237000 master-2 kubenswrapper[4776]: I1011 10:29:32.236982 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.239407 master-2 kubenswrapper[4776]: I1011 10:29:32.239270 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Oct 11 10:29:32.241098 master-2 kubenswrapper[4776]: I1011 10:29:32.241056 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Oct 11 10:29:32.241098 master-2 kubenswrapper[4776]: I1011 10:29:32.241089 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:29:32.241224 master-2 kubenswrapper[4776]: I1011 10:29:32.241074 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:32.271237 master-2 kubenswrapper[4776]: I1011 10:29:32.271202 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271267 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271315 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271362 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372388 master-2 kubenswrapper[4776]: I1011 10:29:32.372310 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372610 master-2 kubenswrapper[4776]: I1011 10:29:32.372412 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372610 master-2 kubenswrapper[4776]: I1011 10:29:32.372470 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.373135 master-2 kubenswrapper[4776]: I1011 10:29:32.373107 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.373195 master-2 kubenswrapper[4776]: E1011 10:29:32.372627 4776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Oct 11 10:29:32.373280 master-2 kubenswrapper[4776]: E1011 10:29:32.373243 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls podName:d59f55bb-61cf-47d6-b57b-6b02c1cf3b60 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:32.873221556 +0000 UTC m=+207.657648265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls") pod "prometheus-operator-574d7f8db8-cwbcc" (UID: "d59f55bb-61cf-47d6-b57b-6b02c1cf3b60") : secret "prometheus-operator-tls" not found Oct 11 10:29:32.373922 master-2 kubenswrapper[4776]: I1011 10:29:32.373901 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.421224 master-1 kubenswrapper[4771]: I1011 10:29:32.421157 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-1_792389a1-400d-4a07-a0a5-e80b2edfd8f1/installer/0.log" Oct 11 10:29:32.421684 master-1 kubenswrapper[4771]: I1011 10:29:32.421231 4771 generic.go:334] "Generic (PLEG): container finished" podID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" containerID="1847b9a9f31d4cf6b7fede3d6231e62c7c7aec1680e7c800a880c6ba363a8798" exitCode=1 Oct 11 10:29:32.422007 master-1 kubenswrapper[4771]: I1011 10:29:32.421961 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"792389a1-400d-4a07-a0a5-e80b2edfd8f1","Type":"ContainerDied","Data":"1847b9a9f31d4cf6b7fede3d6231e62c7c7aec1680e7c800a880c6ba363a8798"} Oct 11 10:29:32.456122 master-2 kubenswrapper[4776]: I1011 10:29:32.456043 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " 
pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.456400 master-2 kubenswrapper[4776]: I1011 10:29:32.456351 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.499609 master-1 kubenswrapper[4771]: I1011 10:29:32.499072 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:32.499609 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:32.499609 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:32.499609 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:32.499609 master-1 kubenswrapper[4771]: I1011 10:29:32.499133 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:32.621597 master-1 kubenswrapper[4771]: I1011 10:29:32.621572 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-1_792389a1-400d-4a07-a0a5-e80b2edfd8f1/installer/0.log" Oct 11 10:29:32.621705 master-1 kubenswrapper[4771]: I1011 10:29:32.621635 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1" Oct 11 10:29:32.759653 master-1 kubenswrapper[4771]: I1011 10:29:32.759263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access\") pod \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " Oct 11 10:29:32.759653 master-1 kubenswrapper[4771]: I1011 10:29:32.759399 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock\") pod \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " Oct 11 10:29:32.759653 master-1 kubenswrapper[4771]: I1011 10:29:32.759436 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir\") pod \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\" (UID: \"792389a1-400d-4a07-a0a5-e80b2edfd8f1\") " Oct 11 10:29:32.759653 master-1 kubenswrapper[4771]: I1011 10:29:32.759618 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "792389a1-400d-4a07-a0a5-e80b2edfd8f1" (UID: "792389a1-400d-4a07-a0a5-e80b2edfd8f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:32.759653 master-1 kubenswrapper[4771]: I1011 10:29:32.759617 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock" (OuterVolumeSpecName: "var-lock") pod "792389a1-400d-4a07-a0a5-e80b2edfd8f1" (UID: "792389a1-400d-4a07-a0a5-e80b2edfd8f1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:32.762380 master-1 kubenswrapper[4771]: I1011 10:29:32.762313 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "792389a1-400d-4a07-a0a5-e80b2edfd8f1" (UID: "792389a1-400d-4a07-a0a5-e80b2edfd8f1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:29:32.860870 master-1 kubenswrapper[4771]: I1011 10:29:32.860798 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:32.860870 master-1 kubenswrapper[4771]: I1011 10:29:32.860834 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:32.860870 master-1 kubenswrapper[4771]: I1011 10:29:32.860845 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792389a1-400d-4a07-a0a5-e80b2edfd8f1-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:32.878415 master-2 kubenswrapper[4776]: I1011 10:29:32.878300 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.894167 master-2 kubenswrapper[4776]: I1011 10:29:32.894112 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: I1011 10:29:32.971091 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:32.972225 master-2 kubenswrapper[4776]: I1011 10:29:32.971724 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:33.135016 master-2 kubenswrapper[4776]: I1011 10:29:33.134968 4776 generic.go:334] "Generic (PLEG): container finished" podID="89e02bcb-b3fe-4a45-a531-4ab41d8ee424" containerID="2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b" exitCode=0 Oct 11 10:29:33.135095 master-2 kubenswrapper[4776]: I1011 10:29:33.135051 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerDied","Data":"2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b"} Oct 11 10:29:33.135641 master-2 kubenswrapper[4776]: I1011 10:29:33.135614 4776 scope.go:117] "RemoveContainer" containerID="2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b" Oct 11 
10:29:33.138507 master-2 kubenswrapper[4776]: I1011 10:29:33.138467 4776 generic.go:334] "Generic (PLEG): container finished" podID="05cf2994-c049-4f42-b2d8-83b23e7e763a" containerID="09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6" exitCode=0 Oct 11 10:29:33.138507 master-2 kubenswrapper[4776]: I1011 10:29:33.138504 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerDied","Data":"09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6"} Oct 11 10:29:33.138886 master-2 kubenswrapper[4776]: I1011 10:29:33.138862 4776 scope.go:117] "RemoveContainer" containerID="09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6" Oct 11 10:29:33.155334 master-2 kubenswrapper[4776]: I1011 10:29:33.155273 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:33.427002 master-1 kubenswrapper[4771]: I1011 10:29:33.426921 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6" exitCode=0 Oct 11 10:29:33.427679 master-1 kubenswrapper[4771]: I1011 10:29:33.427009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6"} Oct 11 10:29:33.430222 master-1 kubenswrapper[4771]: I1011 10:29:33.430197 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-1_792389a1-400d-4a07-a0a5-e80b2edfd8f1/installer/0.log" Oct 11 10:29:33.430796 master-1 kubenswrapper[4771]: I1011 10:29:33.430761 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1" Oct 11 10:29:33.438239 master-1 kubenswrapper[4771]: I1011 10:29:33.437173 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"792389a1-400d-4a07-a0a5-e80b2edfd8f1","Type":"ContainerDied","Data":"278d42f198fc93ee50b135376d28ae4eb2fe4bcf6f5f1c9223b4e9e7ffd7be30"} Oct 11 10:29:33.438239 master-1 kubenswrapper[4771]: I1011 10:29:33.437644 4771 scope.go:117] "RemoveContainer" containerID="1847b9a9f31d4cf6b7fede3d6231e62c7c7aec1680e7c800a880c6ba363a8798" Oct 11 10:29:33.483217 master-1 kubenswrapper[4771]: I1011 10:29:33.483152 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 11 10:29:33.485437 master-1 kubenswrapper[4771]: I1011 10:29:33.485391 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 11 10:29:33.496699 master-1 kubenswrapper[4771]: I1011 10:29:33.496652 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:33.496699 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:33.496699 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:33.496699 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:33.496965 master-1 kubenswrapper[4771]: I1011 10:29:33.496712 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: I1011 10:29:33.969975 4776 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:33.970379 master-2 kubenswrapper[4776]: I1011 10:29:33.970060 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:34.143411 master-2 kubenswrapper[4776]: I1011 10:29:34.143323 4776 generic.go:334] "Generic (PLEG): container finished" podID="6967590c-695e-4e20-964b-0c643abdf367" containerID="e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828" exitCode=0 Oct 11 10:29:34.143411 master-2 kubenswrapper[4776]: I1011 10:29:34.143355 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerDied","Data":"e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828"} Oct 11 10:29:34.144213 master-2 kubenswrapper[4776]: I1011 10:29:34.143731 4776 scope.go:117] "RemoveContainer" containerID="e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828" Oct 11 10:29:34.144927 master-2 kubenswrapper[4776]: I1011 10:29:34.144894 4776 generic.go:334] "Generic (PLEG): container finished" podID="e487f283-7482-463c-90b6-a812e00d0e35" containerID="75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464" exitCode=0 Oct 11 10:29:34.144927 master-2 kubenswrapper[4776]: I1011 10:29:34.144921 4776 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerDied","Data":"75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464"} Oct 11 10:29:34.145189 master-2 kubenswrapper[4776]: I1011 10:29:34.145134 4776 scope.go:117] "RemoveContainer" containerID="75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: I1011 10:29:34.296026 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: E1011 10:29:34.296188 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826e1279-bc0d-426e-b6e0-5108268f340e" containerName="installer" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: I1011 10:29:34.296203 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="826e1279-bc0d-426e-b6e0-5108268f340e" containerName="installer" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: E1011 10:29:34.296213 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" containerName="installer" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: I1011 10:29:34.296221 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" containerName="installer" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: I1011 10:29:34.296296 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="826e1279-bc0d-426e-b6e0-5108268f340e" containerName="installer" Oct 11 10:29:34.296386 master-1 kubenswrapper[4771]: I1011 10:29:34.296307 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" containerName="installer" Oct 11 10:29:34.297870 master-1 kubenswrapper[4771]: I1011 10:29:34.297297 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:34.302595 master-1 kubenswrapper[4771]: I1011 10:29:34.301606 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:29:34.302595 master-1 kubenswrapper[4771]: I1011 10:29:34.301657 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 11 10:29:34.304683 master-1 kubenswrapper[4771]: I1011 10:29:34.304662 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 11 10:29:34.441902 master-1 kubenswrapper[4771]: I1011 10:29:34.441858 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_6534d9db-a553-4c39-bf4a-014a359ee336/installer/0.log" Oct 11 10:29:34.442394 master-1 kubenswrapper[4771]: I1011 10:29:34.441914 4771 generic.go:334] "Generic (PLEG): container finished" podID="6534d9db-a553-4c39-bf4a-014a359ee336" containerID="c9e465db2f016eeb1b9eb6a1701316ad91386e0556613224875082e886221894" exitCode=1 Oct 11 10:29:34.444140 master-1 kubenswrapper[4771]: I1011 10:29:34.444093 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792389a1-400d-4a07-a0a5-e80b2edfd8f1" path="/var/lib/kubelet/pods/792389a1-400d-4a07-a0a5-e80b2edfd8f1/volumes" Oct 11 10:29:34.444140 master-1 kubenswrapper[4771]: I1011 10:29:34.444116 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55" exitCode=0 Oct 11 10:29:34.444635 master-1 kubenswrapper[4771]: I1011 10:29:34.444590 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"6534d9db-a553-4c39-bf4a-014a359ee336","Type":"ContainerDied","Data":"c9e465db2f016eeb1b9eb6a1701316ad91386e0556613224875082e886221894"} Oct 11 
10:29:34.444635 master-1 kubenswrapper[4771]: I1011 10:29:34.444630 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55"} Oct 11 10:29:34.482024 master-1 kubenswrapper[4771]: I1011 10:29:34.481909 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wvj\" (UniqueName: \"kubernetes.io/projected/3fc4970d-4f34-4fc6-9791-6218f8e42eb9-kube-api-access-g9wvj\") pod \"etcd-guard-master-1\" (UID: \"3fc4970d-4f34-4fc6-9791-6218f8e42eb9\") " pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:34.502087 master-1 kubenswrapper[4771]: I1011 10:29:34.502021 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:34.502087 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:34.502087 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:34.502087 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:34.502302 master-1 kubenswrapper[4771]: I1011 10:29:34.502115 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:34.557084 master-1 kubenswrapper[4771]: I1011 10:29:34.557029 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_6534d9db-a553-4c39-bf4a-014a359ee336/installer/0.log" Oct 11 10:29:34.557230 master-1 kubenswrapper[4771]: I1011 10:29:34.557109 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:34.587444 master-1 kubenswrapper[4771]: I1011 10:29:34.587374 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock\") pod \"6534d9db-a553-4c39-bf4a-014a359ee336\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " Oct 11 10:29:34.587654 master-1 kubenswrapper[4771]: I1011 10:29:34.587456 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir\") pod \"6534d9db-a553-4c39-bf4a-014a359ee336\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " Oct 11 10:29:34.587654 master-1 kubenswrapper[4771]: I1011 10:29:34.587497 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access\") pod \"6534d9db-a553-4c39-bf4a-014a359ee336\" (UID: \"6534d9db-a553-4c39-bf4a-014a359ee336\") " Oct 11 10:29:34.587654 master-1 kubenswrapper[4771]: I1011 10:29:34.587530 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock" (OuterVolumeSpecName: "var-lock") pod "6534d9db-a553-4c39-bf4a-014a359ee336" (UID: "6534d9db-a553-4c39-bf4a-014a359ee336"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:34.587654 master-1 kubenswrapper[4771]: I1011 10:29:34.587593 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6534d9db-a553-4c39-bf4a-014a359ee336" (UID: "6534d9db-a553-4c39-bf4a-014a359ee336"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:29:34.587654 master-1 kubenswrapper[4771]: I1011 10:29:34.587645 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wvj\" (UniqueName: \"kubernetes.io/projected/3fc4970d-4f34-4fc6-9791-6218f8e42eb9-kube-api-access-g9wvj\") pod \"etcd-guard-master-1\" (UID: \"3fc4970d-4f34-4fc6-9791-6218f8e42eb9\") " pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:34.587948 master-1 kubenswrapper[4771]: I1011 10:29:34.587745 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:34.587948 master-1 kubenswrapper[4771]: I1011 10:29:34.587766 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6534d9db-a553-4c39-bf4a-014a359ee336-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:34.591667 master-1 kubenswrapper[4771]: I1011 10:29:34.591557 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6534d9db-a553-4c39-bf4a-014a359ee336" (UID: "6534d9db-a553-4c39-bf4a-014a359ee336"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:29:34.616537 master-1 kubenswrapper[4771]: I1011 10:29:34.616491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wvj\" (UniqueName: \"kubernetes.io/projected/3fc4970d-4f34-4fc6-9791-6218f8e42eb9-kube-api-access-g9wvj\") pod \"etcd-guard-master-1\" (UID: \"3fc4970d-4f34-4fc6-9791-6218f8e42eb9\") " pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:34.627346 master-1 kubenswrapper[4771]: I1011 10:29:34.627310 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:34.688531 master-1 kubenswrapper[4771]: I1011 10:29:34.688499 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6534d9db-a553-4c39-bf4a-014a359ee336-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:29:34.952868 master-2 kubenswrapper[4776]: I1011 10:29:34.952664 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: I1011 10:29:34.969476 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:34.969803 master-2 kubenswrapper[4776]: I1011 10:29:34.969527 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:35.060116 master-1 kubenswrapper[4771]: I1011 10:29:35.060030 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 11 10:29:35.068115 master-1 kubenswrapper[4771]: W1011 10:29:35.068077 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc4970d_4f34_4fc6_9791_6218f8e42eb9.slice/crio-7d7d5bb1e54a26f56bba9c02805eb5d544d7ecd2f66f6f9c7a4e1cd7ea203bc0 WatchSource:0}: Error finding container 
7d7d5bb1e54a26f56bba9c02805eb5d544d7ecd2f66f6f9c7a4e1cd7ea203bc0: Status 404 returned error can't find the container with id 7d7d5bb1e54a26f56bba9c02805eb5d544d7ecd2f66f6f9c7a4e1cd7ea203bc0 Oct 11 10:29:35.160263 master-2 kubenswrapper[4776]: I1011 10:29:35.160214 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"04593c9a8e4176a6b2a0cc6b66798cfe315899c4cef7952e21b368edd8e44a59"} Oct 11 10:29:35.162011 master-2 kubenswrapper[4776]: I1011 10:29:35.161975 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"d42f6d5a7ad6d1dda7a68f7de1dc7e076bddcd07acb9d4de53e05e48fc3a150f"} Oct 11 10:29:35.163896 master-2 kubenswrapper[4776]: I1011 10:29:35.163843 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"37956e3fb55cda8feb4fe7a4112049c9a8dffa4e87d666c5864d55b4c361351f"} Oct 11 10:29:35.169711 master-2 kubenswrapper[4776]: I1011 10:29:35.169659 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"66ac492f9bc499fbe2cdd855b8a45b0ac86e5a06c95abb9a4fee261a78d012fb"} Oct 11 10:29:35.171844 master-2 kubenswrapper[4776]: I1011 10:29:35.171822 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} 
Oct 11 10:29:35.178893 master-2 kubenswrapper[4776]: I1011 10:29:35.178401 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"2f0c2ef6b9765f3091f3311664d99563342d78a3080dd836513b6906718f0fd5"} Oct 11 10:29:35.456613 master-1 kubenswrapper[4771]: I1011 10:29:35.456566 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_6534d9db-a553-4c39-bf4a-014a359ee336/installer/0.log" Oct 11 10:29:35.457064 master-1 kubenswrapper[4771]: I1011 10:29:35.456861 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 11 10:29:35.457561 master-1 kubenswrapper[4771]: I1011 10:29:35.457206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"6534d9db-a553-4c39-bf4a-014a359ee336","Type":"ContainerDied","Data":"b5b289645c8dafc708db0dfb37bf1e6882fdc062aac0a46f6f992e36cadc5dc7"} Oct 11 10:29:35.457561 master-1 kubenswrapper[4771]: I1011 10:29:35.457292 4771 scope.go:117] "RemoveContainer" containerID="c9e465db2f016eeb1b9eb6a1701316ad91386e0556613224875082e886221894" Oct 11 10:29:35.465712 master-1 kubenswrapper[4771]: I1011 10:29:35.465659 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05" exitCode=0 Oct 11 10:29:35.466133 master-1 kubenswrapper[4771]: I1011 10:29:35.465794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05"} Oct 11 10:29:35.467761 master-1 kubenswrapper[4771]: I1011 
10:29:35.467716 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-1" event={"ID":"3fc4970d-4f34-4fc6-9791-6218f8e42eb9","Type":"ContainerStarted","Data":"7d7d5bb1e54a26f56bba9c02805eb5d544d7ecd2f66f6f9c7a4e1cd7ea203bc0"} Oct 11 10:29:35.494635 master-1 kubenswrapper[4771]: I1011 10:29:35.494571 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:29:35.496565 master-1 kubenswrapper[4771]: I1011 10:29:35.496530 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:35.496565 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:35.496565 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:35.496565 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:35.496750 master-1 kubenswrapper[4771]: I1011 10:29:35.496572 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:35.506143 master-1 kubenswrapper[4771]: I1011 10:29:35.506099 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:35.510935 master-1 kubenswrapper[4771]: I1011 10:29:35.510888 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 11 10:29:35.623426 master-2 kubenswrapper[4776]: I1011 10:29:35.623304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod 
\"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:29:35.623426 master-2 kubenswrapper[4776]: E1011 10:29:35.623404 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:35.623653 master-2 kubenswrapper[4776]: E1011 10:29:35.623449 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:39.623436497 +0000 UTC m=+274.407863206 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: I1011 10:29:35.970235 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:35.970554 master-2 kubenswrapper[4776]: I1011 10:29:35.970317 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.184783 
4776 generic.go:334] "Generic (PLEG): container finished" podID="9d362fb9-48e4-4d72-a940-ec6c9c051fac" containerID="53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa" exitCode=0 Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.184836 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerDied","Data":"53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa"} Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.185223 4776 scope.go:117] "RemoveContainer" containerID="53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa" Oct 11 10:29:36.190226 master-2 kubenswrapper[4776]: I1011 10:29:36.188997 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab" exitCode=0 Oct 11 10:29:36.190226 master-2 kubenswrapper[4776]: I1011 10:29:36.189022 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} Oct 11 10:29:36.442339 master-1 kubenswrapper[4771]: I1011 10:29:36.442264 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6534d9db-a553-4c39-bf4a-014a359ee336" path="/var/lib/kubelet/pods/6534d9db-a553-4c39-bf4a-014a359ee336/volumes" Oct 11 10:29:36.474648 master-1 kubenswrapper[4771]: I1011 10:29:36.473562 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-1" event={"ID":"3fc4970d-4f34-4fc6-9791-6218f8e42eb9","Type":"ContainerStarted","Data":"d9979a6eb84532ea8c8fdf3474fb9576ae4ba27a4e7d93fc9eeeadab89ea349f"} Oct 11 10:29:36.474648 master-1 kubenswrapper[4771]: I1011 10:29:36.474599 
4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:29:36.476128 master-1 kubenswrapper[4771]: I1011 10:29:36.475623 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:29:36.476128 master-1 kubenswrapper[4771]: I1011 10:29:36.475685 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:29:36.478077 master-1 kubenswrapper[4771]: I1011 10:29:36.478033 4771 generic.go:334] "Generic (PLEG): container finished" podID="868ea5b9-b62a-4683-82c9-760de94ef155" containerID="081fde9dac0d8c6f0177a9a06139a4e92fb38ea47b03713cf7f04ea063469f84" exitCode=0 Oct 11 10:29:36.478208 master-1 kubenswrapper[4771]: I1011 10:29:36.478101 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" event={"ID":"868ea5b9-b62a-4683-82c9-760de94ef155","Type":"ContainerDied","Data":"081fde9dac0d8c6f0177a9a06139a4e92fb38ea47b03713cf7f04ea063469f84"} Oct 11 10:29:36.478446 master-1 kubenswrapper[4771]: I1011 10:29:36.478409 4771 scope.go:117] "RemoveContainer" containerID="081fde9dac0d8c6f0177a9a06139a4e92fb38ea47b03713cf7f04ea063469f84" Oct 11 10:29:36.481568 master-1 kubenswrapper[4771]: I1011 10:29:36.481537 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8"} Oct 11 10:29:36.493427 master-1 kubenswrapper[4771]: I1011 
10:29:36.493334 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-guard-master-1" podStartSLOduration=2.49331896 podStartE2EDuration="2.49331896s" podCreationTimestamp="2025-10-11 10:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:36.492534218 +0000 UTC m=+208.466760689" watchObservedRunningTime="2025-10-11 10:29:36.49331896 +0000 UTC m=+208.467545421" Oct 11 10:29:36.497771 master-1 kubenswrapper[4771]: I1011 10:29:36.497733 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:36.497771 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:36.497771 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:36.497771 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:36.498087 master-1 kubenswrapper[4771]: I1011 10:29:36.497783 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: I1011 10:29:36.970575 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:36.971097 master-2 
kubenswrapper[4776]: I1011 10:29:36.970711 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:37.199972 master-2 kubenswrapper[4776]: I1011 10:29:37.199877 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"} Oct 11 10:29:37.209779 master-2 kubenswrapper[4776]: I1011 10:29:37.209657 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"1b4a8ced5ca6a681439ab9f258f847eef6764901729df9db483938353402fc3b"} Oct 11 10:29:37.210361 master-2 kubenswrapper[4776]: I1011 10:29:37.210317 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:29:37.213285 master-2 kubenswrapper[4776]: I1011 10:29:37.213238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"659605d0ecdd890d98bc3c30bed35c1685bb6e09336e89d7e95ffc3574457f60"} Oct 11 10:29:37.213285 master-2 kubenswrapper[4776]: I1011 10:29:37.213283 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"c1cea60d20b8b7bbe3f7f6818ae17857d3e5b363e53892761bddf5afe67f98a5"} Oct 11 10:29:37.224766 master-2 kubenswrapper[4776]: I1011 10:29:37.223016 4776 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwqr6" podStartSLOduration=2.680106029 podStartE2EDuration="13.223003089s" podCreationTimestamp="2025-10-11 10:29:24 +0000 UTC" firstStartedPulling="2025-10-11 10:29:26.089501059 +0000 UTC m=+200.873927798" lastFinishedPulling="2025-10-11 10:29:36.632398149 +0000 UTC m=+211.416824858" observedRunningTime="2025-10-11 10:29:37.222818314 +0000 UTC m=+212.007245063" watchObservedRunningTime="2025-10-11 10:29:37.223003089 +0000 UTC m=+212.007429798" Oct 11 10:29:37.246953 master-2 kubenswrapper[4776]: I1011 10:29:37.246845 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" podStartSLOduration=3.567997012 podStartE2EDuration="5.246817307s" podCreationTimestamp="2025-10-11 10:29:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:34.960468342 +0000 UTC m=+209.744895051" lastFinishedPulling="2025-10-11 10:29:36.639288637 +0000 UTC m=+211.423715346" observedRunningTime="2025-10-11 10:29:37.243757973 +0000 UTC m=+212.028184682" watchObservedRunningTime="2025-10-11 10:29:37.246817307 +0000 UTC m=+212.031244056" Oct 11 10:29:37.486043 master-1 kubenswrapper[4771]: I1011 10:29:37.485867 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:29:37.486043 master-1 kubenswrapper[4771]: I1011 10:29:37.485949 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:29:37.497188 master-1 kubenswrapper[4771]: I1011 10:29:37.497063 
4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:37.497188 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:37.497188 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:37.497188 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:37.497188 master-1 kubenswrapper[4771]: I1011 10:29:37.497123 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:37.967242 master-2 kubenswrapper[4776]: I1011 10:29:37.967144 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: I1011 10:29:37.969371 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:37.969720 master-2 kubenswrapper[4776]: I1011 10:29:37.969468 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:38.219980 master-2 kubenswrapper[4776]: I1011 10:29:38.219797 4776 generic.go:334] 
"Generic (PLEG): container finished" podID="7004f3ff-6db8-446d-94c1-1223e975299d" containerID="13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9" exitCode=0 Oct 11 10:29:38.220803 master-2 kubenswrapper[4776]: I1011 10:29:38.219981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerDied","Data":"13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9"} Oct 11 10:29:38.220986 master-2 kubenswrapper[4776]: I1011 10:29:38.220917 4776 scope.go:117] "RemoveContainer" containerID="13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9" Oct 11 10:29:38.488981 master-1 kubenswrapper[4771]: I1011 10:29:38.488832 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:29:38.488981 master-1 kubenswrapper[4771]: I1011 10:29:38.488916 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:29:38.497653 master-1 kubenswrapper[4771]: I1011 10:29:38.497544 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:38.497653 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:38.497653 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:38.497653 master-1 kubenswrapper[4771]: healthz check failed Oct 11 
10:29:38.497653 master-1 kubenswrapper[4771]: I1011 10:29:38.497645 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:38.563522 master-2 kubenswrapper[4776]: I1011 10:29:38.563365 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:29:38.563522 master-2 kubenswrapper[4776]: E1011 10:29:38.563512 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:38.563908 master-2 kubenswrapper[4776]: E1011 10:29:38.563578 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:42.563560918 +0000 UTC m=+277.347987627 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:29:38.898176 master-1 kubenswrapper[4771]: I1011 10:29:38.898129 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: I1011 10:29:38.969702 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: I1011 10:29:38.969791 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:39.227958 master-2 kubenswrapper[4776]: I1011 10:29:39.227790 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"9d576122c332ab9836c7092653bba3fb6cc3dc9cf6006fb55fc126faded0454e"} Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230842 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230899 4776 generic.go:334] "Generic (PLEG): container finished" podID="88129ec6-6f99-42a1-842a-6a965c6b58fe" containerID="e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e" exitCode=1 Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230932 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerDied","Data":"e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e"} Oct 11 10:29:39.231539 master-2 kubenswrapper[4776]: I1011 10:29:39.231340 4776 scope.go:117] "RemoveContainer" containerID="e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e" Oct 11 10:29:39.335509 master-1 kubenswrapper[4771]: I1011 10:29:39.335458 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 11 10:29:39.335729 master-1 kubenswrapper[4771]: E1011 10:29:39.335625 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6534d9db-a553-4c39-bf4a-014a359ee336" containerName="installer" Oct 11 10:29:39.335729 master-1 kubenswrapper[4771]: I1011 10:29:39.335642 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6534d9db-a553-4c39-bf4a-014a359ee336" containerName="installer" Oct 11 10:29:39.335729 master-1 kubenswrapper[4771]: I1011 10:29:39.335713 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6534d9db-a553-4c39-bf4a-014a359ee336" containerName="installer" Oct 11 10:29:39.336096 master-1 kubenswrapper[4771]: I1011 10:29:39.336073 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.340763 master-1 kubenswrapper[4771]: I1011 10:29:39.340727 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:29:39.342221 master-1 kubenswrapper[4771]: I1011 10:29:39.342182 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 11 10:29:39.442185 master-1 kubenswrapper[4771]: I1011 10:29:39.442123 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.442636 master-1 kubenswrapper[4771]: I1011 10:29:39.442208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.443660 master-1 kubenswrapper[4771]: I1011 10:29:39.443011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.491841 master-1 kubenswrapper[4771]: I1011 10:29:39.491759 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get 
\"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:29:39.491841 master-1 kubenswrapper[4771]: I1011 10:29:39.491828 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:29:39.496298 master-1 kubenswrapper[4771]: I1011 10:29:39.496255 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:39.496298 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:39.496298 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:39.496298 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:39.496501 master-1 kubenswrapper[4771]: I1011 10:29:39.496313 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:39.550215 master-1 kubenswrapper[4771]: I1011 10:29:39.550131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.550215 master-1 kubenswrapper[4771]: I1011 10:29:39.550214 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.550491 master-1 kubenswrapper[4771]: I1011 10:29:39.550275 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.550537 master-1 kubenswrapper[4771]: I1011 10:29:39.550515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.550603 master-1 kubenswrapper[4771]: I1011 10:29:39.550573 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.578484 master-1 kubenswrapper[4771]: I1011 10:29:39.578404 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access\") pod \"installer-3-master-1\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.628986 master-1 kubenswrapper[4771]: I1011 10:29:39.628784 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe 
status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:29:39.628986 master-1 kubenswrapper[4771]: I1011 10:29:39.628885 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:29:39.651382 master-1 kubenswrapper[4771]: I1011 10:29:39.651258 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: I1011 10:29:39.969903 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:39.971011 master-2 kubenswrapper[4776]: I1011 10:29:39.970093 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:40.240609 master-2 kubenswrapper[4776]: I1011 10:29:40.240473 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:40.240609 master-2 kubenswrapper[4776]: I1011 10:29:40.240538 
4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"b6a7c367c0b1c516d9da1e5da56e626bdc4db09395e1cc1c9318830bf75af8ca"} Oct 11 10:29:40.496591 master-1 kubenswrapper[4771]: I1011 10:29:40.496508 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:40.496591 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:40.496591 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:40.496591 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:40.497237 master-1 kubenswrapper[4771]: I1011 10:29:40.496596 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: I1011 10:29:40.968822 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:40.969263 master-2 kubenswrapper[4776]: I1011 10:29:40.968935 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:41.340166 master-2 kubenswrapper[4776]: I1011 10:29:41.340012 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:29:41.497425 master-1 kubenswrapper[4771]: I1011 10:29:41.497322 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:41.497425 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:41.497425 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:41.497425 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:41.497425 master-1 kubenswrapper[4771]: I1011 10:29:41.497408 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: I1011 10:29:41.969844 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:41.970185 master-2 kubenswrapper[4776]: I1011 10:29:41.969945 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:42.496641 master-1 kubenswrapper[4771]: I1011 10:29:42.496593 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:42.496641 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:42.496641 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:42.496641 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:42.496919 master-1 kubenswrapper[4771]: I1011 10:29:42.496654 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: I1011 10:29:42.970139 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:42.971423 master-2 kubenswrapper[4776]: I1011 10:29:42.970251 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:43.260457 master-2 kubenswrapper[4776]: I1011 10:29:43.260343 4776 generic.go:334] "Generic (PLEG): container 
finished" podID="f8050d30-444b-40a5-829c-1e3b788910a0" containerID="5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9" exitCode=0 Oct 11 10:29:43.260775 master-2 kubenswrapper[4776]: I1011 10:29:43.260472 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerDied","Data":"5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9"} Oct 11 10:29:43.262519 master-2 kubenswrapper[4776]: I1011 10:29:43.261244 4776 scope.go:117] "RemoveContainer" containerID="5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9" Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.262718 4776 generic.go:334] "Generic (PLEG): container finished" podID="a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1" containerID="0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485" exitCode=0 Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.262766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerDied","Data":"0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485"} Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.263315 4776 scope.go:117] "RemoveContainer" containerID="0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485" Oct 11 10:29:43.499835 master-1 kubenswrapper[4771]: I1011 10:29:43.499782 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:43.499835 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:43.499835 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 
10:29:43.499835 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:43.500254 master-1 kubenswrapper[4771]: I1011 10:29:43.499833 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:43.512600 master-1 kubenswrapper[4771]: I1011 10:29:43.512559 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerStarted","Data":"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80"} Oct 11 10:29:43.515342 master-1 kubenswrapper[4771]: I1011 10:29:43.515301 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerStarted","Data":"10eecae7180584a993b9109e41de9729732ec8af959166bad8fe7ba33a08f83b"} Oct 11 10:29:43.517009 master-1 kubenswrapper[4771]: I1011 10:29:43.516988 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-hw5fc" event={"ID":"868ea5b9-b62a-4683-82c9-760de94ef155","Type":"ContainerStarted","Data":"082eee2a8f515d9f2a8d8b7d1ce478df6413dccc9907ab7bcad6ffb296b971cc"} Oct 11 10:29:43.520251 master-1 kubenswrapper[4771]: I1011 10:29:43.520227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"d00f571fb5251acb052a97b0ee5169046519d14d8990c98b7ea440fa842ffd37"} Oct 11 10:29:43.645836 master-1 kubenswrapper[4771]: I1011 10:29:43.645769 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 11 10:29:43.734224 master-1 kubenswrapper[4771]: W1011 10:29:43.734161 4771 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod776c3745_2f4c_4a78_b1cd_77a7a1532df3.slice/crio-ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7 WatchSource:0}: Error finding container ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7: Status 404 returned error can't find the container with id ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7 Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: I1011 10:29:43.970199 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: I1011 10:29:43.970271 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:44.104088 master-1 kubenswrapper[4771]: I1011 10:29:44.104027 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d7j4"] Oct 11 10:29:44.104701 master-1 kubenswrapper[4771]: I1011 10:29:44.104673 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.107977 master-1 kubenswrapper[4771]: I1011 10:29:44.107943 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Oct 11 10:29:44.114666 master-2 kubenswrapper[4776]: I1011 10:29:44.114604 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:29:44.115427 master-2 kubenswrapper[4776]: I1011 10:29:44.115386 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.118595 master-2 kubenswrapper[4776]: I1011 10:29:44.118544 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Oct 11 10:29:44.203644 master-1 kubenswrapper[4771]: I1011 10:29:44.203512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgsk\" (UniqueName: \"kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.203644 master-1 kubenswrapper[4771]: I1011 10:29:44.203588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.203871 master-1 kubenswrapper[4771]: I1011 10:29:44.203666 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist\") pod 
\"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.203871 master-1 kubenswrapper[4771]: I1011 10:29:44.203808 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.235198 master-2 kubenswrapper[4776]: I1011 10:29:44.235049 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235198 master-2 kubenswrapper[4776]: I1011 10:29:44.235143 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235621 master-2 kubenswrapper[4776]: I1011 10:29:44.235573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235708 master-2 kubenswrapper[4776]: I1011 10:29:44.235637 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.271293 master-2 kubenswrapper[4776]: I1011 10:29:44.271200 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"16105f8d068da7f4b93fbced46c0b757ac012260cda17d23e7fdedd94f1849d6"} Oct 11 10:29:44.274183 master-2 kubenswrapper[4776]: I1011 10:29:44.274134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"66d9d78635fc74937985f0529439e2b1341a4631940c42b266388afccee1f55f"} Oct 11 10:29:44.305060 master-1 kubenswrapper[4771]: I1011 10:29:44.305011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgsk\" (UniqueName: \"kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.305060 master-1 kubenswrapper[4771]: I1011 10:29:44.305057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.305536 master-1 kubenswrapper[4771]: I1011 10:29:44.305093 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.305536 master-1 kubenswrapper[4771]: I1011 10:29:44.305126 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.305536 master-1 kubenswrapper[4771]: I1011 10:29:44.305235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.305787 master-1 kubenswrapper[4771]: I1011 10:29:44.305598 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.306948 master-1 kubenswrapper[4771]: I1011 10:29:44.306872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.324555 master-1 kubenswrapper[4771]: I1011 10:29:44.324485 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgsk\" (UniqueName: \"kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk\") pod \"cni-sysctl-allowlist-ds-9d7j4\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336950 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336966 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 
10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.337077 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.337781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.338642 master-2 kubenswrapper[4776]: I1011 10:29:44.338610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.354120 master-2 kubenswrapper[4776]: I1011 10:29:44.354068 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.431841 master-2 kubenswrapper[4776]: I1011 10:29:44.431760 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.448781 master-1 kubenswrapper[4771]: I1011 10:29:44.448684 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:44.467144 master-1 kubenswrapper[4771]: W1011 10:29:44.466859 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod806cd59c_056a_4fb4_a3b4_cb716c01cdea.slice/crio-7147f14021bd8181c058d8c3ce2203cdae664d32eab5196f21ee167281d79073 WatchSource:0}: Error finding container 7147f14021bd8181c058d8c3ce2203cdae664d32eab5196f21ee167281d79073: Status 404 returned error can't find the container with id 7147f14021bd8181c058d8c3ce2203cdae664d32eab5196f21ee167281d79073 Oct 11 10:29:44.497494 master-1 kubenswrapper[4771]: I1011 10:29:44.497437 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:44.497494 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:44.497494 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:44.497494 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:44.497862 master-1 kubenswrapper[4771]: I1011 10:29:44.497526 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:44.536188 master-1 kubenswrapper[4771]: I1011 10:29:44.536094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"776c3745-2f4c-4a78-b1cd-77a7a1532df3","Type":"ContainerStarted","Data":"2a2c47f6b163a67c15dfe1ca6c1ec25571de95f1ae3f653d4b9ded6b99ad45a9"} Oct 11 10:29:44.536188 master-1 kubenswrapper[4771]: I1011 10:29:44.536182 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"776c3745-2f4c-4a78-b1cd-77a7a1532df3","Type":"ContainerStarted","Data":"ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7"} Oct 11 10:29:44.542321 master-1 kubenswrapper[4771]: I1011 10:29:44.542277 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0"} Oct 11 10:29:44.542438 master-1 kubenswrapper[4771]: I1011 10:29:44.542334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711"} Oct 11 10:29:44.542438 master-1 kubenswrapper[4771]: I1011 10:29:44.542410 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0"} Oct 11 10:29:44.546336 master-1 kubenswrapper[4771]: I1011 10:29:44.546290 4771 generic.go:334] "Generic (PLEG): container finished" podID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerID="4bde2f0bff6002ac88c69a20de25c24e27ed2402f74ddf6b6f429bda18e25de4" exitCode=0 Oct 11 10:29:44.546467 master-1 kubenswrapper[4771]: I1011 10:29:44.546432 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerDied","Data":"4bde2f0bff6002ac88c69a20de25c24e27ed2402f74ddf6b6f429bda18e25de4"} Oct 11 10:29:44.550590 master-1 kubenswrapper[4771]: I1011 10:29:44.550534 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b7d1d62-0062-47cd-a963-63893777198e" 
containerID="d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80" exitCode=0 Oct 11 10:29:44.550705 master-1 kubenswrapper[4771]: I1011 10:29:44.550603 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerDied","Data":"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80"} Oct 11 10:29:44.556692 master-1 kubenswrapper[4771]: I1011 10:29:44.556642 4771 generic.go:334] "Generic (PLEG): container finished" podID="38131fcf-d407-4ba3-b7bf-471586bab887" containerID="10eecae7180584a993b9109e41de9729732ec8af959166bad8fe7ba33a08f83b" exitCode=0 Oct 11 10:29:44.556823 master-1 kubenswrapper[4771]: I1011 10:29:44.556774 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerDied","Data":"10eecae7180584a993b9109e41de9729732ec8af959166bad8fe7ba33a08f83b"} Oct 11 10:29:44.559192 master-1 kubenswrapper[4771]: I1011 10:29:44.559139 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" event={"ID":"806cd59c-056a-4fb4-a3b4-cb716c01cdea","Type":"ContainerStarted","Data":"7147f14021bd8181c058d8c3ce2203cdae664d32eab5196f21ee167281d79073"} Oct 11 10:29:44.560623 master-1 kubenswrapper[4771]: I1011 10:29:44.560540 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-1" podStartSLOduration=5.560514138 podStartE2EDuration="5.560514138s" podCreationTimestamp="2025-10-11 10:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:44.556816216 +0000 UTC m=+216.531042717" watchObservedRunningTime="2025-10-11 10:29:44.560514138 +0000 UTC m=+216.534740619" Oct 11 10:29:44.566670 master-1 
kubenswrapper[4771]: I1011 10:29:44.566612 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-dvv69"] Oct 11 10:29:44.567842 master-1 kubenswrapper[4771]: I1011 10:29:44.567803 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.570014 master-1 kubenswrapper[4771]: I1011 10:29:44.569923 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs"] Oct 11 10:29:44.571278 master-1 kubenswrapper[4771]: I1011 10:29:44.571197 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.574709 master-1 kubenswrapper[4771]: I1011 10:29:44.574661 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:29:44.577201 master-2 kubenswrapper[4776]: I1011 10:29:44.576983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-x7xhm"] Oct 11 10:29:44.577290 master-1 kubenswrapper[4771]: I1011 10:29:44.577217 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 11 10:29:44.577962 master-1 kubenswrapper[4771]: I1011 10:29:44.577913 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 11 10:29:44.578043 master-2 kubenswrapper[4776]: I1011 10:29:44.577987 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.578367 master-1 kubenswrapper[4771]: I1011 10:29:44.578225 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Oct 11 10:29:44.580199 master-1 kubenswrapper[4771]: I1011 10:29:44.580129 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Oct 11 10:29:44.580328 master-2 kubenswrapper[4776]: I1011 10:29:44.580280 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 11 10:29:44.580374 master-1 kubenswrapper[4771]: I1011 10:29:44.580292 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 11 10:29:44.580423 master-2 kubenswrapper[4776]: I1011 10:29:44.580334 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:44.580613 master-2 kubenswrapper[4776]: I1011 10:29:44.580570 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:29:44.580883 master-1 kubenswrapper[4771]: I1011 10:29:44.580855 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:29:44.581497 master-2 kubenswrapper[4776]: I1011 10:29:44.581458 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.584406 master-2 kubenswrapper[4776]: I1011 10:29:44.584359 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Oct 11 10:29:44.584580 master-2 kubenswrapper[4776]: I1011 10:29:44.584546 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Oct 11 10:29:44.584754 master-2 kubenswrapper[4776]: I1011 10:29:44.584720 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Oct 11 10:29:44.586583 master-2 kubenswrapper[4776]: I1011 10:29:44.586534 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:44.597651 master-1 kubenswrapper[4771]: I1011 10:29:44.597598 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs"] Oct 11 10:29:44.620652 master-1 kubenswrapper[4771]: I1011 10:29:44.612230 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-root\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.620944 master-1 kubenswrapper[4771]: I1011 10:29:44.614614 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-1" podStartSLOduration=17.614576139 podStartE2EDuration="17.614576139s" podCreationTimestamp="2025-10-11 10:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:44.613928011 +0000 UTC m=+216.588154512" 
watchObservedRunningTime="2025-10-11 10:29:44.614576139 +0000 UTC m=+216.588802630" Oct 11 10:29:44.620944 master-1 kubenswrapper[4771]: I1011 10:29:44.620792 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-wtmp\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.620944 master-1 kubenswrapper[4771]: I1011 10:29:44.620895 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcz25\" (UniqueName: \"kubernetes.io/projected/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-kube-api-access-qcz25\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.621103 master-1 kubenswrapper[4771]: I1011 10:29:44.620941 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.621103 master-1 kubenswrapper[4771]: I1011 10:29:44.621009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchwm\" (UniqueName: \"kubernetes.io/projected/4893176f-942c-49bf-aaab-9c238ecdaaa7-kube-api-access-wchwm\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.621103 master-1 kubenswrapper[4771]: I1011 10:29:44.621040 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-metrics-client-ca\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.621232 master-1 kubenswrapper[4771]: I1011 10:29:44.621147 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-textfile\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.621232 master-1 kubenswrapper[4771]: I1011 10:29:44.621181 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-tls\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.621317 master-1 kubenswrapper[4771]: I1011 10:29:44.621293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-sys\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.621380 master-1 kubenswrapper[4771]: I1011 10:29:44.621338 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " 
pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.622304 master-1 kubenswrapper[4771]: I1011 10:29:44.621455 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.622304 master-1 kubenswrapper[4771]: I1011 10:29:44.621502 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4893176f-942c-49bf-aaab-9c238ecdaaa7-metrics-client-ca\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcz25\" (UniqueName: \"kubernetes.io/projected/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-kube-api-access-qcz25\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722290 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722329 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wchwm\" (UniqueName: \"kubernetes.io/projected/4893176f-942c-49bf-aaab-9c238ecdaaa7-kube-api-access-wchwm\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-metrics-client-ca\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722458 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-tls\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-textfile\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722545 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-sys\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722578 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4893176f-942c-49bf-aaab-9c238ecdaaa7-metrics-client-ca\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-root\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.722764 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-wtmp\") pod \"node-exporter-dvv69\" (UID: 
\"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.723018 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-wtmp\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.723079 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-root\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.723300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-textfile\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.723644 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-metrics-client-ca\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.723726 master-1 kubenswrapper[4771]: I1011 10:29:44.723642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-sys\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " 
pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.724750 master-1 kubenswrapper[4771]: I1011 10:29:44.724709 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4893176f-942c-49bf-aaab-9c238ecdaaa7-metrics-client-ca\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.726741 master-1 kubenswrapper[4771]: I1011 10:29:44.726701 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.727586 master-1 kubenswrapper[4771]: I1011 10:29:44.727540 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-tls\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.728514 master-1 kubenswrapper[4771]: I1011 10:29:44.728472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.729903 master-1 kubenswrapper[4771]: I1011 10:29:44.729863 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/4893176f-942c-49bf-aaab-9c238ecdaaa7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.741404 master-2 kubenswrapper[4776]: I1011 10:29:44.741328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741404 master-2 kubenswrapper[4776]: I1011 10:29:44.741396 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: 
\"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741537 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741607 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741636 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741654 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741687 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741715 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741895 master-2 kubenswrapper[4776]: I1011 10:29:44.741798 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741895 master-2 kubenswrapper[4776]: I1011 10:29:44.741866 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741954 master-2 kubenswrapper[4776]: I1011 10:29:44.741930 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " 
pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741985 master-2 kubenswrapper[4776]: I1011 10:29:44.741956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.749216 master-1 kubenswrapper[4771]: I1011 10:29:44.749175 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcz25\" (UniqueName: \"kubernetes.io/projected/bb58d9ff-af20-40c4-9dfe-c9c10fb5c410-kube-api-access-qcz25\") pod \"node-exporter-dvv69\" (UID: \"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410\") " pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.757258 master-1 kubenswrapper[4771]: I1011 10:29:44.757198 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchwm\" (UniqueName: \"kubernetes.io/projected/4893176f-942c-49bf-aaab-9c238ecdaaa7-kube-api-access-wchwm\") pod \"openshift-state-metrics-56d8dcb55c-xgtjs\" (UID: \"4893176f-942c-49bf-aaab-9c238ecdaaa7\") " pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842448 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842495 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842527 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842598 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " 
pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842620 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842640 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842719 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod 
\"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842760 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842778 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.843030 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.843098 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: 
\"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843471 master-2 kubenswrapper[4776]: I1011 10:29:44.843092 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843516 master-2 kubenswrapper[4776]: I1011 10:29:44.843495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843642 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" 
(UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.844032 master-2 kubenswrapper[4776]: I1011 10:29:44.843978 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.846601 master-2 kubenswrapper[4776]: I1011 10:29:44.846537 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.846699 master-2 kubenswrapper[4776]: I1011 10:29:44.846559 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.847779 master-2 kubenswrapper[4776]: I1011 10:29:44.847668 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.848085 master-2 kubenswrapper[4776]: I1011 10:29:44.848040 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.859310 master-2 kubenswrapper[4776]: I1011 10:29:44.859237 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.860605 master-2 kubenswrapper[4776]: I1011 10:29:44.860560 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.895866 master-1 kubenswrapper[4771]: I1011 10:29:44.895755 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-dvv69" Oct 11 10:29:44.908356 master-2 kubenswrapper[4776]: I1011 10:29:44.908282 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.910196 master-1 kubenswrapper[4771]: I1011 10:29:44.910140 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" Oct 11 10:29:44.910409 master-1 kubenswrapper[4771]: W1011 10:29:44.910340 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb58d9ff_af20_40c4_9dfe_c9c10fb5c410.slice/crio-58922d7545ec86f39b36fd35d48be93f0e0a80f1b4264052cd2d4e50edcd3ffd WatchSource:0}: Error finding container 58922d7545ec86f39b36fd35d48be93f0e0a80f1b4264052cd2d4e50edcd3ffd: Status 404 returned error can't find the container with id 58922d7545ec86f39b36fd35d48be93f0e0a80f1b4264052cd2d4e50edcd3ffd Oct 11 10:29:44.922341 master-2 kubenswrapper[4776]: W1011 10:29:44.922277 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb15485f_03bd_4281_8626_f35346cf4b0b.slice/crio-2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c WatchSource:0}: Error finding container 2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c: Status 404 returned error can't find the container with id 2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c Oct 11 10:29:44.937696 master-2 kubenswrapper[4776]: I1011 10:29:44.937610 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: I1011 10:29:44.970141 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:44.970996 master-2 kubenswrapper[4776]: I1011 10:29:44.970247 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:45.039179 master-2 kubenswrapper[4776]: I1011 10:29:45.039140 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.039306 master-2 kubenswrapper[4776]: I1011 10:29:45.039213 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.079902 master-2 kubenswrapper[4776]: I1011 10:29:45.079859 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.289282 master-2 kubenswrapper[4776]: I1011 10:29:45.289226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c"} Oct 11 10:29:45.291853 master-2 kubenswrapper[4776]: I1011 10:29:45.291795 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerStarted","Data":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} Oct 11 10:29:45.291937 master-2 kubenswrapper[4776]: I1011 10:29:45.291858 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerStarted","Data":"18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a"} Oct 11 10:29:45.292096 master-2 kubenswrapper[4776]: I1011 10:29:45.292067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:45.306963 master-2 kubenswrapper[4776]: I1011 10:29:45.306870 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podStartSLOduration=1.306857406 podStartE2EDuration="1.306857406s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:45.305140099 +0000 UTC m=+220.089566808" watchObservedRunningTime="2025-10-11 10:29:45.306857406 +0000 UTC m=+220.091284115" Oct 11 10:29:45.312249 master-1 kubenswrapper[4771]: I1011 10:29:45.312184 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs"] Oct 11 10:29:45.314780 master-2 kubenswrapper[4776]: I1011 10:29:45.314736 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:45.318178 master-1 kubenswrapper[4771]: W1011 10:29:45.318114 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4893176f_942c_49bf_aaab_9c238ecdaaa7.slice/crio-339368d8966a6c55474373995ab7421c35827ed42bcee77b3b7f7b2f23b12cf7 WatchSource:0}: Error finding container 339368d8966a6c55474373995ab7421c35827ed42bcee77b3b7f7b2f23b12cf7: Status 404 returned error can't find the container with id 339368d8966a6c55474373995ab7421c35827ed42bcee77b3b7f7b2f23b12cf7 Oct 11 10:29:45.326500 master-2 kubenswrapper[4776]: I1011 10:29:45.326453 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.342658 master-2 kubenswrapper[4776]: W1011 10:29:45.342607 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c2bfc6c_87cf_45df_8901_abe788ae6d98.slice/crio-d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7 WatchSource:0}: Error finding container d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7: Status 404 returned error can't find the container with id d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7 Oct 11 10:29:45.343470 master-2 kubenswrapper[4776]: I1011 10:29:45.343427 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:45.496785 master-1 kubenswrapper[4771]: I1011 10:29:45.496744 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:45.496785 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:45.496785 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:45.496785 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:45.497073 master-1 kubenswrapper[4771]: I1011 10:29:45.496804 
4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:45.567510 master-1 kubenswrapper[4771]: I1011 10:29:45.567331 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerStarted","Data":"9f359af209588aa409904f71581bb63e20e019ac6f684b2bb1874bdc33d16458"} Oct 11 10:29:45.576559 master-1 kubenswrapper[4771]: I1011 10:29:45.576510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerStarted","Data":"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20"} Oct 11 10:29:45.581220 master-1 kubenswrapper[4771]: I1011 10:29:45.581171 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-dvv69" event={"ID":"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410","Type":"ContainerStarted","Data":"58922d7545ec86f39b36fd35d48be93f0e0a80f1b4264052cd2d4e50edcd3ffd"} Oct 11 10:29:45.587234 master-1 kubenswrapper[4771]: I1011 10:29:45.584441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" event={"ID":"4893176f-942c-49bf-aaab-9c238ecdaaa7","Type":"ContainerStarted","Data":"d4c6df9cc8fee2566997beb74041b60350024094905c50b4f5913430e02a3c29"} Oct 11 10:29:45.587234 master-1 kubenswrapper[4771]: I1011 10:29:45.584493 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" event={"ID":"4893176f-942c-49bf-aaab-9c238ecdaaa7","Type":"ContainerStarted","Data":"4f36dcfa6cd4c190aa3fd44b1c6524338d0e8c1c9598feb3a67a82b66a19a482"} Oct 11 10:29:45.587234 master-1 
kubenswrapper[4771]: I1011 10:29:45.584508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" event={"ID":"4893176f-942c-49bf-aaab-9c238ecdaaa7","Type":"ContainerStarted","Data":"339368d8966a6c55474373995ab7421c35827ed42bcee77b3b7f7b2f23b12cf7"} Oct 11 10:29:45.587234 master-1 kubenswrapper[4771]: I1011 10:29:45.585206 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xkrc6" podStartSLOduration=1.737669662 podStartE2EDuration="23.585194776s" podCreationTimestamp="2025-10-11 10:29:22 +0000 UTC" firstStartedPulling="2025-10-11 10:29:23.103742516 +0000 UTC m=+195.077968997" lastFinishedPulling="2025-10-11 10:29:44.95126767 +0000 UTC m=+216.925494111" observedRunningTime="2025-10-11 10:29:45.583831318 +0000 UTC m=+217.558057769" watchObservedRunningTime="2025-10-11 10:29:45.585194776 +0000 UTC m=+217.559421227" Oct 11 10:29:45.587234 master-1 kubenswrapper[4771]: I1011 10:29:45.586782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerStarted","Data":"c5ddefdc367347ae7e3aa6121d147be1b4ebca7be06e0180a8a6603ea9ef59cd"} Oct 11 10:29:45.595736 master-1 kubenswrapper[4771]: I1011 10:29:45.595681 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" event={"ID":"806cd59c-056a-4fb4-a3b4-cb716c01cdea","Type":"ContainerStarted","Data":"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32"} Oct 11 10:29:45.623382 master-1 kubenswrapper[4771]: I1011 10:29:45.623266 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gwwz9" podStartSLOduration=1.756663513 podStartE2EDuration="24.623245985s" podCreationTimestamp="2025-10-11 10:29:21 +0000 UTC" firstStartedPulling="2025-10-11 
10:29:22.119285732 +0000 UTC m=+194.093512173" lastFinishedPulling="2025-10-11 10:29:44.985868204 +0000 UTC m=+216.960094645" observedRunningTime="2025-10-11 10:29:45.607792649 +0000 UTC m=+217.582019100" watchObservedRunningTime="2025-10-11 10:29:45.623245985 +0000 UTC m=+217.597472426" Oct 11 10:29:45.639054 master-1 kubenswrapper[4771]: I1011 10:29:45.638853 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g8tm6" podStartSLOduration=1.89407783 podStartE2EDuration="22.638825634s" podCreationTimestamp="2025-10-11 10:29:23 +0000 UTC" firstStartedPulling="2025-10-11 10:29:24.268890636 +0000 UTC m=+196.243117087" lastFinishedPulling="2025-10-11 10:29:45.01363844 +0000 UTC m=+216.987864891" observedRunningTime="2025-10-11 10:29:45.624383936 +0000 UTC m=+217.598610387" watchObservedRunningTime="2025-10-11 10:29:45.638825634 +0000 UTC m=+217.613052075" Oct 11 10:29:45.639623 master-1 kubenswrapper[4771]: I1011 10:29:45.639574 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" podStartSLOduration=1.639563725 podStartE2EDuration="1.639563725s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:45.638706621 +0000 UTC m=+217.612933082" watchObservedRunningTime="2025-10-11 10:29:45.639563725 +0000 UTC m=+217.613790176" Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: I1011 10:29:45.970624 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: 
[+]process-running ok Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: I1011 10:29:45.970725 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:46.297837 master-2 kubenswrapper[4776]: I1011 10:29:46.297578 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7"} Oct 11 10:29:46.496491 master-1 kubenswrapper[4771]: I1011 10:29:46.496404 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:46.496491 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:46.496491 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:46.496491 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:46.496925 master-1 kubenswrapper[4771]: I1011 10:29:46.496498 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:46.600874 master-1 kubenswrapper[4771]: I1011 10:29:46.600662 4771 generic.go:334] "Generic (PLEG): container finished" podID="bb58d9ff-af20-40c4-9dfe-c9c10fb5c410" containerID="40c63db4dbb96bdb2c0fd427a80f8837a81a5b41a83bc4f90d570adbaaf25d38" exitCode=0 Oct 11 10:29:46.600874 master-1 kubenswrapper[4771]: I1011 10:29:46.600721 
4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-dvv69" event={"ID":"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410","Type":"ContainerDied","Data":"40c63db4dbb96bdb2c0fd427a80f8837a81a5b41a83bc4f90d570adbaaf25d38"} Oct 11 10:29:46.601681 master-1 kubenswrapper[4771]: I1011 10:29:46.601642 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:46.622565 master-1 kubenswrapper[4771]: I1011 10:29:46.622233 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:29:46.860661 master-1 kubenswrapper[4771]: I1011 10:29:46.860595 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:29:46.863886 master-1 kubenswrapper[4771]: I1011 10:29:46.863829 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:46.910894 master-1 kubenswrapper[4771]: I1011 10:29:46.910825 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:29:46.953018 master-1 kubenswrapper[4771]: I1011 10:29:46.952961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:46.953150 master-1 kubenswrapper[4771]: I1011 10:29:46.953119 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" 
(UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: I1011 10:29:46.974560 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:46.975556 master-2 kubenswrapper[4776]: I1011 10:29:46.974721 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:47.055280 master-1 kubenswrapper[4771]: I1011 10:29:47.055201 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:47.055526 master-1 kubenswrapper[4771]: I1011 10:29:47.055286 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:47.055526 master-1 kubenswrapper[4771]: I1011 10:29:47.055444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:47.055802 master-1 kubenswrapper[4771]: I1011 10:29:47.055569 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:47.111395 master-1 kubenswrapper[4771]: I1011 10:29:47.111200 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d7j4"] Oct 11 10:29:47.113096 master-2 kubenswrapper[4776]: I1011 10:29:47.113013 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:29:47.208023 master-1 kubenswrapper[4771]: I1011 10:29:47.207963 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:29:47.498950 master-1 kubenswrapper[4771]: I1011 10:29:47.498825 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:47.498950 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:47.498950 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:47.498950 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:47.498950 master-1 kubenswrapper[4771]: I1011 10:29:47.498883 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:47.616554 master-1 kubenswrapper[4771]: I1011 10:29:47.616464 4771 generic.go:334] "Generic (PLEG): container finished" podID="7662f87a-13ba-439c-b386-05e68284803c" containerID="6597ee1a813020ee9e9d9c3bc4ac9547370cdcefee548bc443d67590ef76026d" exitCode=0 Oct 11 10:29:47.616554 master-1 kubenswrapper[4771]: I1011 10:29:47.616550 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"7662f87a-13ba-439c-b386-05e68284803c","Type":"ContainerDied","Data":"6597ee1a813020ee9e9d9c3bc4ac9547370cdcefee548bc443d67590ef76026d"} Oct 11 10:29:47.618029 master-1 kubenswrapper[4771]: I1011 10:29:47.617987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"bafca73396f947e9fa263ed96b26d1a45ed0144ffb97a2f796fec9628cf617b5"} Oct 11 10:29:47.621801 master-1 kubenswrapper[4771]: I1011 10:29:47.621731 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-dvv69" event={"ID":"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410","Type":"ContainerStarted","Data":"24513fb2040a9a28a68a9c8836d96b175f053d07650962edd7c9621aef3f50f3"}
Oct 11 10:29:47.621801 master-1 kubenswrapper[4771]: I1011 10:29:47.621777 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-dvv69" event={"ID":"bb58d9ff-af20-40c4-9dfe-c9c10fb5c410","Type":"ContainerStarted","Data":"91ae22454328a412be0a05f1050fb49a523649b290c10707524ee5a3197cab20"}
Oct 11 10:29:47.624875 master-1 kubenswrapper[4771]: I1011 10:29:47.624830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" event={"ID":"4893176f-942c-49bf-aaab-9c238ecdaaa7","Type":"ContainerStarted","Data":"0afa205339a7ec15254554c90558c4fb34abcdc63c5bc2d3cadaf2ec152e8a3d"}
Oct 11 10:29:47.668487 master-1 kubenswrapper[4771]: I1011 10:29:47.668384 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs" podStartSLOduration=2.375374159 podStartE2EDuration="3.668340372s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:45.528203345 +0000 UTC m=+217.502429786" lastFinishedPulling="2025-10-11 10:29:46.821169558 +0000 UTC m=+218.795395999" observedRunningTime="2025-10-11 10:29:47.66465646 +0000 UTC m=+219.638882951" watchObservedRunningTime="2025-10-11 10:29:47.668340372 +0000 UTC m=+219.642566833"
Oct 11 10:29:47.680194 master-1 kubenswrapper[4771]: I1011 10:29:47.680086 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-dvv69" podStartSLOduration=2.511971883 podStartE2EDuration="3.680061114s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:44.915424832 +0000 UTC m=+216.889651293" lastFinishedPulling="2025-10-11 10:29:46.083514083 +0000 UTC m=+218.057740524" observedRunningTime="2025-10-11 10:29:47.679172129 +0000 UTC m=+219.653398620" watchObservedRunningTime="2025-10-11 10:29:47.680061114 +0000 UTC m=+219.654287585"
Oct 11 10:29:47.772886 master-1 kubenswrapper[4771]: I1011 10:29:47.772716 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log"
Oct 11 10:29:47.778365 master-2 kubenswrapper[4776]: I1011 10:29:47.778319 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log"
Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: I1011 10:29:47.969297 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:47.969922 master-2 kubenswrapper[4776]: I1011 10:29:47.969426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:48.272298 master-1 kubenswrapper[4771]: I1011 10:29:48.272214 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 11 10:29:48.272298 master-1 kubenswrapper[4771]: I1011 10:29:48.272294 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1"
Oct 11 10:29:48.307046 master-2 kubenswrapper[4776]: I1011 10:29:48.306944 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" gracePeriod=30
Oct 11 10:29:48.498290 master-1 kubenswrapper[4771]: I1011 10:29:48.498089 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:48.498290 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:29:48.498290 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:29:48.498290 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:29:48.498740 master-1 kubenswrapper[4771]: I1011 10:29:48.498275 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:48.634954 master-1 kubenswrapper[4771]: I1011 10:29:48.634874 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log"
Oct 11 10:29:48.636852 master-1 kubenswrapper[4771]: I1011 10:29:48.636793 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="d00f571fb5251acb052a97b0ee5169046519d14d8990c98b7ea440fa842ffd37" exitCode=1
Oct 11 10:29:48.637019 master-1 kubenswrapper[4771]: I1011 10:29:48.636961 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"d00f571fb5251acb052a97b0ee5169046519d14d8990c98b7ea440fa842ffd37"}
Oct 11 10:29:48.637146 master-1 kubenswrapper[4771]: I1011 10:29:48.637086 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" gracePeriod=30
Oct 11 10:29:48.639038 master-1 kubenswrapper[4771]: I1011 10:29:48.638422 4771 scope.go:117] "RemoveContainer" containerID="d00f571fb5251acb052a97b0ee5169046519d14d8990c98b7ea440fa842ffd37"
Oct 11 10:29:48.943194 master-1 kubenswrapper[4771]: I1011 10:29:48.942895 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1"
Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: I1011 10:29:48.969094 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:48.969531 master-2 kubenswrapper[4776]: I1011 10:29:48.969164 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:49.079931 master-1 kubenswrapper[4771]: I1011 10:29:49.079792 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock\") pod \"7662f87a-13ba-439c-b386-05e68284803c\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") "
Oct 11 10:29:49.079931 master-1 kubenswrapper[4771]: I1011 10:29:49.079913 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir\") pod \"7662f87a-13ba-439c-b386-05e68284803c\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") "
Oct 11 10:29:49.080585 master-1 kubenswrapper[4771]: I1011 10:29:49.079994 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock" (OuterVolumeSpecName: "var-lock") pod "7662f87a-13ba-439c-b386-05e68284803c" (UID: "7662f87a-13ba-439c-b386-05e68284803c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:29:49.080585 master-1 kubenswrapper[4771]: I1011 10:29:49.080042 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access\") pod \"7662f87a-13ba-439c-b386-05e68284803c\" (UID: \"7662f87a-13ba-439c-b386-05e68284803c\") "
Oct 11 10:29:49.080585 master-1 kubenswrapper[4771]: I1011 10:29:49.080104 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7662f87a-13ba-439c-b386-05e68284803c" (UID: "7662f87a-13ba-439c-b386-05e68284803c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:29:49.080585 master-1 kubenswrapper[4771]: I1011 10:29:49.080451 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:49.080585 master-1 kubenswrapper[4771]: I1011 10:29:49.080486 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7662f87a-13ba-439c-b386-05e68284803c-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:49.084531 master-1 kubenswrapper[4771]: I1011 10:29:49.084478 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7662f87a-13ba-439c-b386-05e68284803c" (UID: "7662f87a-13ba-439c-b386-05e68284803c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:29:49.182426 master-1 kubenswrapper[4771]: I1011 10:29:49.182223 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7662f87a-13ba-439c-b386-05e68284803c-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 11 10:29:49.283616 master-1 kubenswrapper[4771]: I1011 10:29:49.283506 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:29:49.284200 master-1 kubenswrapper[4771]: E1011 10:29:49.283800 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:30:53.283771733 +0000 UTC m=+285.257998204 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:29:49.384600 master-1 kubenswrapper[4771]: I1011 10:29:49.384540 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:29:49.385148 master-1 kubenswrapper[4771]: E1011 10:29:49.385064 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:30:53.38486314 +0000 UTC m=+285.359089581 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:29:49.496996 master-1 kubenswrapper[4771]: I1011 10:29:49.496862 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:49.496996 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:29:49.496996 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:29:49.496996 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:29:49.497419 master-1 kubenswrapper[4771]: I1011 10:29:49.497384 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:49.628860 master-1 kubenswrapper[4771]: I1011 10:29:49.628751 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:29:49.629188 master-1 kubenswrapper[4771]: I1011 10:29:49.628888 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:29:49.645204 master-1 kubenswrapper[4771]: I1011 10:29:49.645148 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log"
Oct 11 10:29:49.648323 master-1 kubenswrapper[4771]: I1011 10:29:49.648276 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde"}
Oct 11 10:29:49.651299 master-1 kubenswrapper[4771]: I1011 10:29:49.651262 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"7662f87a-13ba-439c-b386-05e68284803c","Type":"ContainerDied","Data":"ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc"}
Oct 11 10:29:49.651444 master-1 kubenswrapper[4771]: I1011 10:29:49.651303 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6acc7afd35b0ef55fa3ef023c51664249170487a6297da51f4f2e72955fbfc"
Oct 11 10:29:49.651444 master-1 kubenswrapper[4771]: I1011 10:29:49.651422 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1"
Oct 11 10:29:49.915701 master-1 kubenswrapper[4771]: I1011 10:29:49.914147 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"]
Oct 11 10:29:49.915701 master-1 kubenswrapper[4771]: E1011 10:29:49.914394 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7662f87a-13ba-439c-b386-05e68284803c" containerName="installer"
Oct 11 10:29:49.915701 master-1 kubenswrapper[4771]: I1011 10:29:49.914407 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7662f87a-13ba-439c-b386-05e68284803c" containerName="installer"
Oct 11 10:29:49.915701 master-1 kubenswrapper[4771]: I1011 10:29:49.914495 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7662f87a-13ba-439c-b386-05e68284803c" containerName="installer"
Oct 11 10:29:49.915701 master-1 kubenswrapper[4771]: I1011 10:29:49.915169 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.919546 master-1 kubenswrapper[4771]: I1011 10:29:49.918984 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Oct 11 10:29:49.920115 master-1 kubenswrapper[4771]: I1011 10:29:49.919827 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Oct 11 10:29:49.920115 master-1 kubenswrapper[4771]: I1011 10:29:49.919933 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ap7ej74ueigk4"
Oct 11 10:29:49.920693 master-1 kubenswrapper[4771]: I1011 10:29:49.920658 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Oct 11 10:29:49.922224 master-1 kubenswrapper[4771]: I1011 10:29:49.922190 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Oct 11 10:29:49.922477 master-2 kubenswrapper[4776]: I1011 10:29:49.922422 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"]
Oct 11 10:29:49.923187 master-2 kubenswrapper[4776]: I1011 10:29:49.923157 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:49.926080 master-2 kubenswrapper[4776]: I1011 10:29:49.926044 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Oct 11 10:29:49.926677 master-1 kubenswrapper[4771]: I1011 10:29:49.926304 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"]
Oct 11 10:29:49.927043 master-2 kubenswrapper[4776]: I1011 10:29:49.927019 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Oct 11 10:29:49.927115 master-2 kubenswrapper[4776]: I1011 10:29:49.927046 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ap7ej74ueigk4"
Oct 11 10:29:49.927211 master-2 kubenswrapper[4776]: I1011 10:29:49.927191 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Oct 11 10:29:49.927260 master-2 kubenswrapper[4776]: I1011 10:29:49.927206 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Oct 11 10:29:49.960055 master-2 kubenswrapper[4776]: I1011 10:29:49.959994 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"]
Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: I1011 10:29:49.968788 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:49.969097 master-2 kubenswrapper[4776]: I1011 10:29:49.968849 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:49.991201 master-1 kubenswrapper[4771]: I1011 10:29:49.991132 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991476 master-1 kubenswrapper[4771]: I1011 10:29:49.991307 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991476 master-1 kubenswrapper[4771]: I1011 10:29:49.991381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991622 master-1 kubenswrapper[4771]: I1011 10:29:49.991479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991622 master-1 kubenswrapper[4771]: I1011 10:29:49.991518 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991622 master-1 kubenswrapper[4771]: I1011 10:29:49.991549 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2hx\" (UniqueName: \"kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:49.991622 master-1 kubenswrapper[4771]: I1011 10:29:49.991585 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092328 master-1 kubenswrapper[4771]: I1011 10:29:50.092253 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092328 master-1 kubenswrapper[4771]: I1011 10:29:50.092320 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092678 master-1 kubenswrapper[4771]: I1011 10:29:50.092399 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092678 master-1 kubenswrapper[4771]: I1011 10:29:50.092428 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092678 master-1 kubenswrapper[4771]: I1011 10:29:50.092451 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2hx\" (UniqueName: \"kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092678 master-1 kubenswrapper[4771]: I1011 10:29:50.092476 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.092678 master-1 kubenswrapper[4771]: I1011 10:29:50.092506 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.093749 master-1 kubenswrapper[4771]: I1011 10:29:50.093589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.094170 master-1 kubenswrapper[4771]: I1011 10:29:50.094081 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.095108 master-1 kubenswrapper[4771]: I1011 10:29:50.095053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.098042 master-1 kubenswrapper[4771]: I1011 10:29:50.097982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.098297 master-1 kubenswrapper[4771]: I1011 10:29:50.098245 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.113729 master-1 kubenswrapper[4771]: I1011 10:29:50.113651 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.117264 master-2 kubenswrapper[4776]: I1011 10:29:50.117199 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117493 master-2 kubenswrapper[4776]: I1011 10:29:50.117466 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117541 master-2 kubenswrapper[4776]: I1011 10:29:50.117503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117583 master-2 kubenswrapper[4776]: I1011 10:29:50.117546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117625 master-2 kubenswrapper[4776]: I1011 10:29:50.117589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117779 master-2 kubenswrapper[4776]: I1011 10:29:50.117648 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.117779 master-2 kubenswrapper[4776]: I1011 10:29:50.117699 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.128094 master-1 kubenswrapper[4771]: I1011 10:29:50.128024 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2hx\" (UniqueName: \"kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx\") pod \"metrics-server-65d86dff78-bg7lk\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.219019 master-2 kubenswrapper[4776]: I1011 10:29:50.218885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219019 master-2 kubenswrapper[4776]: I1011 10:29:50.218946 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219312 master-2 kubenswrapper[4776]: I1011 10:29:50.219270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219354 master-2 kubenswrapper[4776]: I1011 10:29:50.219328 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219387 master-2 kubenswrapper[4776]: I1011 10:29:50.219377 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219418 master-2 kubenswrapper[4776]: I1011 10:29:50.219405 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.219460 master-2 kubenswrapper[4776]: I1011 10:29:50.219444 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.220241 master-2 kubenswrapper[4776]: I1011 10:29:50.220210 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.220592 master-2 kubenswrapper[4776]: I1011 10:29:50.220564 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.220849 master-2 kubenswrapper[4776]: I1011 10:29:50.220806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.222417 master-2 kubenswrapper[4776]: I1011 10:29:50.222389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.223720 master-2 kubenswrapper[4776]: I1011 10:29:50.223695 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.224653 master-2 kubenswrapper[4776]: I1011 10:29:50.224623 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.238740 master-1 kubenswrapper[4771]: I1011 10:29:50.238602 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:29:50.240148 master-2 kubenswrapper[4776]: I1011 10:29:50.240103 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:29:50.259654 master-2 kubenswrapper[4776]: I1011 10:29:50.259609 4776 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.496941 master-1 kubenswrapper[4771]: I1011 10:29:50.496334 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:50.496941 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:50.496941 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:50.496941 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:50.496941 master-1 kubenswrapper[4771]: I1011 10:29:50.496915 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:50.703478 master-1 kubenswrapper[4771]: I1011 10:29:50.703419 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"] Oct 11 10:29:50.706611 master-1 kubenswrapper[4771]: W1011 10:29:50.706527 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaf74cdb_6bdb_465a_8e3e_194e8868570f.slice/crio-307edc8bf8db53981b4988030525d3bc29e6569573860e0ae13cb952073e6408 WatchSource:0}: Error finding container 307edc8bf8db53981b4988030525d3bc29e6569573860e0ae13cb952073e6408: Status 404 returned error can't find the container with id 307edc8bf8db53981b4988030525d3bc29e6569573860e0ae13cb952073e6408 Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: I1011 10:29:50.969806 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:50.971012 master-2 kubenswrapper[4776]: I1011 10:29:50.969912 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:51.292239 master-1 kubenswrapper[4771]: I1011 10:29:51.292163 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 11 10:29:51.292816 master-1 kubenswrapper[4771]: I1011 10:29:51.292777 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:29:51.296015 master-1 kubenswrapper[4771]: I1011 10:29:51.295968 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:29:51.296582 master-1 kubenswrapper[4771]: I1011 10:29:51.296543 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt" Oct 11 10:29:51.306753 master-1 kubenswrapper[4771]: I1011 10:29:51.300531 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 11 10:29:51.417210 master-1 kubenswrapper[4771]: I1011 10:29:51.417100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkcv5\" (UniqueName: \"kubernetes.io/projected/bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe-kube-api-access-vkcv5\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: 
\"bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:29:51.496717 master-1 kubenswrapper[4771]: I1011 10:29:51.496665 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:51.496717 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:51.496717 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:51.496717 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:51.497014 master-1 kubenswrapper[4771]: I1011 10:29:51.496729 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:51.518383 master-1 kubenswrapper[4771]: I1011 10:29:51.518319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkcv5\" (UniqueName: \"kubernetes.io/projected/bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe-kube-api-access-vkcv5\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: \"bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:29:51.551004 master-1 kubenswrapper[4771]: I1011 10:29:51.550861 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkcv5\" (UniqueName: \"kubernetes.io/projected/bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe-kube-api-access-vkcv5\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: \"bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:29:51.606268 master-1 kubenswrapper[4771]: I1011 10:29:51.606201 4771 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:29:51.660084 master-1 kubenswrapper[4771]: I1011 10:29:51.660048 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gwwz9" Oct 11 10:29:51.660174 master-1 kubenswrapper[4771]: I1011 10:29:51.660091 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gwwz9" Oct 11 10:29:51.661982 master-1 kubenswrapper[4771]: I1011 10:29:51.661949 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" event={"ID":"daf74cdb-6bdb-465a-8e3e-194e8868570f","Type":"ContainerStarted","Data":"307edc8bf8db53981b4988030525d3bc29e6569573860e0ae13cb952073e6408"} Oct 11 10:29:51.694028 master-1 kubenswrapper[4771]: I1011 10:29:51.693990 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gwwz9" Oct 11 10:29:51.955887 master-2 kubenswrapper[4776]: I1011 10:29:51.955824 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: I1011 10:29:51.969383 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:51.969585 master-2 kubenswrapper[4776]: I1011 10:29:51.969439 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:51.974245 master-2 kubenswrapper[4776]: W1011 10:29:51.974206 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5473628e_94c8_4706_bb03_ff4836debe5f.slice/crio-8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39 WatchSource:0}: Error finding container 8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39: Status 404 returned error can't find the container with id 8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39 Oct 11 10:29:52.339835 master-2 kubenswrapper[4776]: I1011 10:29:52.339784 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerStarted","Data":"8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"06f6d905d8def3d26d26325a7f849e8c50d0542fa115d03c8952c0b6e0c2eafd"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"cdd2de3fd84dd126318e808cc1775b92798830295eb7937eec44a1eefa782762"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342236 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" 
event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"fb448ff35a94e97cf160c5d407990f97b568932826e467052ce18e54f038fa54"} Oct 11 10:29:52.343850 master-2 kubenswrapper[4776]: I1011 10:29:52.343735 4776 generic.go:334] "Generic (PLEG): container finished" podID="cb15485f-03bd-4281-8626-f35346cf4b0b" containerID="54a301115c8deb7385996e166f74a05ca63fa343e1cab483f27041f8c5a2154c" exitCode=0 Oct 11 10:29:52.343850 master-2 kubenswrapper[4776]: I1011 10:29:52.343796 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerDied","Data":"54a301115c8deb7385996e166f74a05ca63fa343e1cab483f27041f8c5a2154c"} Oct 11 10:29:52.367203 master-2 kubenswrapper[4776]: I1011 10:29:52.367045 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" podStartSLOduration=2.176462263 podStartE2EDuration="8.367020649s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:45.345861685 +0000 UTC m=+220.130288394" lastFinishedPulling="2025-10-11 10:29:51.536420041 +0000 UTC m=+226.320846780" observedRunningTime="2025-10-11 10:29:52.364261134 +0000 UTC m=+227.148687853" watchObservedRunningTime="2025-10-11 10:29:52.367020649 +0000 UTC m=+227.151447368" Oct 11 10:29:52.500844 master-1 kubenswrapper[4771]: I1011 10:29:52.500704 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:52.500844 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:52.500844 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:52.500844 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:52.501420 master-1 
kubenswrapper[4771]: I1011 10:29:52.500894 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:52.641449 master-1 kubenswrapper[4771]: I1011 10:29:52.641395 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:52.641449 master-1 kubenswrapper[4771]: I1011 10:29:52.641458 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:52.677959 master-1 kubenswrapper[4771]: I1011 10:29:52.677907 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:52.722062 master-1 kubenswrapper[4771]: I1011 10:29:52.722025 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:29:52.732775 master-1 kubenswrapper[4771]: I1011 10:29:52.732734 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gwwz9" Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: I1011 10:29:52.969827 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:52.970239 master-2 kubenswrapper[4776]: I1011 10:29:52.969921 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:53.272186 master-1 kubenswrapper[4771]: I1011 10:29:53.272099 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 11 10:29:53.352616 master-2 kubenswrapper[4776]: I1011 10:29:53.352539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"81d0e718b95a9649bb78b0d797f1b25b8834ef71dd2b7ad870b6bc73505c9984"} Oct 11 10:29:53.352616 master-2 kubenswrapper[4776]: I1011 10:29:53.352624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"8c2043e5a51573290d304377951d953849d43114b0c6805feb6ffa45dc2b9f39"} Oct 11 10:29:53.375440 master-2 kubenswrapper[4776]: I1011 10:29:53.374003 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-x7xhm" podStartSLOduration=2.763992289 podStartE2EDuration="9.373985057s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:44.92446315 +0000 UTC m=+219.708889849" lastFinishedPulling="2025-10-11 10:29:51.534455868 +0000 UTC m=+226.318882617" observedRunningTime="2025-10-11 10:29:53.373265037 +0000 UTC m=+228.157691746" watchObservedRunningTime="2025-10-11 10:29:53.373985057 +0000 UTC m=+228.158411786" Oct 11 10:29:53.497551 master-1 kubenswrapper[4771]: I1011 10:29:53.497473 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:29:53.497551 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:53.497551 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:53.497551 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:53.497949 master-1 kubenswrapper[4771]: I1011 10:29:53.497562 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:53.699991 master-2 kubenswrapper[4776]: I1011 10:29:53.699905 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:53.702804 master-2 kubenswrapper[4776]: I1011 10:29:53.701224 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.719366 master-2 kubenswrapper[4776]: I1011 10:29:53.719334 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:53.840920 master-1 kubenswrapper[4771]: I1011 10:29:53.840842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:53.840920 master-1 kubenswrapper[4771]: I1011 10:29:53.840900 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:53.874168 master-2 kubenswrapper[4776]: I1011 10:29:53.874095 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " 
pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.874376 master-2 kubenswrapper[4776]: I1011 10:29:53.874225 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.876043 master-1 kubenswrapper[4771]: I1011 10:29:53.875971 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: I1011 10:29:53.968454 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: I1011 10:29:53.968513 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:53.974952 master-2 kubenswrapper[4776]: I1011 10:29:53.974912 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: 
\"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.975002 master-2 kubenswrapper[4776]: I1011 10:29:53.974962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.978031 master-2 kubenswrapper[4776]: I1011 10:29:53.977998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.978140 master-1 kubenswrapper[4771]: E1011 10:29:53.978087 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5268b2f2ae2aef0c7f2e7a6e651ed702.slice/crio-af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:29:53.992567 master-2 kubenswrapper[4776]: I1011 10:29:53.992490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:54.029079 master-2 kubenswrapper[4776]: I1011 10:29:54.029049 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:54.249400 master-2 kubenswrapper[4776]: I1011 10:29:54.249346 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:54.253230 master-2 kubenswrapper[4776]: W1011 10:29:54.253174 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2221fbca_4225_4685_9878_86ab81050ad4.slice/crio-0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4 WatchSource:0}: Error finding container 0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4: Status 404 returned error can't find the container with id 0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4 Oct 11 10:29:54.362367 master-2 kubenswrapper[4776]: I1011 10:29:54.362280 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerStarted","Data":"bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0"} Oct 11 10:29:54.362940 master-2 kubenswrapper[4776]: I1011 10:29:54.362370 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:54.363713 master-2 kubenswrapper[4776]: I1011 10:29:54.363631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4"} Oct 11 10:29:54.383138 master-2 kubenswrapper[4776]: I1011 10:29:54.383029 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podStartSLOduration=4.042096784 
podStartE2EDuration="5.383010451s" podCreationTimestamp="2025-10-11 10:29:49 +0000 UTC" firstStartedPulling="2025-10-11 10:29:51.976543425 +0000 UTC m=+226.760970154" lastFinishedPulling="2025-10-11 10:29:53.317457112 +0000 UTC m=+228.101883821" observedRunningTime="2025-10-11 10:29:54.380629516 +0000 UTC m=+229.165056235" watchObservedRunningTime="2025-10-11 10:29:54.383010451 +0000 UTC m=+229.167437170" Oct 11 10:29:54.434294 master-2 kubenswrapper[4776]: E1011 10:29:54.434190 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.435661 master-2 kubenswrapper[4776]: E1011 10:29:54.435603 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.437415 master-2 kubenswrapper[4776]: E1011 10:29:54.437345 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.437472 master-2 kubenswrapper[4776]: E1011 10:29:54.437409 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" 
containerName="kube-multus-additional-cni-plugins" Oct 11 10:29:54.452249 master-1 kubenswrapper[4771]: E1011 10:29:54.452164 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.454198 master-1 kubenswrapper[4771]: E1011 10:29:54.454128 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.455930 master-1 kubenswrapper[4771]: E1011 10:29:54.455891 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.456041 master-1 kubenswrapper[4771]: E1011 10:29:54.455935 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins" Oct 11 10:29:54.498794 master-1 kubenswrapper[4771]: I1011 10:29:54.498723 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:29:54.498794 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:54.498794 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:54.498794 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:54.499164 master-1 kubenswrapper[4771]: I1011 10:29:54.498844 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:54.629730 master-1 kubenswrapper[4771]: I1011 10:29:54.629635 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:29:54.629985 master-1 kubenswrapper[4771]: I1011 10:29:54.629731 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:29:54.679390 master-1 kubenswrapper[4771]: I1011 10:29:54.679307 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:29:54.682476 master-1 kubenswrapper[4771]: I1011 10:29:54.682426 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log" Oct 11 10:29:54.685376 master-1 kubenswrapper[4771]: I1011 10:29:54.685286 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" exitCode=1 Oct 11 
10:29:54.685516 master-1 kubenswrapper[4771]: I1011 10:29:54.685409 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde"} Oct 11 10:29:54.685516 master-1 kubenswrapper[4771]: I1011 10:29:54.685499 4771 scope.go:117] "RemoveContainer" containerID="d00f571fb5251acb052a97b0ee5169046519d14d8990c98b7ea440fa842ffd37" Oct 11 10:29:54.686638 master-1 kubenswrapper[4771]: I1011 10:29:54.686584 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:29:54.687193 master-1 kubenswrapper[4771]: E1011 10:29:54.687127 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-1_openshift-etcd(5268b2f2ae2aef0c7f2e7a6e651ed702)\"" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" Oct 11 10:29:54.754406 master-1 kubenswrapper[4771]: I1011 10:29:54.754196 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59"] Oct 11 10:29:54.755818 master-1 kubenswrapper[4771]: I1011 10:29:54.755764 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.758295 master-1 kubenswrapper[4771]: I1011 10:29:54.758239 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:29:54.759152 master-1 kubenswrapper[4771]: I1011 10:29:54.759080 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Oct 11 10:29:54.759235 master-1 kubenswrapper[4771]: I1011 10:29:54.759080 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Oct 11 10:29:54.759389 master-1 kubenswrapper[4771]: I1011 10:29:54.759313 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Oct 11 10:29:54.761013 master-1 kubenswrapper[4771]: I1011 10:29:54.760949 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Oct 11 10:29:54.761196 master-1 kubenswrapper[4771]: I1011 10:29:54.761023 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Oct 11 10:29:54.765955 master-1 kubenswrapper[4771]: I1011 10:29:54.765903 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59"] Oct 11 10:29:54.769973 master-1 kubenswrapper[4771]: I1011 10:29:54.769927 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" Oct 11 10:29:54.895205 master-1 kubenswrapper[4771]: I1011 10:29:54.895128 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 11 10:29:54.899626 master-1 kubenswrapper[4771]: I1011 10:29:54.899502 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hzt\" (UniqueName: \"kubernetes.io/projected/24ee422a-a8f9-436d-b2be-ee2cfa387868-kube-api-access-p6hzt\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.899728 master-1 kubenswrapper[4771]: I1011 10:29:54.899662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900066 master-1 kubenswrapper[4771]: I1011 10:29:54.899984 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900143 master-1 kubenswrapper[4771]: I1011 10:29:54.900109 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900223 master-1 kubenswrapper[4771]: I1011 10:29:54.900169 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900339 master-1 kubenswrapper[4771]: I1011 10:29:54.900292 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-serving-certs-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900517 master-1 kubenswrapper[4771]: I1011 10:29:54.900468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-metrics-client-ca\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.900600 master-1 kubenswrapper[4771]: I1011 10:29:54.900527 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-federate-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: I1011 10:29:54.969854 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: 
[-]has-synced failed: reason withheld Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:54.970284 master-2 kubenswrapper[4776]: I1011 10:29:54.970255 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:55.001561 master-1 kubenswrapper[4771]: I1011 10:29:55.001454 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hzt\" (UniqueName: \"kubernetes.io/projected/24ee422a-a8f9-436d-b2be-ee2cfa387868-kube-api-access-p6hzt\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.001561 master-1 kubenswrapper[4771]: I1011 10:29:55.001548 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.001776 master-1 kubenswrapper[4771]: I1011 10:29:55.001600 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.001776 master-1 kubenswrapper[4771]: I1011 10:29:55.001644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.001776 master-1 kubenswrapper[4771]: I1011 10:29:55.001678 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.001776 master-1 kubenswrapper[4771]: I1011 10:29:55.001716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-serving-certs-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.002078 master-1 kubenswrapper[4771]: I1011 10:29:55.001788 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-metrics-client-ca\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.002078 master-1 kubenswrapper[4771]: I1011 10:29:55.001833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-federate-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " 
pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.003438 master-1 kubenswrapper[4771]: I1011 10:29:55.003325 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-metrics-client-ca\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.004101 master-1 kubenswrapper[4771]: I1011 10:29:55.004017 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-serving-certs-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.004546 master-1 kubenswrapper[4771]: I1011 10:29:55.004426 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.007039 master-1 kubenswrapper[4771]: I1011 10:29:55.006982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.007223 master-1 kubenswrapper[4771]: I1011 10:29:55.007157 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: 
\"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-telemeter-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.007932 master-1 kubenswrapper[4771]: I1011 10:29:55.007864 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-federate-client-tls\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.008780 master-1 kubenswrapper[4771]: I1011 10:29:55.008725 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24ee422a-a8f9-436d-b2be-ee2cfa387868-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.036222 master-1 kubenswrapper[4771]: I1011 10:29:55.036133 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hzt\" (UniqueName: \"kubernetes.io/projected/24ee422a-a8f9-436d-b2be-ee2cfa387868-kube-api-access-p6hzt\") pod \"telemeter-client-5b5c6cc5dd-rhh59\" (UID: \"24ee422a-a8f9-436d-b2be-ee2cfa387868\") " pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.129902 master-1 kubenswrapper[4771]: I1011 10:29:55.129814 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" Oct 11 10:29:55.373333 master-2 kubenswrapper[4776]: I1011 10:29:55.373152 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"9469a91709839fac0cc396b2e2a4c0dd37bd9a803d5418f5ea9f88f855d63e81"} Oct 11 10:29:55.373333 master-2 kubenswrapper[4776]: I1011 10:29:55.373219 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"2c2a3e675a4041f09f4ce3a6aabd9daff8fd9052dd7ca6fdda6f9f80c3f22749"} Oct 11 10:29:55.394427 master-2 kubenswrapper[4776]: I1011 10:29:55.394323 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" podStartSLOduration=2.394302946 podStartE2EDuration="2.394302946s" podCreationTimestamp="2025-10-11 10:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:55.392428095 +0000 UTC m=+230.176854834" watchObservedRunningTime="2025-10-11 10:29:55.394302946 +0000 UTC m=+230.178729675" Oct 11 10:29:55.427852 master-2 kubenswrapper[4776]: I1011 10:29:55.427777 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:29:55.428259 master-2 kubenswrapper[4776]: I1011 10:29:55.428152 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" containerID="cri-o://38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3" gracePeriod=30 
Oct 11 10:29:55.428577 master-2 kubenswrapper[4776]: I1011 10:29:55.428481 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" containerID="cri-o://4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615" gracePeriod=30 Oct 11 10:29:55.454477 master-1 kubenswrapper[4771]: I1011 10:29:55.454402 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf"] Oct 11 10:29:55.455539 master-1 kubenswrapper[4771]: I1011 10:29:55.455509 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.459089 master-1 kubenswrapper[4771]: I1011 10:29:55.459030 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 11 10:29:55.463013 master-1 kubenswrapper[4771]: I1011 10:29:55.462961 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf"] Oct 11 10:29:55.499161 master-1 kubenswrapper[4771]: I1011 10:29:55.498672 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:55.499161 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:55.499161 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:55.499161 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:55.499161 master-1 kubenswrapper[4771]: I1011 10:29:55.498778 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" 
podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:55.609052 master-1 kubenswrapper[4771]: I1011 10:29:55.608951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ce7321b-beff-4c96-9998-a3177ac79f36-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.609052 master-1 kubenswrapper[4771]: I1011 10:29:55.609044 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvhg6\" (UniqueName: \"kubernetes.io/projected/5ce7321b-beff-4c96-9998-a3177ac79f36-kube-api-access-zvhg6\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.710902 master-1 kubenswrapper[4771]: I1011 10:29:55.710738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ce7321b-beff-4c96-9998-a3177ac79f36-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.710902 master-1 kubenswrapper[4771]: I1011 10:29:55.710821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvhg6\" (UniqueName: \"kubernetes.io/projected/5ce7321b-beff-4c96-9998-a3177ac79f36-kube-api-access-zvhg6\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.718835 master-1 
kubenswrapper[4771]: I1011 10:29:55.718159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5ce7321b-beff-4c96-9998-a3177ac79f36-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.734622 master-1 kubenswrapper[4771]: I1011 10:29:55.734560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvhg6\" (UniqueName: \"kubernetes.io/projected/5ce7321b-beff-4c96-9998-a3177ac79f36-kube-api-access-zvhg6\") pod \"multus-admission-controller-7b6b7bb859-rwvpf\" (UID: \"5ce7321b-beff-4c96-9998-a3177ac79f36\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.775778 master-1 kubenswrapper[4771]: I1011 10:29:55.775683 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: I1011 10:29:55.969001 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: I1011 10:29:55.969065 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:56.240607 master-1 
kubenswrapper[4771]: I1011 10:29:56.240531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:29:56.241458 master-1 kubenswrapper[4771]: E1011 10:29:56.240776 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:56.241458 master-1 kubenswrapper[4771]: E1011 10:29:56.240924 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:31:00.240893599 +0000 UTC m=+292.215120060 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found Oct 11 10:29:56.382662 master-2 kubenswrapper[4776]: I1011 10:29:56.382481 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerID="4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615" exitCode=0 Oct 11 10:29:56.382662 master-2 kubenswrapper[4776]: I1011 10:29:56.382586 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615"} Oct 11 10:29:56.497763 master-1 kubenswrapper[4771]: I1011 10:29:56.497571 4771 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:56.497763 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:56.497763 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:56.497763 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:56.497763 master-1 kubenswrapper[4771]: I1011 10:29:56.497670 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:56.719730 master-1 kubenswrapper[4771]: I1011 10:29:56.718759 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" event={"ID":"daf74cdb-6bdb-465a-8e3e-194e8868570f","Type":"ContainerStarted","Data":"d71774e5747fba198d1f1c685867c43372766be8110c50262b34cb5aee247b7d"} Oct 11 10:29:56.719730 master-1 kubenswrapper[4771]: I1011 10:29:56.718914 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" Oct 11 10:29:56.720746 master-1 kubenswrapper[4771]: I1011 10:29:56.720705 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:29:56.955652 master-1 kubenswrapper[4771]: I1011 10:29:56.955531 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podStartSLOduration=2.106971813 podStartE2EDuration="7.955495798s" podCreationTimestamp="2025-10-11 10:29:49 +0000 UTC" firstStartedPulling="2025-10-11 10:29:50.70963954 +0000 UTC m=+222.683866011" lastFinishedPulling="2025-10-11 10:29:56.558163535 +0000 UTC 
m=+228.532389996" observedRunningTime="2025-10-11 10:29:56.737878579 +0000 UTC m=+228.712105030" watchObservedRunningTime="2025-10-11 10:29:56.955495798 +0000 UTC m=+228.929722269" Oct 11 10:29:56.959735 master-1 kubenswrapper[4771]: I1011 10:29:56.959694 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 11 10:29:57.012719 master-1 kubenswrapper[4771]: I1011 10:29:57.012654 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf"] Oct 11 10:29:57.016377 master-1 kubenswrapper[4771]: I1011 10:29:57.015692 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59"] Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: I1011 10:29:57.031752 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:57.032240 master-1 kubenswrapper[4771]: W1011 10:29:57.032198 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ee422a_a8f9_436d_b2be_ee2cfa387868.slice/crio-a4590572e64d77528c45fc16250bd4f7576fcf527774c6298823721b8c5268fd WatchSource:0}: Error finding container a4590572e64d77528c45fc16250bd4f7576fcf527774c6298823721b8c5268fd: Status 404 returned error can't find the container with id a4590572e64d77528c45fc16250bd4f7576fcf527774c6298823721b8c5268fd Oct 11 10:29:57.032278 master-2 kubenswrapper[4776]: I1011 10:29:57.031846 4776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:57.499843 master-1 kubenswrapper[4771]: I1011 10:29:57.499771 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:57.499843 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:29:57.499843 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:29:57.499843 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:29:57.501095 master-1 kubenswrapper[4771]: I1011 10:29:57.499873 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:57.732337 master-1 kubenswrapper[4771]: I1011 10:29:57.732240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" event={"ID":"5ce7321b-beff-4c96-9998-a3177ac79f36","Type":"ContainerStarted","Data":"965a15ffe7b54d5e8a5dee16cc0508a1fdef2cf049f7f7b216482accd5552ba0"} Oct 11 10:29:57.734308 master-1 kubenswrapper[4771]: I1011 10:29:57.734251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" event={"ID":"bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe","Type":"ContainerStarted","Data":"f0fea7d6a8eafab0f8489dde0fdb5801048d55c50b2e9df820711886dd128bb4"} Oct 11 10:29:57.736267 master-1 kubenswrapper[4771]: I1011 10:29:57.736166 4771 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" 
containerID="2a73de07f276bd8a0b93475494fdae31f01c7c950b265a424f35d3d72462410c" exitCode=0
Oct 11 10:29:57.736351 master-1 kubenswrapper[4771]: I1011 10:29:57.736291 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerDied","Data":"2a73de07f276bd8a0b93475494fdae31f01c7c950b265a424f35d3d72462410c"}
Oct 11 10:29:57.738755 master-1 kubenswrapper[4771]: I1011 10:29:57.738703 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" event={"ID":"24ee422a-a8f9-436d-b2be-ee2cfa387868","Type":"ContainerStarted","Data":"a4590572e64d77528c45fc16250bd4f7576fcf527774c6298823721b8c5268fd"}
Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: I1011 10:29:57.968905 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:57.969650 master-2 kubenswrapper[4776]: I1011 10:29:57.968963 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:58.273088 master-1 kubenswrapper[4771]: I1011 10:29:58.273019 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 11 10:29:58.273247 master-1 kubenswrapper[4771]: I1011 10:29:58.273112 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 11 10:29:58.273247 master-1 kubenswrapper[4771]: I1011 10:29:58.273137 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 11 10:29:58.274204 master-1 kubenswrapper[4771]: I1011 10:29:58.274161 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde"
Oct 11 10:29:58.274672 master-1 kubenswrapper[4771]: E1011 10:29:58.274628 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-1_openshift-etcd(5268b2f2ae2aef0c7f2e7a6e651ed702)\"" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702"
Oct 11 10:29:58.498988 master-1 kubenswrapper[4771]: I1011 10:29:58.498892 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:58.498988 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:29:58.498988 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:29:58.498988 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:29:58.499309 master-1 kubenswrapper[4771]: I1011 10:29:58.499033 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:58.668918 master-1 kubenswrapper[4771]: I1011 10:29:58.668857 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:29:58.669449 master-1 kubenswrapper[4771]: E1011 10:29:58.669009 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:29:58.669449 master-1 kubenswrapper[4771]: E1011 10:29:58.669085 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:31:02.669066365 +0000 UTC m=+294.643292806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:29:58.704426 master-2 kubenswrapper[4776]: I1011 10:29:58.704136 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"
Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: I1011 10:29:58.968771 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:58.969792 master-2 kubenswrapper[4776]: I1011 10:29:58.969758 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:59.497435 master-1 kubenswrapper[4771]: I1011 10:29:59.497314 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:59.497435 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:29:59.497435 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:29:59.497435 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:29:59.497435 master-1 kubenswrapper[4771]: I1011 10:29:59.497430 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:29:59.630350 master-1 kubenswrapper[4771]: I1011 10:29:59.630234 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:29:59.630677 master-1 kubenswrapper[4771]: I1011 10:29:59.630386 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:29:59.755019 master-1 kubenswrapper[4771]: I1011 10:29:59.754874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" event={"ID":"bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe","Type":"ContainerStarted","Data":"bb2efb4d50d7aa4eebda9d5e309a23e278332e08f60110025acc022441421550"}
Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: I1011 10:29:59.969840 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: I1011 10:29:59.969907 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:00.167268 master-1 kubenswrapper[4771]: I1011 10:30:00.167180 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"]
Oct 11 10:30:00.168145 master-1 kubenswrapper[4771]: I1011 10:30:00.168090 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.171893 master-1 kubenswrapper[4771]: I1011 10:30:00.171833 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 10:30:00.171893 master-1 kubenswrapper[4771]: I1011 10:30:00.171890 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 11 10:30:00.178027 master-1 kubenswrapper[4771]: I1011 10:30:00.177922 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"]
Oct 11 10:30:00.191441 master-1 kubenswrapper[4771]: I1011 10:30:00.191378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf56n\" (UniqueName: \"kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.191799 master-1 kubenswrapper[4771]: I1011 10:30:00.191725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.191966 master-1 kubenswrapper[4771]: I1011 10:30:00.191915 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.293210 master-1 kubenswrapper[4771]: I1011 10:30:00.293089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf56n\" (UniqueName: \"kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.293556 master-1 kubenswrapper[4771]: I1011 10:30:00.293258 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.293556 master-1 kubenswrapper[4771]: I1011 10:30:00.293329 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.295022 master-1 kubenswrapper[4771]: I1011 10:30:00.294958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.298895 master-1 kubenswrapper[4771]: I1011 10:30:00.298831 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.324337 master-1 kubenswrapper[4771]: I1011 10:30:00.324242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf56n\" (UniqueName: \"kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n\") pod \"collect-profiles-29336310-8nc4v\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.403891 master-2 kubenswrapper[4776]: I1011 10:30:00.403843 4776 generic.go:334] "Generic (PLEG): container finished" podID="59763d5b-237f-4095-bf52-86bb0154381c" containerID="3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170" exitCode=0
Oct 11 10:30:00.403891 master-2 kubenswrapper[4776]: I1011 10:30:00.403889 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerDied","Data":"3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170"}
Oct 11 10:30:00.404345 master-2 kubenswrapper[4776]: I1011 10:30:00.404324 4776 scope.go:117] "RemoveContainer" containerID="3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170"
Oct 11 10:30:00.491340 master-1 kubenswrapper[4771]: I1011 10:30:00.491151 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:00.510781 master-1 kubenswrapper[4771]: I1011 10:30:00.510689 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:00.510781 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:00.510781 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:00.510781 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:00.511149 master-1 kubenswrapper[4771]: I1011 10:30:00.510778 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:00.778437 master-1 kubenswrapper[4771]: I1011 10:30:00.778234 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"0400db595d18039edaf6ab7ccb3c1b1a3510ae9588fc33a6a91a15e993a6d1a4"}
Oct 11 10:30:00.779207 master-1 kubenswrapper[4771]: I1011 10:30:00.778529 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 11 10:30:00.785156 master-1 kubenswrapper[4771]: I1011 10:30:00.785103 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 11 10:30:00.797546 master-1 kubenswrapper[4771]: I1011 10:30:00.797435 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podStartSLOduration=9.797402467 podStartE2EDuration="9.797402467s" podCreationTimestamp="2025-10-11 10:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:00.795023551 +0000 UTC m=+232.769250032" watchObservedRunningTime="2025-10-11 10:30:00.797402467 +0000 UTC m=+232.771628938"
Oct 11 10:30:00.939170 master-1 kubenswrapper[4771]: I1011 10:30:00.939092 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"]
Oct 11 10:30:00.946657 master-1 kubenswrapper[4771]: W1011 10:30:00.946582 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4904d67_3c44_40d9_8ea8_026d727e9486.slice/crio-fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d WatchSource:0}: Error finding container fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d: Status 404 returned error can't find the container with id fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d
Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: I1011 10:30:00.969071 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: I1011 10:30:00.969207 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:01.496869 master-1 kubenswrapper[4771]: I1011 10:30:01.496783 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:01.496869 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:01.496869 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:01.496869 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:01.497223 master-1 kubenswrapper[4771]: I1011 10:30:01.496880 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:01.785228 master-1 kubenswrapper[4771]: I1011 10:30:01.785162 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" event={"ID":"f4904d67-3c44-40d9-8ea8-026d727e9486","Type":"ContainerStarted","Data":"fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d"}
Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: I1011 10:30:01.971923 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: I1011 10:30:01.972019 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:02.418017 master-2 kubenswrapper[4776]: I1011 10:30:02.417972 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"a6ee38636b55ee7ea51eb00b2f2e1868a2169757d3514e2b54cde6a87e060504"}
Oct 11 10:30:02.496073 master-1 kubenswrapper[4771]: I1011 10:30:02.495944 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:02.496073 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:02.496073 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:02.496073 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:02.496073 master-1 kubenswrapper[4771]: I1011 10:30:02.496026 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:02.794711 master-1 kubenswrapper[4771]: I1011 10:30:02.794514 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" event={"ID":"f4904d67-3c44-40d9-8ea8-026d727e9486","Type":"ContainerStarted","Data":"ea50bb78d4de53e43e9be3f2830fede428957c124838ed0305c9a99b641c0252"}
Oct 11 10:30:02.798381 master-1 kubenswrapper[4771]: I1011 10:30:02.798251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"27a52449e5ec1bd52177b8ae4e5229c8bc4e5a7be149b07a0e7cb307be3932da"}
Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: I1011 10:30:02.969426 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: I1011 10:30:02.969503 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:03.498124 master-1 kubenswrapper[4771]: I1011 10:30:03.498047 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:03.498124 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:03.498124 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:03.498124 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:03.498570 master-1 kubenswrapper[4771]: I1011 10:30:03.498141 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:03.822949 master-1 kubenswrapper[4771]: I1011 10:30:03.822790 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" podStartSLOduration=3.822765536 podStartE2EDuration="3.822765536s" podCreationTimestamp="2025-10-11 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:03.821218814 +0000 UTC m=+235.795445345" watchObservedRunningTime="2025-10-11 10:30:03.822765536 +0000 UTC m=+235.796992017"
Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: I1011 10:30:03.970445 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: I1011 10:30:03.970536 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:04.435931 master-2 kubenswrapper[4776]: E1011 10:30:04.435720 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.438117 master-2 kubenswrapper[4776]: E1011 10:30:04.438066 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.441266 master-2 kubenswrapper[4776]: E1011 10:30:04.441201 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.441341 master-2 kubenswrapper[4776]: E1011 10:30:04.441287 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins"
Oct 11 10:30:04.452594 master-1 kubenswrapper[4771]: E1011 10:30:04.452443 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.454777 master-1 kubenswrapper[4771]: E1011 10:30:04.454739 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.456272 master-1 kubenswrapper[4771]: E1011 10:30:04.456230 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"]
Oct 11 10:30:04.456331 master-1 kubenswrapper[4771]: E1011 10:30:04.456273 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins"
Oct 11 10:30:04.499378 master-1 kubenswrapper[4771]: I1011 10:30:04.499299 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:04.499378 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:04.499378 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:04.499378 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:04.499706 master-1 kubenswrapper[4771]: I1011 10:30:04.499424 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:04.631181 master-1 kubenswrapper[4771]: I1011 10:30:04.630604 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:30:04.631181 master-1 kubenswrapper[4771]: I1011 10:30:04.630693 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:30:04.809374 master-1 kubenswrapper[4771]: I1011 10:30:04.809277 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4904d67-3c44-40d9-8ea8-026d727e9486" containerID="ea50bb78d4de53e43e9be3f2830fede428957c124838ed0305c9a99b641c0252" exitCode=0
Oct 11 10:30:04.809618 master-1 kubenswrapper[4771]: I1011 10:30:04.809376 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" event={"ID":"f4904d67-3c44-40d9-8ea8-026d727e9486","Type":"ContainerDied","Data":"ea50bb78d4de53e43e9be3f2830fede428957c124838ed0305c9a99b641c0252"}
Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: I1011 10:30:04.972004 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: I1011 10:30:04.972088 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:05.497125 master-1 kubenswrapper[4771]: I1011 10:30:05.496944 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:05.497125 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:05.497125 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:05.497125 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:05.497125 master-1 kubenswrapper[4771]: I1011 10:30:05.497064 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:05.818949 master-1 kubenswrapper[4771]: I1011 10:30:05.818871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"4f12c3536caf37d890a386fecb2c94e5fc57775602e9a539771326b213c3ae7e"}
Oct 11 10:30:05.837849 master-1 kubenswrapper[4771]: I1011 10:30:05.837758 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=19.837729302 podStartE2EDuration="19.837729302s" podCreationTimestamp="2025-10-11 10:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:05.83655462 +0000 UTC m=+237.810781131" watchObservedRunningTime="2025-10-11 10:30:05.837729302 +0000 UTC m=+237.811955753"
Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: I1011 10:30:05.969180 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: I1011 10:30:05.969266 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:06.497066 master-1 kubenswrapper[4771]: I1011 10:30:06.496937 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:06.497066 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:06.497066 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:06.497066 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:06.497066 master-1 kubenswrapper[4771]: I1011 10:30:06.497041 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:06.678843 master-1 kubenswrapper[4771]: I1011 10:30:06.678798 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"
Oct 11 10:30:06.690002 master-1 kubenswrapper[4771]: I1011 10:30:06.689956 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume\") pod \"f4904d67-3c44-40d9-8ea8-026d727e9486\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") "
Oct 11 10:30:06.690166 master-1 kubenswrapper[4771]: I1011 10:30:06.690045 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume\") pod \"f4904d67-3c44-40d9-8ea8-026d727e9486\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") "
Oct 11 10:30:06.690207 master-1 kubenswrapper[4771]: I1011 10:30:06.690190 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf56n\" (UniqueName: \"kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n\") pod \"f4904d67-3c44-40d9-8ea8-026d727e9486\" (UID: \"f4904d67-3c44-40d9-8ea8-026d727e9486\") "
Oct 11 10:30:06.690653 master-1 kubenswrapper[4771]: I1011 10:30:06.690609 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4904d67-3c44-40d9-8ea8-026d727e9486" (UID: "f4904d67-3c44-40d9-8ea8-026d727e9486"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:30:06.693568 master-1 kubenswrapper[4771]: I1011 10:30:06.693511 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4904d67-3c44-40d9-8ea8-026d727e9486" (UID: "f4904d67-3c44-40d9-8ea8-026d727e9486"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:30:06.695405 master-1 kubenswrapper[4771]: I1011 10:30:06.694598 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n" (OuterVolumeSpecName: "kube-api-access-cf56n") pod "f4904d67-3c44-40d9-8ea8-026d727e9486" (UID: "f4904d67-3c44-40d9-8ea8-026d727e9486"). InnerVolumeSpecName "kube-api-access-cf56n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:06.792700 master-1 kubenswrapper[4771]: I1011 10:30:06.792557 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4904d67-3c44-40d9-8ea8-026d727e9486-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:06.792700 master-1 kubenswrapper[4771]: I1011 10:30:06.792621 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4904d67-3c44-40d9-8ea8-026d727e9486-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:06.792700 master-1 kubenswrapper[4771]: I1011 10:30:06.792645 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf56n\" (UniqueName: \"kubernetes.io/projected/f4904d67-3c44-40d9-8ea8-026d727e9486-kube-api-access-cf56n\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:06.826783 master-1 kubenswrapper[4771]: I1011 10:30:06.826570 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" event={"ID":"f4904d67-3c44-40d9-8ea8-026d727e9486","Type":"ContainerDied","Data":"fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d"} Oct 11 10:30:06.826783 master-1 kubenswrapper[4771]: I1011 10:30:06.826624 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v" Oct 11 10:30:06.826783 master-1 kubenswrapper[4771]: I1011 10:30:06.826637 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9abe4abde9e0f6d44c16dbc1ef22f8a3f53fff1d42a62d4fb8563051892a8d" Oct 11 10:30:06.826783 master-1 kubenswrapper[4771]: I1011 10:30:06.826759 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: I1011 10:30:06.970419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: I1011 10:30:06.970483 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:07.497808 master-1 kubenswrapper[4771]: I1011 10:30:07.497694 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:07.497808 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:07.497808 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:07.497808 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:07.497808 master-1 kubenswrapper[4771]: I1011 10:30:07.497793 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: I1011 10:30:07.968602 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:07.969011 master-2 kubenswrapper[4776]: I1011 10:30:07.968665 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:08.496800 master-1 kubenswrapper[4771]: I1011 10:30:08.496725 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:08.496800 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 
10:30:08.496800 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:08.496800 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:08.497248 master-1 kubenswrapper[4771]: I1011 10:30:08.496818 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: I1011 10:30:08.969456 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:08.970401 master-2 kubenswrapper[4776]: I1011 10:30:08.969518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:09.438037 master-1 kubenswrapper[4771]: I1011 10:30:09.437852 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:30:09.496908 master-1 kubenswrapper[4771]: I1011 10:30:09.496843 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:09.496908 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:09.496908 master-1 
kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:09.496908 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:09.497273 master-1 kubenswrapper[4771]: I1011 10:30:09.496912 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:09.631414 master-1 kubenswrapper[4771]: I1011 10:30:09.631313 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:30:09.631414 master-1 kubenswrapper[4771]: I1011 10:30:09.631395 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: I1011 10:30:09.969242 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:09.969555 master-2 kubenswrapper[4776]: I1011 10:30:09.969315 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:10.497071 master-1 kubenswrapper[4771]: I1011 10:30:10.496981 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:10.497071 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:10.497071 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:10.497071 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:10.498186 master-1 kubenswrapper[4771]: I1011 10:30:10.497091 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: I1011 10:30:10.969694 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:10.970429 master-2 kubenswrapper[4776]: I1011 10:30:10.969769 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:11.496858 master-1 kubenswrapper[4771]: I1011 10:30:11.496789 4771 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:11.496858 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:11.496858 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:11.496858 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:11.497499 master-1 kubenswrapper[4771]: I1011 10:30:11.496869 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: I1011 10:30:11.970086 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:11.971244 master-2 kubenswrapper[4776]: I1011 10:30:11.970170 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:12.496885 master-1 kubenswrapper[4771]: I1011 10:30:12.496810 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:30:12.496885 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:12.496885 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:12.496885 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:12.498195 master-1 kubenswrapper[4771]: I1011 10:30:12.496903 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:12.864265 master-1 kubenswrapper[4771]: I1011 10:30:12.864167 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:30:12.867677 master-1 kubenswrapper[4771]: I1011 10:30:12.867588 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e"} Oct 11 10:30:12.870995 master-1 kubenswrapper[4771]: I1011 10:30:12.870920 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" event={"ID":"5ce7321b-beff-4c96-9998-a3177ac79f36","Type":"ContainerStarted","Data":"e3778718c55abd380f89d429871aa3167dd83cf5f32ed7e3ae6c0059601b60c2"} Oct 11 10:30:12.871147 master-1 kubenswrapper[4771]: I1011 10:30:12.870995 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" event={"ID":"5ce7321b-beff-4c96-9998-a3177ac79f36","Type":"ContainerStarted","Data":"76a63630e5dd4a315944c4777a18d2b03bde842d5b787f9b071acb9666f6fe9e"} Oct 11 10:30:12.873320 master-1 kubenswrapper[4771]: I1011 10:30:12.873256 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" 
event={"ID":"24ee422a-a8f9-436d-b2be-ee2cfa387868","Type":"ContainerStarted","Data":"8f225daf7060a64c85df526b1cea554f81def1c7cf4a4700bb4d653dc8571f96"} Oct 11 10:30:12.928234 master-1 kubenswrapper[4771]: I1011 10:30:12.928098 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf" podStartSLOduration=3.141481491 podStartE2EDuration="17.92806799s" podCreationTimestamp="2025-10-11 10:29:55 +0000 UTC" firstStartedPulling="2025-10-11 10:29:57.027127473 +0000 UTC m=+229.001353914" lastFinishedPulling="2025-10-11 10:30:11.813713982 +0000 UTC m=+243.787940413" observedRunningTime="2025-10-11 10:30:12.925012076 +0000 UTC m=+244.899238587" watchObservedRunningTime="2025-10-11 10:30:12.92806799 +0000 UTC m=+244.902294461" Oct 11 10:30:12.958117 master-2 kubenswrapper[4776]: I1011 10:30:12.958024 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:12.958778 master-2 kubenswrapper[4776]: I1011 10:30:12.958313 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" containerID="cri-o://70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" gracePeriod=30 Oct 11 10:30:12.958778 master-2 kubenswrapper[4776]: I1011 10:30:12.958422 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" containerID="cri-o://7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" gracePeriod=30 Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: I1011 10:30:12.970762 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: I1011 10:30:12.970851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:13.101048 master-2 kubenswrapper[4776]: I1011 10:30:13.100992 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/0.log" Oct 11 10:30:13.273061 master-1 kubenswrapper[4771]: I1011 10:30:13.272862 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 11 10:30:13.301369 master-2 kubenswrapper[4776]: I1011 10:30:13.301268 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/1.log" Oct 11 10:30:13.490773 master-2 kubenswrapper[4776]: I1011 10:30:13.490699 4776 generic.go:334] "Generic (PLEG): container finished" podID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerID="7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" exitCode=0 Oct 11 10:30:13.490773 master-2 kubenswrapper[4776]: I1011 10:30:13.490756 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" 
event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829"} Oct 11 10:30:13.497676 master-1 kubenswrapper[4771]: I1011 10:30:13.497601 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:13.497676 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:13.497676 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:13.497676 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:13.498828 master-1 kubenswrapper[4771]: I1011 10:30:13.497692 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:13.502236 master-2 kubenswrapper[4776]: I1011 10:30:13.502187 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/0.log" Oct 11 10:30:13.699581 master-1 kubenswrapper[4771]: I1011 10:30:13.699499 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-z5t6x_04cd4a19-2532-43d1-9144-1f59d9e52d19/router/0.log" Oct 11 10:30:13.895979 master-2 kubenswrapper[4776]: I1011 10:30:13.895894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/fix-audit-permissions/0.log" Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: I1011 10:30:13.970075 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:13.970611 master-2 kubenswrapper[4776]: I1011 10:30:13.970201 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:14.105388 master-2 kubenswrapper[4776]: I1011 10:30:14.105287 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/oauth-apiserver/0.log" Oct 11 10:30:14.295141 master-1 kubenswrapper[4771]: I1011 10:30:14.295071 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-skwvw_004ee387-d0e9-4582-ad14-f571832ebd6e/fix-audit-permissions/0.log" Oct 11 10:30:14.435321 master-2 kubenswrapper[4776]: E1011 10:30:14.435225 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.438207 master-2 kubenswrapper[4776]: E1011 10:30:14.436734 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.439291 
master-2 kubenswrapper[4776]: E1011 10:30:14.439234 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.439385 master-2 kubenswrapper[4776]: E1011 10:30:14.439306 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:14.451817 master-1 kubenswrapper[4771]: E1011 10:30:14.451708 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.454005 master-1 kubenswrapper[4771]: E1011 10:30:14.453901 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.456599 master-1 kubenswrapper[4771]: E1011 10:30:14.456501 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 
10:30:14.456748 master-1 kubenswrapper[4771]: E1011 10:30:14.456606 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:14.496915 master-1 kubenswrapper[4771]: I1011 10:30:14.496687 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:14.496915 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:14.496915 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:14.496915 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:14.496915 master-1 kubenswrapper[4771]: I1011 10:30:14.496765 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:14.502601 master-1 kubenswrapper[4771]: I1011 10:30:14.502396 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-skwvw_004ee387-d0e9-4582-ad14-f571832ebd6e/oauth-apiserver/0.log" Oct 11 10:30:14.631852 master-1 kubenswrapper[4771]: I1011 10:30:14.631643 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:30:14.631852 master-1 kubenswrapper[4771]: I1011 10:30:14.631769 4771 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:30:14.703874 master-2 kubenswrapper[4776]: I1011 10:30:14.703629 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:30:14.889388 master-1 kubenswrapper[4771]: I1011 10:30:14.889241 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" event={"ID":"24ee422a-a8f9-436d-b2be-ee2cfa387868","Type":"ContainerStarted","Data":"045908b7671941d85cb856e82c18281ded8c47a82fa48745d298d16204847f5f"} Oct 11 10:30:14.889388 master-1 kubenswrapper[4771]: I1011 10:30:14.889342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" event={"ID":"24ee422a-a8f9-436d-b2be-ee2cfa387868","Type":"ContainerStarted","Data":"610e3fd9a365564eea8cab0976a54e60fd08654c507389a3dcc5428ff493223c"} Oct 11 10:30:14.904600 master-2 kubenswrapper[4776]: I1011 10:30:14.904501 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/1.log" Oct 11 10:30:14.923916 master-1 kubenswrapper[4771]: I1011 10:30:14.923742 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59" podStartSLOduration=3.990968253 podStartE2EDuration="20.923705505s" podCreationTimestamp="2025-10-11 10:29:54 +0000 UTC" firstStartedPulling="2025-10-11 10:29:57.03465222 +0000 UTC m=+229.008878671" lastFinishedPulling="2025-10-11 10:30:13.967389472 +0000 UTC m=+245.941615923" 
observedRunningTime="2025-10-11 10:30:14.918211853 +0000 UTC m=+246.892438384" watchObservedRunningTime="2025-10-11 10:30:14.923705505 +0000 UTC m=+246.897931986" Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: I1011 10:30:14.978438 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:14.979012 master-2 kubenswrapper[4776]: I1011 10:30:14.978653 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:15.093817 master-1 kubenswrapper[4771]: I1011 10:30:15.093716 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-1_3fc4970d-4f34-4fc6-9791-6218f8e42eb9/guard/0.log" Oct 11 10:30:15.498045 master-1 kubenswrapper[4771]: I1011 10:30:15.497940 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:15.498045 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:15.498045 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:15.498045 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:15.498045 master-1 kubenswrapper[4771]: I1011 10:30:15.498037 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: I1011 10:30:15.970280 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: I1011 10:30:15.970383 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:16.095680 master-1 kubenswrapper[4771]: I1011 10:30:16.095623 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:30:16.497255 master-1 kubenswrapper[4771]: I1011 10:30:16.497108 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:16.497255 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:16.497255 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:16.497255 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:16.497255 master-1 kubenswrapper[4771]: I1011 10:30:16.497222 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:16.896887 master-1 kubenswrapper[4771]: I1011 10:30:16.896805 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/setup/0.log" Oct 11 10:30:16.903474 master-1 kubenswrapper[4771]: I1011 10:30:16.903414 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_776c3745-2f4c-4a78-b1cd-77a7a1532df3/installer/0.log" Oct 11 10:30:16.903565 master-1 kubenswrapper[4771]: I1011 10:30:16.903521 4771 generic.go:334] "Generic (PLEG): container finished" podID="776c3745-2f4c-4a78-b1cd-77a7a1532df3" containerID="2a2c47f6b163a67c15dfe1ca6c1ec25571de95f1ae3f653d4b9ded6b99ad45a9" exitCode=1 Oct 11 10:30:16.903614 master-1 kubenswrapper[4771]: I1011 10:30:16.903590 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"776c3745-2f4c-4a78-b1cd-77a7a1532df3","Type":"ContainerDied","Data":"2a2c47f6b163a67c15dfe1ca6c1ec25571de95f1ae3f653d4b9ded6b99ad45a9"} Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: I1011 10:30:16.981412 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: I1011 10:30:16.981512 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:17.095955 master-1 kubenswrapper[4771]: I1011 10:30:17.095906 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-ensure-env-vars/0.log" Oct 11 10:30:17.296394 master-1 kubenswrapper[4771]: I1011 10:30:17.296280 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-resources-copy/0.log" Oct 11 10:30:17.494973 master-1 kubenswrapper[4771]: I1011 10:30:17.494875 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log" Oct 11 10:30:17.496736 master-1 kubenswrapper[4771]: I1011 10:30:17.496680 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:17.496736 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:17.496736 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:17.496736 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:17.496881 master-1 kubenswrapper[4771]: I1011 10:30:17.496764 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:17.703562 master-1 kubenswrapper[4771]: I1011 10:30:17.703474 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log" Oct 11 10:30:17.797971 master-1 
kubenswrapper[4771]: I1011 10:30:17.797619 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:30:17.807373 master-2 kubenswrapper[4776]: I1011 10:30:17.807266 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:30:17.901732 master-1 kubenswrapper[4771]: I1011 10:30:17.901675 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: I1011 10:30:17.970993 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:17.971505 master-2 kubenswrapper[4776]: I1011 10:30:17.971085 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:18.102269 master-1 kubenswrapper[4771]: I1011 10:30:18.102218 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-readyz/0.log" Oct 11 10:30:18.232211 master-1 kubenswrapper[4771]: I1011 10:30:18.232130 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_776c3745-2f4c-4a78-b1cd-77a7a1532df3/installer/0.log" Oct 11 10:30:18.232211 master-1 kubenswrapper[4771]: I1011 10:30:18.232205 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:30:18.272114 master-1 kubenswrapper[4771]: I1011 10:30:18.271994 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 11 10:30:18.295392 master-1 kubenswrapper[4771]: I1011 10:30:18.295320 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 11 10:30:18.340390 master-1 kubenswrapper[4771]: I1011 10:30:18.340318 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access\") pod \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " Oct 11 10:30:18.340570 master-1 kubenswrapper[4771]: I1011 10:30:18.340432 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock\") pod \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " Oct 11 10:30:18.340570 master-1 kubenswrapper[4771]: I1011 10:30:18.340498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock" (OuterVolumeSpecName: "var-lock") pod "776c3745-2f4c-4a78-b1cd-77a7a1532df3" (UID: "776c3745-2f4c-4a78-b1cd-77a7a1532df3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:30:18.340934 master-1 kubenswrapper[4771]: I1011 10:30:18.340866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir\") pod \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\" (UID: \"776c3745-2f4c-4a78-b1cd-77a7a1532df3\") " Oct 11 10:30:18.341165 master-1 kubenswrapper[4771]: I1011 10:30:18.341065 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "776c3745-2f4c-4a78-b1cd-77a7a1532df3" (UID: "776c3745-2f4c-4a78-b1cd-77a7a1532df3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:30:18.341799 master-1 kubenswrapper[4771]: I1011 10:30:18.341749 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:18.341855 master-1 kubenswrapper[4771]: I1011 10:30:18.341810 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:18.345117 master-1 kubenswrapper[4771]: I1011 10:30:18.345064 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "776c3745-2f4c-4a78-b1cd-77a7a1532df3" (UID: "776c3745-2f4c-4a78-b1cd-77a7a1532df3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:18.405822 master-2 kubenswrapper[4776]: I1011 10:30:18.405756 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7tbzg_2ee67bf2-b525-4e43-8f3c-be748c32c8d2/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.406292 master-2 kubenswrapper[4776]: I1011 10:30:18.405842 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:30:18.440072 master-2 kubenswrapper[4776]: I1011 10:30:18.439999 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440148 master-2 kubenswrapper[4776]: I1011 10:30:18.440108 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440148 master-2 kubenswrapper[4776]: I1011 10:30:18.440147 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440206 master-2 kubenswrapper[4776]: I1011 10:30:18.440152 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: 
"2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:30:18.440206 master-2 kubenswrapper[4776]: I1011 10:30:18.440189 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440446 master-2 kubenswrapper[4776]: I1011 10:30:18.440422 4776 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.440821 master-2 kubenswrapper[4776]: I1011 10:30:18.440793 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready" (OuterVolumeSpecName: "ready") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:30:18.441435 master-2 kubenswrapper[4776]: I1011 10:30:18.441344 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:30:18.442715 master-1 kubenswrapper[4771]: I1011 10:30:18.442633 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776c3745-2f4c-4a78-b1cd-77a7a1532df3-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:18.444026 master-2 kubenswrapper[4776]: I1011 10:30:18.443952 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm" (OuterVolumeSpecName: "kube-api-access-x9qsm") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "kube-api-access-x9qsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:18.497492 master-1 kubenswrapper[4771]: I1011 10:30:18.497399 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:18.497492 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:18.497492 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:18.497492 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:18.497492 master-1 kubenswrapper[4771]: I1011 10:30:18.497475 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:18.503609 master-1 kubenswrapper[4771]: I1011 10:30:18.503549 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-1_826e1279-bc0d-426e-b6e0-5108268f340e/installer/0.log" Oct 11 10:30:18.525121 master-2 kubenswrapper[4776]: I1011 
10:30:18.525054 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7tbzg_2ee67bf2-b525-4e43-8f3c-be748c32c8d2/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525162 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" exitCode=137 Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525219 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerDied","Data":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525270 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:30:18.525405 master-2 kubenswrapper[4776]: I1011 10:30:18.525308 4776 scope.go:117] "RemoveContainer" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.525450 master-2 kubenswrapper[4776]: I1011 10:30:18.525285 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerDied","Data":"18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a"} Oct 11 10:30:18.540973 master-2 kubenswrapper[4776]: I1011 10:30:18.540931 4776 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.541053 master-2 kubenswrapper[4776]: I1011 10:30:18.540974 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9qsm\" 
(UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.541053 master-2 kubenswrapper[4776]: I1011 10:30:18.540994 4776 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.544372 master-2 kubenswrapper[4776]: I1011 10:30:18.544328 4776 scope.go:117] "RemoveContainer" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.544810 master-2 kubenswrapper[4776]: E1011 10:30:18.544777 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": container with ID starting with cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0 not found: ID does not exist" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.544865 master-2 kubenswrapper[4776]: I1011 10:30:18.544814 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} err="failed to get container status \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": rpc error: code = NotFound desc = could not find container \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": container with ID starting with cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0 not found: ID does not exist" Oct 11 10:30:18.560480 master-2 kubenswrapper[4776]: I1011 10:30:18.560226 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:30:18.563647 master-2 kubenswrapper[4776]: I1011 10:30:18.563581 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:30:18.675067 master-1 kubenswrapper[4771]: I1011 10:30:18.674996 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:30:18.702264 master-2 kubenswrapper[4776]: I1011 10:30:18.702186 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:18.738333 master-1 kubenswrapper[4771]: E1011 10:30:18.738228 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod806cd59c_056a_4fb4_a3b4_cb716c01cdea.slice/crio-5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod776c3745_2f4c_4a78_b1cd_77a7a1532df3.slice/crio-conmon-2a2c47f6b163a67c15dfe1ca6c1ec25571de95f1ae3f653d4b9ded6b99ad45a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod776c3745_2f4c_4a78_b1cd_77a7a1532df3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod776c3745_2f4c_4a78_b1cd_77a7a1532df3.slice/crio-2a2c47f6b163a67c15dfe1ca6c1ec25571de95f1ae3f653d4b9ded6b99ad45a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod776c3745_2f4c_4a78_b1cd_77a7a1532df3.slice/crio-ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:18.738656 master-1 kubenswrapper[4771]: E1011 10:30:18.738479 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod806cd59c_056a_4fb4_a3b4_cb716c01cdea.slice/crio-conmon-5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:18.777150 master-1 kubenswrapper[4771]: I1011 10:30:18.776993 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9d7j4_806cd59c-056a-4fb4-a3b4-cb716c01cdea/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.777150 master-1 kubenswrapper[4771]: I1011 10:30:18.777066 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:30:18.899189 master-2 kubenswrapper[4776]: I1011 10:30:18.899076 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/kube-rbac-proxy/0.log" Oct 11 10:30:18.917098 master-1 kubenswrapper[4771]: I1011 10:30:18.916914 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9d7j4_806cd59c-056a-4fb4-a3b4-cb716c01cdea/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.917098 master-1 kubenswrapper[4771]: I1011 10:30:18.917013 4771 generic.go:334] "Generic (PLEG): container finished" podID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" exitCode=137 Oct 11 10:30:18.917563 master-1 kubenswrapper[4771]: I1011 10:30:18.917132 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" Oct 11 10:30:18.917563 master-1 kubenswrapper[4771]: I1011 10:30:18.917156 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" event={"ID":"806cd59c-056a-4fb4-a3b4-cb716c01cdea","Type":"ContainerDied","Data":"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32"} Oct 11 10:30:18.917563 master-1 kubenswrapper[4771]: I1011 10:30:18.917243 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d7j4" event={"ID":"806cd59c-056a-4fb4-a3b4-cb716c01cdea","Type":"ContainerDied","Data":"7147f14021bd8181c058d8c3ce2203cdae664d32eab5196f21ee167281d79073"} Oct 11 10:30:18.917563 master-1 kubenswrapper[4771]: I1011 10:30:18.917278 4771 scope.go:117] "RemoveContainer" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" Oct 11 10:30:18.920226 master-1 kubenswrapper[4771]: I1011 10:30:18.920047 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_776c3745-2f4c-4a78-b1cd-77a7a1532df3/installer/0.log" Oct 11 10:30:18.920226 master-1 kubenswrapper[4771]: I1011 10:30:18.920117 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"776c3745-2f4c-4a78-b1cd-77a7a1532df3","Type":"ContainerDied","Data":"ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7"} Oct 11 10:30:18.920226 master-1 kubenswrapper[4771]: I1011 10:30:18.920148 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc3bd0dec107a1860fda6d334afbd26f993ae533a829286bea5468aeefd8bf7" Oct 11 10:30:18.920535 master-1 kubenswrapper[4771]: I1011 10:30:18.920261 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 11 10:30:18.944092 master-1 kubenswrapper[4771]: I1011 10:30:18.944061 4771 scope.go:117] "RemoveContainer" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" Oct 11 10:30:18.945166 master-1 kubenswrapper[4771]: E1011 10:30:18.945062 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32\": container with ID starting with 5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32 not found: ID does not exist" containerID="5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32" Oct 11 10:30:18.945166 master-1 kubenswrapper[4771]: I1011 10:30:18.945150 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32"} err="failed to get container status \"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32\": rpc error: code = NotFound desc = could not find container \"5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32\": container with ID starting with 5eb00d420609b3f327b546c9ccf510a7aca38ca978bdc995a30f1d5c6a5e3d32 not found: ID does not exist" Oct 11 10:30:18.948814 master-1 kubenswrapper[4771]: I1011 10:30:18.948758 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir\") pod \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " Oct 11 10:30:18.949110 master-1 kubenswrapper[4771]: I1011 10:30:18.948820 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfgsk\" (UniqueName: 
\"kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk\") pod \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " Oct 11 10:30:18.949110 master-1 kubenswrapper[4771]: I1011 10:30:18.948879 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist\") pod \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " Oct 11 10:30:18.949110 master-1 kubenswrapper[4771]: I1011 10:30:18.948947 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready\") pod \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\" (UID: \"806cd59c-056a-4fb4-a3b4-cb716c01cdea\") " Oct 11 10:30:18.949930 master-1 kubenswrapper[4771]: I1011 10:30:18.949562 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "806cd59c-056a-4fb4-a3b4-cb716c01cdea" (UID: "806cd59c-056a-4fb4-a3b4-cb716c01cdea"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:30:18.950071 master-1 kubenswrapper[4771]: I1011 10:30:18.949834 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready" (OuterVolumeSpecName: "ready") pod "806cd59c-056a-4fb4-a3b4-cb716c01cdea" (UID: "806cd59c-056a-4fb4-a3b4-cb716c01cdea"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:30:18.950591 master-1 kubenswrapper[4771]: I1011 10:30:18.950502 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "806cd59c-056a-4fb4-a3b4-cb716c01cdea" (UID: "806cd59c-056a-4fb4-a3b4-cb716c01cdea"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:30:18.953283 master-1 kubenswrapper[4771]: I1011 10:30:18.953209 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk" (OuterVolumeSpecName: "kube-api-access-tfgsk") pod "806cd59c-056a-4fb4-a3b4-cb716c01cdea" (UID: "806cd59c-056a-4fb4-a3b4-cb716c01cdea"). InnerVolumeSpecName "kube-api-access-tfgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: I1011 10:30:18.968950 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:18.969460 master-2 kubenswrapper[4776]: I1011 10:30:18.969426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:19.050487 master-1 kubenswrapper[4771]: I1011 10:30:19.050403 4771 reconciler_common.go:293] 
"Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/806cd59c-056a-4fb4-a3b4-cb716c01cdea-tuning-conf-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:19.050487 master-1 kubenswrapper[4771]: I1011 10:30:19.050467 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfgsk\" (UniqueName: \"kubernetes.io/projected/806cd59c-056a-4fb4-a3b4-cb716c01cdea-kube-api-access-tfgsk\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:19.050487 master-1 kubenswrapper[4771]: I1011 10:30:19.050492 4771 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/806cd59c-056a-4fb4-a3b4-cb716c01cdea-cni-sysctl-allowlist\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:19.050786 master-1 kubenswrapper[4771]: I1011 10:30:19.050516 4771 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/806cd59c-056a-4fb4-a3b4-cb716c01cdea-ready\") on node \"master-1\" DevicePath \"\"" Oct 11 10:30:19.106460 master-2 kubenswrapper[4776]: I1011 10:30:19.106433 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/0.log" Oct 11 10:30:19.263553 master-1 kubenswrapper[4771]: I1011 10:30:19.263437 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d7j4"] Oct 11 10:30:19.266855 master-1 kubenswrapper[4771]: I1011 10:30:19.266795 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d7j4"] Oct 11 10:30:19.301741 master-1 kubenswrapper[4771]: I1011 10:30:19.301652 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-z5t6x_04cd4a19-2532-43d1-9144-1f59d9e52d19/router/0.log" Oct 11 10:30:19.497788 master-1 kubenswrapper[4771]: I1011 10:30:19.497586 4771 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:19.497788 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:19.497788 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:19.497788 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:19.497788 master-1 kubenswrapper[4771]: I1011 10:30:19.497730 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:19.505715 master-2 kubenswrapper[4776]: I1011 10:30:19.505609 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/0.log" Oct 11 10:30:19.704645 master-2 kubenswrapper[4776]: I1011 10:30:19.704587 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/1.log" Oct 11 10:30:19.902092 master-1 kubenswrapper[4771]: I1011 10:30:19.901956 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_776c3745-2f4c-4a78-b1cd-77a7a1532df3/installer/0.log" Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: I1011 10:30:19.969804 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: [-]has-synced failed: 
reason withheld Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:19.970232 master-2 kubenswrapper[4776]: I1011 10:30:19.969922 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:20.067199 master-2 kubenswrapper[4776]: I1011 10:30:20.067110 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" path="/var/lib/kubelet/pods/2ee67bf2-b525-4e43-8f3c-be748c32c8d2/volumes" Oct 11 10:30:20.103053 master-2 kubenswrapper[4776]: I1011 10:30:20.102996 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/0.log" Oct 11 10:30:20.305169 master-2 kubenswrapper[4776]: I1011 10:30:20.304953 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/1.log" Oct 11 10:30:20.446056 master-1 kubenswrapper[4771]: I1011 10:30:20.445957 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" path="/var/lib/kubelet/pods/806cd59c-056a-4fb4-a3b4-cb716c01cdea/volumes" Oct 11 10:30:20.497304 master-1 kubenswrapper[4771]: I1011 10:30:20.497215 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:20.497304 master-1 
kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:20.497304 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:20.497304 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:20.497682 master-1 kubenswrapper[4771]: I1011 10:30:20.497305 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:20.503051 master-1 kubenswrapper[4771]: I1011 10:30:20.502903 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-1_7662f87a-13ba-439c-b386-05e68284803c/installer/0.log" Oct 11 10:30:20.697107 master-1 kubenswrapper[4771]: I1011 10:30:20.696921 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-guard-master-1_bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe/guard/0.log" Oct 11 10:30:20.895564 master-1 kubenswrapper[4771]: I1011 10:30:20.895463 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/wait-for-host-port/0.log" Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: I1011 10:30:20.969284 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:20.970147 master-2 kubenswrapper[4776]: I1011 10:30:20.969359 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:21.103136 master-1 kubenswrapper[4771]: I1011 10:30:21.103086 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler/0.log" Oct 11 10:30:21.298105 master-1 kubenswrapper[4771]: I1011 10:30:21.297948 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-cert-syncer/0.log" Oct 11 10:30:21.497409 master-1 kubenswrapper[4771]: I1011 10:30:21.497141 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:21.497409 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:21.497409 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:21.497409 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:21.497409 master-1 kubenswrapper[4771]: I1011 10:30:21.497236 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:21.499790 master-1 kubenswrapper[4771]: I1011 10:30:21.499723 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-recovery-controller/0.log" Oct 11 10:30:21.701693 master-2 kubenswrapper[4776]: I1011 10:30:21.701607 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/0.log" Oct 11 10:30:21.904182 master-2 kubenswrapper[4776]: I1011 10:30:21.904105 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/1.log" Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: I1011 10:30:21.970228 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:21.971085 master-2 kubenswrapper[4776]: I1011 10:30:21.970376 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:22.496680 master-1 kubenswrapper[4771]: I1011 10:30:22.496584 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:22.496680 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:22.496680 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:22.496680 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:22.496680 master-1 kubenswrapper[4771]: I1011 
10:30:22.496673 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: I1011 10:30:22.970055 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:22.970542 master-2 kubenswrapper[4776]: I1011 10:30:22.970507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:23.314350 master-2 kubenswrapper[4776]: I1011 10:30:23.314223 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/0.log" Oct 11 10:30:23.498085 master-2 kubenswrapper[4776]: I1011 10:30:23.498033 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-catalogd-manifests/0.log" Oct 11 10:30:23.500179 master-1 kubenswrapper[4771]: I1011 10:30:23.500071 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:23.500179 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:23.500179 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:23.500179 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:23.501800 master-1 kubenswrapper[4771]: I1011 10:30:23.500195 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:23.698328 master-2 kubenswrapper[4776]: I1011 10:30:23.698276 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-operator-controller-manifests/0.log" Oct 11 10:30:23.901908 master-2 kubenswrapper[4776]: I1011 10:30:23.901836 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/1.log" Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: I1011 10:30:23.970389 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: I1011 10:30:23.970494 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:24.106124 master-2 kubenswrapper[4776]: I1011 10:30:24.106059 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/0.log" Oct 11 10:30:24.301641 master-2 kubenswrapper[4776]: I1011 10:30:24.301520 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/1.log" Oct 11 10:30:24.497165 master-1 kubenswrapper[4771]: I1011 10:30:24.497079 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/fix-audit-permissions/0.log" Oct 11 10:30:24.498047 master-1 kubenswrapper[4771]: I1011 10:30:24.497961 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:24.498047 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:24.498047 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:24.498047 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:24.498405 master-1 kubenswrapper[4771]: I1011 10:30:24.498059 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:24.703529 master-1 kubenswrapper[4771]: I1011 10:30:24.703426 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/openshift-apiserver/0.log" Oct 11 10:30:24.899618 master-1 kubenswrapper[4771]: I1011 10:30:24.899442 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-n5n6g_027736d1-f3d3-490e-9ee1-d08bad7a25b7/openshift-apiserver-check-endpoints/0.log" Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: I1011 10:30:24.969664 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:24.970103 master-2 kubenswrapper[4776]: I1011 10:30:24.969796 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:25.095818 master-2 kubenswrapper[4776]: I1011 10:30:25.095748 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/fix-audit-permissions/0.log" Oct 11 10:30:25.303778 master-2 kubenswrapper[4776]: I1011 10:30:25.303606 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver/0.log" Oct 11 10:30:25.487852 master-2 kubenswrapper[4776]: E1011 10:30:25.487751 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488102 master-2 kubenswrapper[4776]: E1011 10:30:25.487831 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-conmon-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488146 master-2 kubenswrapper[4776]: E1011 10:30:25.488110 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488289 master-2 kubenswrapper[4776]: E1011 10:30:25.488189 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-conmon-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.497570 master-2 kubenswrapper[4776]: I1011 10:30:25.497487 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver-check-endpoints/0.log" Oct 11 10:30:25.497824 master-1 kubenswrapper[4771]: I1011 10:30:25.497733 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:25.497824 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:25.497824 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:25.497824 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:25.497824 master-1 kubenswrapper[4771]: I1011 10:30:25.497815 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587829 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log" Oct 11 10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587921 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerID="38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3" exitCode=137 Oct 11 
10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3"}
Oct 11 10:30:25.702989 master-2 kubenswrapper[4776]: I1011 10:30:25.702914 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log"
Oct 11 10:30:25.902766 master-2 kubenswrapper[4776]: I1011 10:30:25.902599 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/1.log"
Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: I1011 10:30:25.969698 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:25.970121 master-2 kubenswrapper[4776]: I1011 10:30:25.969765 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:26.101716 master-2 kubenswrapper[4776]: I1011 10:30:26.100856 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log"
Oct 11 10:30:26.266818 master-2 kubenswrapper[4776]: I1011 10:30:26.266794 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log"
Oct 11 10:30:26.267029 master-2 kubenswrapper[4776]: I1011 10:30:26.267016 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:30:26.303803 master-2 kubenswrapper[4776]: I1011 10:30:26.302490 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/1.log"
Oct 11 10:30:26.339748 master-2 kubenswrapper[4776]: I1011 10:30:26.339664 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"cbf33a7e-abea-411d-9a19-85cfe67debe3\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") "
Oct 11 10:30:26.339952 master-2 kubenswrapper[4776]: I1011 10:30:26.339830 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"cbf33a7e-abea-411d-9a19-85cfe67debe3\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") "
Oct 11 10:30:26.343148 master-2 kubenswrapper[4776]: I1011 10:30:26.343099 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm" (OuterVolumeSpecName: "kube-api-access-9b7hm") pod "cbf33a7e-abea-411d-9a19-85cfe67debe3" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3"). InnerVolumeSpecName "kube-api-access-9b7hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:30:26.343396 master-2 kubenswrapper[4776]: I1011 10:30:26.343355 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "cbf33a7e-abea-411d-9a19-85cfe67debe3" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:30:26.442057 master-2 kubenswrapper[4776]: I1011 10:30:26.441913 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") on node \"master-2\" DevicePath \"\""
Oct 11 10:30:26.442057 master-2 kubenswrapper[4776]: I1011 10:30:26.442001 4776 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:30:26.496987 master-1 kubenswrapper[4771]: I1011 10:30:26.496922 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:26.496987 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:26.496987 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:26.496987 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:26.498232 master-1 kubenswrapper[4771]: I1011 10:30:26.498175 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596431 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log"
Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596508 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8"}
Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596547 4776 scope.go:117] "RemoveContainer" containerID="4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615"
Oct 11 10:30:26.597117 master-2 kubenswrapper[4776]: I1011 10:30:26.596733 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"
Oct 11 10:30:26.612922 master-2 kubenswrapper[4776]: I1011 10:30:26.612868 4776 scope.go:117] "RemoveContainer" containerID="38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3"
Oct 11 10:30:26.636240 master-2 kubenswrapper[4776]: I1011 10:30:26.636174 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"]
Oct 11 10:30:26.642205 master-2 kubenswrapper[4776]: I1011 10:30:26.642147 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"]
Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: I1011 10:30:26.969784 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:26.970143 master-2 kubenswrapper[4776]: I1011 10:30:26.969870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:27.301339 master-2 kubenswrapper[4776]: I1011 10:30:27.301201 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-f966fb6f8-8gkqg_e3281eb7-fb96-4bae-8c55-b79728d426b0/catalog-operator/0.log"
Oct 11 10:30:27.496736 master-1 kubenswrapper[4771]: I1011 10:30:27.496397 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29336310-8nc4v_f4904d67-3c44-40d9-8ea8-026d727e9486/collect-profiles/0.log"
Oct 11 10:30:27.497291 master-1 kubenswrapper[4771]: I1011 10:30:27.497234 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:27.497291 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:27.497291 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:27.497291 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:27.498195 master-1 kubenswrapper[4771]: I1011 10:30:27.497328 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:27.703894 master-2 kubenswrapper[4776]: I1011 10:30:27.703849 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-867f8475d9-8lf59_d4354488-1b32-422d-bb06-767a952192a5/olm-operator/0.log"
Oct 11 10:30:27.897038 master-2 kubenswrapper[4776]: I1011 10:30:27.896956 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-xzntp_e20ebc39-150b-472a-bb22-328d8f5db87b/kube-rbac-proxy/0.log"
Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: I1011 10:30:27.969698 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:27.970028 master-2 kubenswrapper[4776]: I1011 10:30:27.969807 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:28.074231 master-2 kubenswrapper[4776]: I1011 10:30:28.074096 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" path="/var/lib/kubelet/pods/cbf33a7e-abea-411d-9a19-85cfe67debe3/volumes"
Oct 11 10:30:28.098680 master-2 kubenswrapper[4776]: I1011 10:30:28.098616 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-xzntp_e20ebc39-150b-472a-bb22-328d8f5db87b/package-server-manager/0.log"
Oct 11 10:30:28.291598 master-1 kubenswrapper[4771]: I1011 10:30:28.291495 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1"
Oct 11 10:30:28.310102 master-1 kubenswrapper[4771]: I1011 10:30:28.310034 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-77c85f5c6-6zxmm_68bdaf37-fa14-4c86-a697-881df7c9c7f1/packageserver/0.log"
Oct 11 10:30:28.314519 master-1 kubenswrapper[4771]: I1011 10:30:28.314429 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1"
Oct 11 10:30:28.496970 master-1 kubenswrapper[4771]: I1011 10:30:28.496857 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:28.496970 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:28.496970 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:28.496970 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:28.497747 master-1 kubenswrapper[4771]: I1011 10:30:28.496998 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:28.501593 master-2 kubenswrapper[4776]: I1011 10:30:28.501519 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-77c85f5c6-cfrh6_4e35cfca-8883-465b-b952-cc91f7f5dd81/packageserver/0.log"
Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: I1011 10:30:28.968517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:28.968950 master-2 kubenswrapper[4776]: I1011 10:30:28.968631 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:29.497228 master-1 kubenswrapper[4771]: I1011 10:30:29.497123 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:29.497228 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:29.497228 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:29.497228 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:29.497228 master-1 kubenswrapper[4771]: I1011 10:30:29.497203 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: I1011 10:30:29.969058 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:29.969696 master-2 kubenswrapper[4776]: I1011 10:30:29.969113 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:30.246144 master-1 kubenswrapper[4771]: I1011 10:30:30.246083 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:30:30.268375 master-2 kubenswrapper[4776]: I1011 10:30:30.268229 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:30:30.497027 master-1 kubenswrapper[4771]: I1011 10:30:30.496819 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:30.497027 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:30.497027 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:30.497027 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:30.497662 master-1 kubenswrapper[4771]: I1011 10:30:30.497588 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: I1011 10:30:30.969452 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:30.970054 master-2 kubenswrapper[4776]: I1011 10:30:30.969514 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:31.497702 master-1 kubenswrapper[4771]: I1011 10:30:31.497606 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:31.497702 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:31.497702 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:31.497702 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:31.498427 master-1 kubenswrapper[4771]: I1011 10:30:31.497704 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: I1011 10:30:31.969652 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: I1011 10:30:31.969732 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:32.496551 master-1 kubenswrapper[4771]: I1011 10:30:32.496492 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:32.496551 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:32.496551 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:32.496551 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:32.496551 master-1 kubenswrapper[4771]: I1011 10:30:32.496552 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: I1011 10:30:32.969933 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:32.970655 master-2 kubenswrapper[4776]: I1011 10:30:32.970008 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:33.027124 master-1 kubenswrapper[4771]: I1011 10:30:33.026967 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"]
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: E1011 10:30:33.027369 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776c3745-2f4c-4a78-b1cd-77a7a1532df3" containerName="installer"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027432 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="776c3745-2f4c-4a78-b1cd-77a7a1532df3" containerName="installer"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: E1011 10:30:33.027464 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027482 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: E1011 10:30:33.027506 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4904d67-3c44-40d9-8ea8-026d727e9486" containerName="collect-profiles"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027523 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4904d67-3c44-40d9-8ea8-026d727e9486" containerName="collect-profiles"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027782 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4904d67-3c44-40d9-8ea8-026d727e9486" containerName="collect-profiles"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027808 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="806cd59c-056a-4fb4-a3b4-cb716c01cdea" containerName="kube-multus-additional-cni-plugins"
Oct 11 10:30:33.028271 master-1 kubenswrapper[4771]: I1011 10:30:33.027832 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="776c3745-2f4c-4a78-b1cd-77a7a1532df3" containerName="installer"
Oct 11 10:30:33.029028 master-1 kubenswrapper[4771]: I1011 10:30:33.028816 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.031856 master-1 kubenswrapper[4771]: I1011 10:30:33.031798 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.032069 master-1 kubenswrapper[4771]: I1011 10:30:33.031884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.032253 master-1 kubenswrapper[4771]: I1011 10:30:33.032028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.033044 master-1 kubenswrapper[4771]: I1011 10:30:33.033006 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Oct 11 10:30:33.042011 master-1 kubenswrapper[4771]: I1011 10:30:33.041928 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"]
Oct 11 10:30:33.133116 master-1 kubenswrapper[4771]: I1011 10:30:33.133002 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.133116 master-1 kubenswrapper[4771]: I1011 10:30:33.133107 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.133604 master-1 kubenswrapper[4771]: I1011 10:30:33.133137 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.133604 master-1 kubenswrapper[4771]: I1011 10:30:33.133252 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.133604 master-1 kubenswrapper[4771]: I1011 10:30:33.133341 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.155987 master-1 kubenswrapper[4771]: I1011 10:30:33.155945 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.360561 master-1 kubenswrapper[4771]: I1011 10:30:33.360447 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1"
Oct 11 10:30:33.497980 master-1 kubenswrapper[4771]: I1011 10:30:33.497865 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:33.497980 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:33.497980 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:33.497980 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:33.497980 master-1 kubenswrapper[4771]: I1011 10:30:33.497950 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:33.652268 master-1 kubenswrapper[4771]: I1011 10:30:33.652046 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"]
Oct 11 10:30:33.661344 master-1 kubenswrapper[4771]: W1011 10:30:33.661257 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod041ccbf8_b64e_4909_b9ae_35b19705838a.slice/crio-50bab3edfacf7962918848de1e7a735ed5d3ec7f5b1d816706993472067c79b5 WatchSource:0}: Error finding container 50bab3edfacf7962918848de1e7a735ed5d3ec7f5b1d816706993472067c79b5: Status 404 returned error can't find the container with id 50bab3edfacf7962918848de1e7a735ed5d3ec7f5b1d816706993472067c79b5
Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: I1011 10:30:33.969798 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: I1011 10:30:33.969896 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:34.010456 master-1 kubenswrapper[4771]: I1011 10:30:34.010326 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"041ccbf8-b64e-4909-b9ae-35b19705838a","Type":"ContainerStarted","Data":"50bab3edfacf7962918848de1e7a735ed5d3ec7f5b1d816706993472067c79b5"}
Oct 11 10:30:34.497742 master-1 kubenswrapper[4771]: I1011 10:30:34.497648 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:34.497742 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:34.497742 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:34.497742 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:34.498803 master-1 kubenswrapper[4771]: I1011 10:30:34.497761 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:34.768881 master-2 kubenswrapper[4776]: E1011 10:30:34.768783 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19"
Oct 11 10:30:34.876914 master-1 kubenswrapper[4771]: I1011 10:30:34.876834 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"]
Oct 11 10:30:34.877330 master-1 kubenswrapper[4771]: I1011 10:30:34.877274 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" containerID="cri-o://913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68" gracePeriod=120
Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: I1011 10:30:34.969332 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:34.969636 master-2 kubenswrapper[4776]: I1011 10:30:34.969416 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:35.018789 master-1 kubenswrapper[4771]: I1011 10:30:35.018723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"041ccbf8-b64e-4909-b9ae-35b19705838a","Type":"ContainerStarted","Data":"64183b8f0fe57cec48ea786bd6f2bde7521a6790010bcd3ba5698a2a91bb323f"}
Oct 11 10:30:35.040460 master-1 kubenswrapper[4771]: I1011 10:30:35.040351 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" podStartSLOduration=2.040329303 podStartE2EDuration="2.040329303s" podCreationTimestamp="2025-10-11 10:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:35.036395356 +0000 UTC m=+267.010621817" watchObservedRunningTime="2025-10-11 10:30:35.040329303 +0000 UTC m=+267.014555764"
Oct 11 10:30:35.497563 master-1 kubenswrapper[4771]: I1011 10:30:35.497475 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:35.497563 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:35.497563 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:35.497563 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:35.498266 master-1 kubenswrapper[4771]: I1011 10:30:35.497586 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:35.642814 master-2 kubenswrapper[4776]: I1011 10:30:35.642745 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: I1011 10:30:35.969035 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:35.969443 master-2 kubenswrapper[4776]: I1011 10:30:35.969145 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:36.497003 master-1 kubenswrapper[4771]: I1011 10:30:36.496885 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:36.497003 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:36.497003 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:36.497003 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:36.497003 master-1 kubenswrapper[4771]: I1011 10:30:36.496988 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: I1011 10:30:36.969582 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:36.970640 master-2 kubenswrapper[4776]: I1011 10:30:36.969676 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: I1011 10:30:37.348545 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11
10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:30:37.348665 master-1 kubenswrapper[4771]: I1011 10:30:37.348643 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:37.497240 master-1 kubenswrapper[4771]: I1011 10:30:37.497146 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:37.497240 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:37.497240 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:37.497240 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:37.497652 master-1 kubenswrapper[4771]: I1011 10:30:37.497260 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:37.790723 master-2 kubenswrapper[4776]: E1011 10:30:37.790599 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: I1011 10:30:37.969450 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: I1011 10:30:37.969498 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:38.208735 master-2 kubenswrapper[4776]: I1011 10:30:38.208630 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208868 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208884 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208909 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208917 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208927 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208935 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209027 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209056 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209068 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209616 master-2 kubenswrapper[4776]: I1011 10:30:38.209514 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.212203 master-2 kubenswrapper[4776]: I1011 10:30:38.212138 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:30:38.252479 master-2 kubenswrapper[4776]: I1011 10:30:38.221119 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:38.384801 master-2 kubenswrapper[4776]: I1011 10:30:38.384656 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.384801 master-2 kubenswrapper[4776]: I1011 10:30:38.384790 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.385250 master-2 kubenswrapper[4776]: I1011 10:30:38.384826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485316 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " 
pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485380 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485419 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485850 master-2 kubenswrapper[4776]: I1011 10:30:38.485506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.497508 master-1 kubenswrapper[4771]: I1011 10:30:38.497428 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:38.497508 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 
10:30:38.497508 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:38.497508 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:38.498591 master-1 kubenswrapper[4771]: I1011 10:30:38.497522 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:38.512973 master-2 kubenswrapper[4776]: I1011 10:30:38.512894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.566120 master-2 kubenswrapper[4776]: I1011 10:30:38.566055 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.660019 master-2 kubenswrapper[4776]: I1011 10:30:38.659966 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: I1011 10:30:38.969633 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:38.970371 master-2 kubenswrapper[4776]: I1011 10:30:38.969755 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:39.021290 master-2 kubenswrapper[4776]: I1011 10:30:39.021241 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:39.026195 master-2 kubenswrapper[4776]: W1011 10:30:39.026140 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f WatchSource:0}: Error finding container 6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f: Status 404 returned error can't find the container with id 6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f Oct 11 10:30:39.496792 master-1 kubenswrapper[4771]: I1011 10:30:39.496674 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Oct 11 10:30:39.496792 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:39.496792 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:39.496792 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:39.496792 master-1 kubenswrapper[4771]: I1011 10:30:39.496764 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:39.666575 master-2 kubenswrapper[4776]: I1011 10:30:39.666521 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerStarted","Data":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"} Oct 11 10:30:39.666575 master-2 kubenswrapper[4776]: I1011 10:30:39.666576 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerStarted","Data":"6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f"} Oct 11 10:30:39.690006 master-2 kubenswrapper[4776]: I1011 10:30:39.689905 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-2" podStartSLOduration=1.6898792089999999 podStartE2EDuration="1.689879209s" podCreationTimestamp="2025-10-11 10:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:39.686153127 +0000 UTC m=+274.470579856" watchObservedRunningTime="2025-10-11 10:30:39.689879209 +0000 UTC m=+274.474305948" Oct 11 10:30:39.702612 master-2 kubenswrapper[4776]: I1011 10:30:39.702558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:30:39.702783 master-2 kubenswrapper[4776]: E1011 10:30:39.702730 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:30:39.702902 master-2 kubenswrapper[4776]: E1011 10:30:39.702868 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:32:41.70283644 +0000 UTC m=+396.487263189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: I1011 10:30:39.969430 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:39.969805 master-2 kubenswrapper[4776]: I1011 10:30:39.969533 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Oct 11 10:30:40.497057 master-1 kubenswrapper[4771]: I1011 10:30:40.496951 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:40.497057 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:40.497057 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:40.497057 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:40.497057 master-1 kubenswrapper[4771]: I1011 10:30:40.497017 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: I1011 10:30:40.969204 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:40.969935 master-2 kubenswrapper[4776]: I1011 10:30:40.969298 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:41.497636 master-1 kubenswrapper[4771]: I1011 10:30:41.497551 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:41.497636 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:41.497636 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:41.497636 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:41.498525 master-1 kubenswrapper[4771]: I1011 10:30:41.497637 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: I1011 10:30:41.969376 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:41.970113 master-2 kubenswrapper[4776]: I1011 10:30:41.969442 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: I1011 10:30:42.349437 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: 
[+]etcd excluded: ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:30:42.349561 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:30:42.350346 master-1 kubenswrapper[4771]: I1011 10:30:42.349566 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:42.497518 master-1 kubenswrapper[4771]: I1011 10:30:42.497427 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:42.497518 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:42.497518 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:42.497518 master-1 kubenswrapper[4771]: healthz check failed Oct 11 
10:30:42.498392 master-1 kubenswrapper[4771]: I1011 10:30:42.497542 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:42.646907 master-2 kubenswrapper[4776]: I1011 10:30:42.646768 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:30:42.646907 master-2 kubenswrapper[4776]: E1011 10:30:42.646921 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:30:42.647311 master-2 kubenswrapper[4776]: E1011 10:30:42.646978 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:32:44.646962069 +0000 UTC m=+399.431388778 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: I1011 10:30:42.969787 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:42.970433 master-2 kubenswrapper[4776]: I1011 10:30:42.969851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:43.497582 master-1 kubenswrapper[4771]: I1011 10:30:43.497505 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:43.497582 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:43.497582 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:43.497582 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:43.497582 master-1 kubenswrapper[4771]: I1011 10:30:43.497584 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690622 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690662 4776 generic.go:334] "Generic (PLEG): container finished" podID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerID="70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" exitCode=137 Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690719 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd"} Oct 11 10:30:43.802427 master-2 kubenswrapper[4776]: I1011 10:30:43.802368 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:43.802543 master-2 kubenswrapper[4776]: I1011 10:30:43.802468 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:30:43.863066 master-2 kubenswrapper[4776]: I1011 10:30:43.863000 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " Oct 11 10:30:43.863335 master-2 kubenswrapper[4776]: I1011 10:30:43.863093 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " Oct 11 10:30:43.865934 master-2 kubenswrapper[4776]: I1011 10:30:43.865895 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "64310b0b-bae1-4ad3-b106-6d59d47d29b2" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:30:43.866752 master-2 kubenswrapper[4776]: I1011 10:30:43.866710 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg" (OuterVolumeSpecName: "kube-api-access-plpwg") pod "64310b0b-bae1-4ad3-b106-6d59d47d29b2" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2"). InnerVolumeSpecName "kube-api-access-plpwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:43.964884 master-2 kubenswrapper[4776]: I1011 10:30:43.964813 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:43.964884 master-2 kubenswrapper[4776]: I1011 10:30:43.964867 4776 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: I1011 10:30:43.969898 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: I1011 10:30:43.969953 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:44.497139 master-1 kubenswrapper[4771]: I1011 10:30:44.497016 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:44.497139 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:44.497139 master-1 kubenswrapper[4771]: 
[+]process-running ok Oct 11 10:30:44.497139 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:44.497139 master-1 kubenswrapper[4771]: I1011 10:30:44.497110 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:44.700958 master-2 kubenswrapper[4776]: I1011 10:30:44.700862 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.700977 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"e72cc89f7bb8839ad3fcaec89df9b0ae1c41473603f0bffc6a5201981557d826"} Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.701038 4776 scope.go:117] "RemoveContainer" containerID="7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.701113 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:30:44.719612 master-2 kubenswrapper[4776]: I1011 10:30:44.719568 4776 scope.go:117] "RemoveContainer" containerID="70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" Oct 11 10:30:44.729132 master-2 kubenswrapper[4776]: I1011 10:30:44.729034 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:44.733940 master-2 kubenswrapper[4776]: I1011 10:30:44.733895 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: I1011 10:30:44.969879 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: I1011 10:30:44.970022 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:45.498531 master-1 kubenswrapper[4771]: I1011 10:30:45.498411 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:45.498531 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 
10:30:45.498531 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:45.498531 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:45.498531 master-1 kubenswrapper[4771]: I1011 10:30:45.498511 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: I1011 10:30:45.970165 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:45.971265 master-2 kubenswrapper[4776]: I1011 10:30:45.970279 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:46.064164 master-2 kubenswrapper[4776]: I1011 10:30:46.064091 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" path="/var/lib/kubelet/pods/64310b0b-bae1-4ad3-b106-6d59d47d29b2/volumes" Oct 11 10:30:46.498211 master-1 kubenswrapper[4771]: I1011 10:30:46.498097 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:46.498211 master-1 kubenswrapper[4771]: 
[-]has-synced failed: reason withheld Oct 11 10:30:46.498211 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:46.498211 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:46.498211 master-1 kubenswrapper[4771]: I1011 10:30:46.498206 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: I1011 10:30:46.969834 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: I1011 10:30:46.969897 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:47.214481 master-1 kubenswrapper[4771]: I1011 10:30:47.214327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: I1011 10:30:47.348164 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]log ok Oct 11 
10:30:47.348296 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:30:47.348296 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:30:47.349005 master-1 kubenswrapper[4771]: I1011 10:30:47.348331 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:47.349005 master-1 kubenswrapper[4771]: I1011 10:30:47.348576 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" Oct 11 10:30:47.489119 master-1 kubenswrapper[4771]: I1011 10:30:47.488919 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ts25n"] Oct 11 10:30:47.489912 master-1 kubenswrapper[4771]: I1011 10:30:47.489854 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.493884 master-1 kubenswrapper[4771]: I1011 10:30:47.493805 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 11 10:30:47.493884 master-1 kubenswrapper[4771]: I1011 10:30:47.493854 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:30:47.494270 master-1 kubenswrapper[4771]: I1011 10:30:47.494222 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:30:47.494581 master-2 kubenswrapper[4776]: I1011 10:30:47.494505 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: E1011 10:30:47.494749 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: I1011 10:30:47.494765 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: E1011 10:30:47.494785 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: I1011 10:30:47.494793 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.495017 master-2 kubenswrapper[4776]: I1011 10:30:47.494907 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.495017 master-2 kubenswrapper[4776]: I1011 
10:30:47.494921 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.495374 master-2 kubenswrapper[4776]: I1011 10:30:47.495337 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.496619 master-1 kubenswrapper[4771]: I1011 10:30:47.496543 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:47.496619 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:47.496619 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:47.496619 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:47.497034 master-1 kubenswrapper[4771]: I1011 10:30:47.496629 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:47.498115 master-2 kubenswrapper[4776]: I1011 10:30:47.498063 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 11 10:30:47.498578 master-2 kubenswrapper[4776]: I1011 10:30:47.498531 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:30:47.498806 master-2 kubenswrapper[4776]: I1011 10:30:47.498760 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:30:47.501435 master-1 kubenswrapper[4771]: I1011 10:30:47.501331 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-ts25n"] Oct 11 10:30:47.505556 master-2 kubenswrapper[4776]: I1011 10:30:47.505495 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:47.512279 master-2 kubenswrapper[4776]: I1011 10:30:47.512206 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.512417 master-2 kubenswrapper[4776]: I1011 10:30:47.512383 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59pw5\" (UniqueName: \"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.522047 master-1 kubenswrapper[4771]: I1011 10:30:47.521026 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.522047 master-1 kubenswrapper[4771]: I1011 10:30:47.521091 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsflr\" (UniqueName: \"kubernetes.io/projected/11d1de2f-e159-4967-935f-e7227794e6b4-kube-api-access-rsflr\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.613971 master-2 kubenswrapper[4776]: I1011 10:30:47.613854 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59pw5\" (UniqueName: \"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.614219 master-2 kubenswrapper[4776]: I1011 10:30:47.614055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.614361 master-2 kubenswrapper[4776]: E1011 10:30:47.614318 4776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Oct 11 10:30:47.614459 master-2 kubenswrapper[4776]: E1011 10:30:47.614420 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert podName:b5880f74-fbfb-498e-9b47-d8d909d240e0 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:48.114389056 +0000 UTC m=+282.898815805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert") pod "ingress-canary-rr7vn" (UID: "b5880f74-fbfb-498e-9b47-d8d909d240e0") : secret "canary-serving-cert" not found Oct 11 10:30:47.622799 master-1 kubenswrapper[4771]: I1011 10:30:47.622694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.622799 master-1 kubenswrapper[4771]: I1011 10:30:47.622778 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsflr\" (UniqueName: \"kubernetes.io/projected/11d1de2f-e159-4967-935f-e7227794e6b4-kube-api-access-rsflr\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.623056 master-1 kubenswrapper[4771]: E1011 10:30:47.622991 4771 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Oct 11 10:30:47.623177 master-1 kubenswrapper[4771]: E1011 10:30:47.623128 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert podName:11d1de2f-e159-4967-935f-e7227794e6b4 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:48.123097497 +0000 UTC m=+280.097323968 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert") pod "ingress-canary-ts25n" (UID: "11d1de2f-e159-4967-935f-e7227794e6b4") : secret "canary-serving-cert" not found Oct 11 10:30:47.643710 master-1 kubenswrapper[4771]: I1011 10:30:47.643625 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsflr\" (UniqueName: \"kubernetes.io/projected/11d1de2f-e159-4967-935f-e7227794e6b4-kube-api-access-rsflr\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:47.644759 master-2 kubenswrapper[4776]: I1011 10:30:47.644659 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59pw5\" (UniqueName: \"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.724884 master-2 kubenswrapper[4776]: I1011 10:30:47.724796 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:47.724884 master-2 kubenswrapper[4776]: I1011 10:30:47.724876 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314" exitCode=1 Oct 11 10:30:47.725170 master-2 kubenswrapper[4776]: I1011 10:30:47.724917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314"} Oct 11 10:30:47.725521 master-2 kubenswrapper[4776]: I1011 
10:30:47.725472 4776 scope.go:117] "RemoveContainer" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314" Oct 11 10:30:47.799497 master-1 kubenswrapper[4771]: I1011 10:30:47.799323 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:30:47.810116 master-2 kubenswrapper[4776]: I1011 10:30:47.810056 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: I1011 10:30:47.969427 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:47.969832 master-2 kubenswrapper[4776]: I1011 10:30:47.969498 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:48.120836 master-2 kubenswrapper[4776]: I1011 10:30:48.120662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.126954 master-2 kubenswrapper[4776]: I1011 
10:30:48.126866 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.129722 master-1 kubenswrapper[4771]: I1011 10:30:48.129593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:48.135166 master-1 kubenswrapper[4771]: I1011 10:30:48.135053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11d1de2f-e159-4967-935f-e7227794e6b4-cert\") pod \"ingress-canary-ts25n\" (UID: \"11d1de2f-e159-4967-935f-e7227794e6b4\") " pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:48.268611 master-1 kubenswrapper[4771]: E1011 10:30:48.268484 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podUID="d7647696-42d9-4dd9-bc3b-a4d52a42cf9a" Oct 11 10:30:48.365042 master-1 kubenswrapper[4771]: E1011 10:30:48.364923 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podUID="6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b" Oct 11 10:30:48.418154 master-2 kubenswrapper[4776]: I1011 10:30:48.418109 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.421640 master-1 kubenswrapper[4771]: I1011 10:30:48.421480 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ts25n" Oct 11 10:30:48.499750 master-1 kubenswrapper[4771]: I1011 10:30:48.499660 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:48.499750 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:48.499750 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:48.499750 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:48.500254 master-1 kubenswrapper[4771]: I1011 10:30:48.499757 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:48.731016 master-2 kubenswrapper[4776]: I1011 10:30:48.730898 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:48.731016 master-2 kubenswrapper[4776]: I1011 10:30:48.730952 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"} Oct 11 10:30:48.829981 master-2 kubenswrapper[4776]: I1011 10:30:48.829924 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:48.836132 master-2 
kubenswrapper[4776]: W1011 10:30:48.835637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5880f74_fbfb_498e_9b47_d8d909d240e0.slice/crio-e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac WatchSource:0}: Error finding container e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac: Status 404 returned error can't find the container with id e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac Oct 11 10:30:48.878193 master-1 kubenswrapper[4771]: I1011 10:30:48.877663 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ts25n"] Oct 11 10:30:48.885123 master-1 kubenswrapper[4771]: W1011 10:30:48.884974 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11d1de2f_e159_4967_935f_e7227794e6b4.slice/crio-04831bfcb220c1a21faa45b5bad9ab30e86939a4dfd17879df3441206774bf75 WatchSource:0}: Error finding container 04831bfcb220c1a21faa45b5bad9ab30e86939a4dfd17879df3441206774bf75: Status 404 returned error can't find the container with id 04831bfcb220c1a21faa45b5bad9ab30e86939a4dfd17879df3441206774bf75 Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: I1011 10:30:48.969610 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:48.969933 master-2 kubenswrapper[4776]: I1011 10:30:48.969695 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:49.096584 master-1 kubenswrapper[4771]: I1011 10:30:49.096505 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ts25n" event={"ID":"11d1de2f-e159-4967-935f-e7227794e6b4","Type":"ContainerStarted","Data":"04831bfcb220c1a21faa45b5bad9ab30e86939a4dfd17879df3441206774bf75"} Oct 11 10:30:49.096584 master-1 kubenswrapper[4771]: I1011 10:30:49.096547 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:30:49.096982 master-1 kubenswrapper[4771]: I1011 10:30:49.096632 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:30:49.497596 master-1 kubenswrapper[4771]: I1011 10:30:49.497493 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:49.497596 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:49.497596 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:49.497596 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:49.498042 master-1 kubenswrapper[4771]: I1011 10:30:49.497608 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:49.704000 master-1 kubenswrapper[4771]: I1011 10:30:49.703937 4771 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 11 10:30:49.705104 master-1 kubenswrapper[4771]: I1011 10:30:49.705075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.716121 master-1 kubenswrapper[4771]: I1011 10:30:49.716030 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 11 10:30:49.737694 master-2 kubenswrapper[4776]: I1011 10:30:49.737585 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr7vn" event={"ID":"b5880f74-fbfb-498e-9b47-d8d909d240e0","Type":"ContainerStarted","Data":"e9ab3ffb93281f8f6467ace2afc4934aa5d5c892f91a90c2d600053751198afa"} Oct 11 10:30:49.738195 master-2 kubenswrapper[4776]: I1011 10:30:49.738178 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr7vn" event={"ID":"b5880f74-fbfb-498e-9b47-d8d909d240e0","Type":"ContainerStarted","Data":"e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac"} Oct 11 10:30:49.750976 master-1 kubenswrapper[4771]: I1011 10:30:49.750763 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.750976 master-1 kubenswrapper[4771]: I1011 10:30:49.750932 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.751324 master-1 
kubenswrapper[4771]: I1011 10:30:49.751002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.763163 master-2 kubenswrapper[4776]: I1011 10:30:49.763067 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rr7vn" podStartSLOduration=2.76304178 podStartE2EDuration="2.76304178s" podCreationTimestamp="2025-10-11 10:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:49.758427535 +0000 UTC m=+284.542854264" watchObservedRunningTime="2025-10-11 10:30:49.76304178 +0000 UTC m=+284.547468489" Oct 11 10:30:49.853371 master-1 kubenswrapper[4771]: I1011 10:30:49.853283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.853580 master-1 kubenswrapper[4771]: I1011 10:30:49.853408 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.853580 master-1 kubenswrapper[4771]: I1011 10:30:49.853478 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.853715 master-1 kubenswrapper[4771]: I1011 10:30:49.853615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.853715 master-1 kubenswrapper[4771]: I1011 10:30:49.853680 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.887903 master-1 kubenswrapper[4771]: I1011 10:30:49.887870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access\") pod \"installer-5-master-1\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: I1011 10:30:49.969805 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:49.970471 master-2 kubenswrapper[4776]: I1011 
10:30:49.970426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:50.031859 master-1 kubenswrapper[4771]: I1011 10:30:50.031716 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1" Oct 11 10:30:50.484845 master-1 kubenswrapper[4771]: I1011 10:30:50.484761 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 11 10:30:50.497140 master-1 kubenswrapper[4771]: I1011 10:30:50.497067 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:50.497140 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:50.497140 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:50.497140 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:50.497470 master-1 kubenswrapper[4771]: I1011 10:30:50.497160 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:50.811600 master-2 kubenswrapper[4776]: I1011 10:30:50.811552 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:50.812329 master-2 kubenswrapper[4776]: I1011 10:30:50.812303 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-1-master-2" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" 
containerID="cri-o://6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" gracePeriod=30 Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: I1011 10:30:50.968902 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: I1011 10:30:50.968960 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:51.026584 master-1 kubenswrapper[4771]: I1011 10:30:51.026491 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:30:51.027234 master-1 kubenswrapper[4771]: I1011 10:30:51.026890 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" containerID="cri-o://5ee744232b5a66fa90e18d0677b90fd7ff50cae1f9e1afc9158b036b712f32da" gracePeriod=120 Oct 11 10:30:51.027234 master-1 kubenswrapper[4771]: I1011 10:30:51.027075 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://893d86a98f61447fa7f11deae879fe95aeccf34e5a1d5e59961a43c4a181ec43" gracePeriod=120 Oct 11 10:30:51.111044 master-1 
kubenswrapper[4771]: I1011 10:30:51.110976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"fefe846e-30ef-4097-ac69-f771a74b2b98","Type":"ContainerStarted","Data":"c1459d1f4e5756af7a1eb4e6fee99340755fd65bcc4e328af030a32aa61bf860"} Oct 11 10:30:51.497880 master-1 kubenswrapper[4771]: I1011 10:30:51.497802 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:51.497880 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:51.497880 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:51.497880 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:51.497880 master-1 kubenswrapper[4771]: I1011 10:30:51.497875 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: I1011 10:30:51.970079 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: I1011 10:30:51.970150 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:52.119093 master-1 kubenswrapper[4771]: I1011 10:30:52.118975 4771 generic.go:334] "Generic (PLEG): container finished" podID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerID="893d86a98f61447fa7f11deae879fe95aeccf34e5a1d5e59961a43c4a181ec43" exitCode=0 Oct 11 10:30:52.119093 master-1 kubenswrapper[4771]: I1011 10:30:52.119056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerDied","Data":"893d86a98f61447fa7f11deae879fe95aeccf34e5a1d5e59961a43c4a181ec43"} Oct 11 10:30:52.120751 master-1 kubenswrapper[4771]: I1011 10:30:52.120684 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"fefe846e-30ef-4097-ac69-f771a74b2b98","Type":"ContainerStarted","Data":"91d13e47f19b0473725a534d9929ad8d4221ea196c8d107ca009b7a28f766686"} Oct 11 10:30:52.123910 master-1 kubenswrapper[4771]: I1011 10:30:52.123827 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ts25n" event={"ID":"11d1de2f-e159-4967-935f-e7227794e6b4","Type":"ContainerStarted","Data":"103869ffe6a4e7707d5fbfc4f57248e30528d71a49060d3b2905fe7b63067371"} Oct 11 10:30:52.147926 master-1 kubenswrapper[4771]: I1011 10:30:52.147826 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-1" podStartSLOduration=3.147807723 podStartE2EDuration="3.147807723s" podCreationTimestamp="2025-10-11 10:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:52.143155344 +0000 UTC m=+284.117381855" watchObservedRunningTime="2025-10-11 10:30:52.147807723 +0000 UTC m=+284.122034194" Oct 11 10:30:52.167865 master-1 
kubenswrapper[4771]: I1011 10:30:52.167724 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ts25n" podStartSLOduration=2.924788827 podStartE2EDuration="5.167695663s" podCreationTimestamp="2025-10-11 10:30:47 +0000 UTC" firstStartedPulling="2025-10-11 10:30:48.887992205 +0000 UTC m=+280.862218686" lastFinishedPulling="2025-10-11 10:30:51.130899061 +0000 UTC m=+283.105125522" observedRunningTime="2025-10-11 10:30:52.165284572 +0000 UTC m=+284.139511053" watchObservedRunningTime="2025-10-11 10:30:52.167695663 +0000 UTC m=+284.141922104" Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: I1011 10:30:52.346931 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:30:52.347010 master-1 
kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:30:52.347010 master-1 kubenswrapper[4771]: I1011 10:30:52.347006 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:52.496816 master-1 kubenswrapper[4771]: I1011 10:30:52.496659 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:52.496816 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:52.496816 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:52.496816 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:52.496816 master-1 kubenswrapper[4771]: I1011 10:30:52.496727 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: I1011 10:30:52.969452 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:52.969951 master-2 kubenswrapper[4776]: I1011 10:30:52.969540 
4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:53.295657 master-1 kubenswrapper[4771]: I1011 10:30:53.295557 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:30:53.296545 master-1 kubenswrapper[4771]: E1011 10:30:53.295789 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:32:55.295755558 +0000 UTC m=+407.269982099 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:30:53.396961 master-1 kubenswrapper[4771]: I1011 10:30:53.396844 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:30:53.397615 master-1 kubenswrapper[4771]: E1011 10:30:53.397088 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:32:55.397050687 +0000 UTC m=+407.371277168 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:30:53.497547 master-1 kubenswrapper[4771]: I1011 10:30:53.497476 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:53.497547 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:53.497547 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:53.497547 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:53.498015 master-1 kubenswrapper[4771]: I1011 10:30:53.497572 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: I1011 10:30:53.969402 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:53.970191 master-2 kubenswrapper[4776]: I1011 10:30:53.969482 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:54.497781 master-1 kubenswrapper[4771]: I1011 10:30:54.497690 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:54.497781 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:30:54.497781 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:30:54.497781 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:30:54.498867 master-1 kubenswrapper[4771]: I1011 10:30:54.497790 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: I1011 10:30:54.524440 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook 
ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:30:54.524552 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:30:54.525630 master-1 kubenswrapper[4771]: I1011 10:30:54.524569 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:54.605609 master-2 kubenswrapper[4776]: I1011 10:30:54.605503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:54.606503 master-2 kubenswrapper[4776]: I1011 10:30:54.606454 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.615730 master-2 kubenswrapper[4776]: I1011 10:30:54.615636 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:54.803218 master-2 kubenswrapper[4776]: I1011 10:30:54.803162 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.803218 master-2 kubenswrapper[4776]: I1011 10:30:54.803213 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.803526 master-2 kubenswrapper[4776]: I1011 10:30:54.803242 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.903904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.903987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904015 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904083 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.926341 master-2 kubenswrapper[4776]: I1011 10:30:54.926284 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.932383 master-2 kubenswrapper[4776]: I1011 10:30:54.932278 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: I1011 10:30:54.969507 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: I1011 10:30:54.969559 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:55.198724 master-1 kubenswrapper[4771]: E1011 10:30:55.198628 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" podUID="537a2b50-0394-47bd-941a-def350316943" Oct 11 10:30:55.332417 master-2 kubenswrapper[4776]: I1011 10:30:55.332325 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:55.338601 master-2 kubenswrapper[4776]: W1011 10:30:55.338528 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89edf964_f01b_4eaf_b627_9efa53a8f6d8.slice/crio-71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc WatchSource:0}: Error finding container 71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc: Status 404 returned error can't find the container with id 
71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc
Oct 11 10:30:55.498014 master-1 kubenswrapper[4771]: I1011 10:30:55.497772 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:55.498014 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:55.498014 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:55.498014 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:55.498014 master-1 kubenswrapper[4771]: I1011 10:30:55.497950 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:55.766117 master-2 kubenswrapper[4776]: I1011 10:30:55.766071 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerStarted","Data":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"}
Oct 11 10:30:55.766117 master-2 kubenswrapper[4776]: I1011 10:30:55.766122 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerStarted","Data":"71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc"}
Oct 11 10:30:55.792499 master-2 kubenswrapper[4776]: I1011 10:30:55.792422 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-2" podStartSLOduration=1.792404978 podStartE2EDuration="1.792404978s" podCreationTimestamp="2025-10-11 10:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:55.789970592 +0000 UTC m=+290.574397311" watchObservedRunningTime="2025-10-11 10:30:55.792404978 +0000 UTC m=+290.576831697"
Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: I1011 10:30:55.969481 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: I1011 10:30:55.969544 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:56.146040 master-1 kubenswrapper[4771]: I1011 10:30:56.145923 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:30:56.418458 master-1 kubenswrapper[4771]: I1011 10:30:56.418202 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"]
Oct 11 10:30:56.418733 master-1 kubenswrapper[4771]: I1011 10:30:56.418597 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" podUID="041ccbf8-b64e-4909-b9ae-35b19705838a" containerName="installer" containerID="cri-o://64183b8f0fe57cec48ea786bd6f2bde7521a6790010bcd3ba5698a2a91bb323f" gracePeriod=30
Oct 11 10:30:56.497850 master-1 kubenswrapper[4771]: I1011 10:30:56.497771 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:56.497850 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:56.497850 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:56.497850 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:56.498793 master-1 kubenswrapper[4771]: I1011 10:30:56.497860 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: I1011 10:30:56.969876 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: I1011 10:30:56.969941 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: I1011 10:30:57.349321 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:30:57.349466 master-1 kubenswrapper[4771]: I1011 10:30:57.349425 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:57.497982 master-1 kubenswrapper[4771]: I1011 10:30:57.497865 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:57.497982 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:57.497982 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:57.497982 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:57.497982 master-1 kubenswrapper[4771]: I1011 10:30:57.497950 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:57.631251 master-1 kubenswrapper[4771]: E1011 10:30:57.631055 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" podUID="c9e9455e-0b47-4623-9b4c-ef79cf62a254"
Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: I1011 10:30:57.975513 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:57.976388 master-2 kubenswrapper[4776]: I1011 10:30:57.976141 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:58.157914 master-1 kubenswrapper[4771]: I1011 10:30:58.157725 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:30:58.489637 master-1 kubenswrapper[4771]: I1011 10:30:58.489467 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"]
Oct 11 10:30:58.490450 master-1 kubenswrapper[4771]: I1011 10:30:58.490406 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.494499 master-1 kubenswrapper[4771]: I1011 10:30:58.493928 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Oct 11 10:30:58.497418 master-1 kubenswrapper[4771]: I1011 10:30:58.497333 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:58.497418 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:58.497418 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:58.497418 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:58.497654 master-1 kubenswrapper[4771]: I1011 10:30:58.497466 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:58.502292 master-1 kubenswrapper[4771]: I1011 10:30:58.502185 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"]
Oct 11 10:30:58.562845 master-1 kubenswrapper[4771]: I1011 10:30:58.562755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.563465 master-1 kubenswrapper[4771]: I1011 10:30:58.563432 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.563704 master-1 kubenswrapper[4771]: I1011 10:30:58.563675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.665728 master-1 kubenswrapper[4771]: I1011 10:30:58.665590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.666431 master-1 kubenswrapper[4771]: I1011 10:30:58.665770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.666431 master-1 kubenswrapper[4771]: I1011 10:30:58.665820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.666431 master-1 kubenswrapper[4771]: I1011 10:30:58.665867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.666431 master-1 kubenswrapper[4771]: I1011 10:30:58.666007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.701763 master-1 kubenswrapper[4771]: I1011 10:30:58.701666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.814810 master-1 kubenswrapper[4771]: I1011 10:30:58.814719 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1"
Oct 11 10:30:58.819498 master-1 kubenswrapper[4771]: I1011 10:30:58.819438 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"]
Oct 11 10:30:58.820450 master-1 kubenswrapper[4771]: I1011 10:30:58.820409 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.829664 master-1 kubenswrapper[4771]: I1011 10:30:58.829608 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"]
Oct 11 10:30:58.868534 master-1 kubenswrapper[4771]: I1011 10:30:58.868451 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.868641 master-1 kubenswrapper[4771]: I1011 10:30:58.868582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.868792 master-1 kubenswrapper[4771]: I1011 10:30:58.868656 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: I1011 10:30:58.970138 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:58.970406 master-1 kubenswrapper[4771]: I1011 10:30:58.969986 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970406 master-1 kubenswrapper[4771]: I1011 10:30:58.970111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970406 master-1 kubenswrapper[4771]: I1011 10:30:58.970119 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970406 master-1 kubenswrapper[4771]: I1011 10:30:58.970391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970764 master-1 kubenswrapper[4771]: I1011 10:30:58.970492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:58.970864 master-2 kubenswrapper[4776]: I1011 10:30:58.970209 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:58.995239 master-1 kubenswrapper[4771]: I1011 10:30:58.995191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:59.183280 master-1 kubenswrapper[4771]: I1011 10:30:59.183070 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1"
Oct 11 10:30:59.300068 master-1 kubenswrapper[4771]: I1011 10:30:59.299596 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"]
Oct 11 10:30:59.310187 master-1 kubenswrapper[4771]: W1011 10:30:59.310088 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5e7e1ec_47a8_4283_9119_0d9d1343963e.slice/crio-59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc WatchSource:0}: Error finding container 59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc: Status 404 returned error can't find the container with id 59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc
Oct 11 10:30:59.497417 master-1 kubenswrapper[4771]: I1011 10:30:59.497287 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:59.497417 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:30:59.497417 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:30:59.497417 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:30:59.497417 master-1 kubenswrapper[4771]: I1011 10:30:59.497346 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: I1011 10:30:59.522470 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:30:59.522539 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:30:59.528446 master-1 kubenswrapper[4771]: I1011 10:30:59.522561 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:30:59.653093 master-1 kubenswrapper[4771]: I1011 10:30:59.653021 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"]
Oct 11 10:30:59.719266 master-1 kubenswrapper[4771]: W1011 10:30:59.719211 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2d8f859_38d1_4916_8262_ff865eb9982c.slice/crio-8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92 WatchSource:0}: Error finding container 8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92: Status 404 returned error can't find the container with id 8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92
Oct 11 10:30:59.809922 master-2 kubenswrapper[4776]: I1011 10:30:59.809850 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-2-master-2"]
Oct 11 10:30:59.810750 master-2 kubenswrapper[4776]: I1011 10:30:59.810136 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-2-master-2" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" containerID="cri-o://e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" gracePeriod=30
Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: I1011 10:30:59.968770 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:30:59.969163 master-2 kubenswrapper[4776]: I1011 10:30:59.968870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:00.169509 master-1 kubenswrapper[4771]: I1011 10:31:00.169440 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"f5e7e1ec-47a8-4283-9119-0d9d1343963e","Type":"ContainerStarted","Data":"d38cc7e81ae0071969a185999498646cddc10ee8b65bed60da29b4c1f46a55dc"}
Oct 11 10:31:00.169673 master-1 kubenswrapper[4771]: I1011 10:31:00.169516 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"f5e7e1ec-47a8-4283-9119-0d9d1343963e","Type":"ContainerStarted","Data":"59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc"}
Oct 11 10:31:00.171755 master-1 kubenswrapper[4771]: I1011 10:31:00.171706 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"e2d8f859-38d1-4916-8262-ff865eb9982c","Type":"ContainerStarted","Data":"0904aae89e47c25a2e93dd629d94914a7beb5e409d6b4e15ac6ddcfa1b57aa4d"}
Oct 11 10:31:00.171860 master-1 kubenswrapper[4771]: I1011 10:31:00.171752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"e2d8f859-38d1-4916-8262-ff865eb9982c","Type":"ContainerStarted","Data":"8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92"}
Oct 11 10:31:00.195283 master-1 kubenswrapper[4771]: I1011 10:31:00.195168 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-1" podStartSLOduration=2.195143677 podStartE2EDuration="2.195143677s" podCreationTimestamp="2025-10-11 10:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:00.192977903 +0000 UTC m=+292.167204434" watchObservedRunningTime="2025-10-11 10:31:00.195143677 +0000 UTC m=+292.169370158"
Oct 11 10:31:00.212689 master-1 kubenswrapper[4771]: I1011 10:31:00.212579 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-1" podStartSLOduration=2.212553505 podStartE2EDuration="2.212553505s" podCreationTimestamp="2025-10-11 10:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:00.209479243 +0000 UTC m=+292.183705714" watchObservedRunningTime="2025-10-11 10:31:00.212553505 +0000 UTC m=+292.186779976"
Oct 11 10:31:00.265947 master-2 kubenswrapper[4776]: I1011 10:31:00.265751 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-2_89edf964-f01b-4eaf-b627-9efa53a8f6d8/installer/0.log"
Oct 11 10:31:00.265947 master-2 kubenswrapper[4776]: I1011 10:31:00.265859 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-2"
Oct 11 10:31:00.286634 master-1 kubenswrapper[4771]: I1011 10:31:00.286516 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:31:00.286909 master-1 kubenswrapper[4771]: E1011 10:31:00.286767 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:31:00.286989 master-1 kubenswrapper[4771]: E1011 10:31:00.286908 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:33:02.286873293 +0000 UTC m=+414.261099764 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:31:00.370999 master-2 kubenswrapper[4776]: I1011 10:31:00.370954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") "
Oct 11 10:31:00.371197 master-2 kubenswrapper[4776]: I1011 10:31:00.371073 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock" (OuterVolumeSpecName: "var-lock") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:00.371197 master-2 kubenswrapper[4776]: I1011 10:31:00.371097 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") "
Oct 11 10:31:00.371268 master-2 kubenswrapper[4776]: I1011 10:31:00.371252 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") "
Oct 11 10:31:00.371402 master-2 kubenswrapper[4776]: I1011 10:31:00.371364 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:00.371783 master-2 kubenswrapper[4776]: I1011 10:31:00.371761 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:00.371837 master-2 kubenswrapper[4776]: I1011 10:31:00.371785 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:00.374423 master-2 kubenswrapper[4776]: I1011 10:31:00.374361 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:31:00.473234 master-2 kubenswrapper[4776]: I1011 10:31:00.473155 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:00.497330 master-1 kubenswrapper[4771]: I1011 10:31:00.497205 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:00.497330 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:00.497330 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:00.497330 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:00.497330 master-1 kubenswrapper[4771]: I1011 10:31:00.497307 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:00.802721 master-2 kubenswrapper[4776]: I1011 10:31:00.802646 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-2_89edf964-f01b-4eaf-b627-9efa53a8f6d8/installer/0.log"
Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802727 4776 generic.go:334] "Generic (PLEG): container finished" podID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" exitCode=1
Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerDied","Data":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"}
Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802795 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerDied","Data":"71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc"}
Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802816 4776 scope.go:117] "RemoveContainer" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"
Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802933 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-2"
Oct 11 10:31:00.829203 master-2 kubenswrapper[4776]: I1011 10:31:00.829162 4776 scope.go:117] "RemoveContainer" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"
Oct 11 10:31:00.829637 master-2 kubenswrapper[4776]: E1011 10:31:00.829596 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": container with ID starting with e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443 not found: ID does not exist" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"
Oct 11 10:31:00.829692 master-2 kubenswrapper[4776]: I1011 10:31:00.829641 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"} err="failed to get container status \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": rpc error: code = NotFound desc = could not find container \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": container with ID starting with e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443 not found: ID does not exist"
Oct 11 10:31:00.844190 master-2 kubenswrapper[4776]: I1011 10:31:00.844129 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-2-master-2"]
Oct 11 10:31:00.854751 master-2 kubenswrapper[4776]: I1011 10:31:00.854657 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-2-master-2"]
Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: I1011 10:31:00.971439 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:00.972043 master-2 kubenswrapper[4776]: I1011 10:31:00.971549 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:01.498213 master-1 kubenswrapper[4771]: I1011 10:31:01.498105 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:01.498213 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:01.498213 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:01.498213 master-1 kubenswrapper[4771]: healthz check failed
Oct
11 10:31:01.499051 master-1 kubenswrapper[4771]: I1011 10:31:01.498223 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: I1011 10:31:01.969738 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:01.970797 master-2 kubenswrapper[4776]: I1011 10:31:01.969827 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:02.066393 master-2 kubenswrapper[4776]: I1011 10:31:02.066301 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" path="/var/lib/kubelet/pods/89edf964-f01b-4eaf-b627-9efa53a8f6d8/volumes" Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: I1011 10:31:02.348748 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: 
ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:02.348883 master-1 kubenswrapper[4771]: I1011 10:31:02.348854 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:02.496772 master-1 kubenswrapper[4771]: I1011 10:31:02.496651 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:02.496772 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:02.496772 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:02.496772 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:02.497288 master-1 kubenswrapper[4771]: I1011 10:31:02.496749 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:02.721909 master-1 kubenswrapper[4771]: I1011 10:31:02.721708 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:31:02.722718 master-1 kubenswrapper[4771]: E1011 10:31:02.721975 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:31:02.722718 master-1 kubenswrapper[4771]: E1011 10:31:02.722244 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:33:04.722213887 +0000 UTC m=+416.696440368 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: I1011 10:31:02.969196 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:02.969501 master-2 kubenswrapper[4776]: I1011 10:31:02.969269 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:03.206845 master-2 kubenswrapper[4776]: I1011 10:31:03.206781 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: E1011 10:31:03.207045 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: I1011 10:31:03.207062 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: I1011 10:31:03.207173 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207758 master-2 kubenswrapper[4776]: I1011 
10:31:03.207726 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.219542 master-2 kubenswrapper[4776]: I1011 10:31:03.219439 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:03.310820 master-2 kubenswrapper[4776]: I1011 10:31:03.310730 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.310820 master-2 kubenswrapper[4776]: I1011 10:31:03.310802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.311309 master-2 kubenswrapper[4776]: I1011 10:31:03.311221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412393 master-2 kubenswrapper[4776]: I1011 10:31:03.412339 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412393 master-2 kubenswrapper[4776]: I1011 10:31:03.412403 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412758 master-2 kubenswrapper[4776]: I1011 10:31:03.412466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412758 master-2 kubenswrapper[4776]: I1011 10:31:03.412615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.413012 master-2 kubenswrapper[4776]: I1011 10:31:03.412972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.497027 master-1 kubenswrapper[4771]: I1011 10:31:03.496910 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:03.497027 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:03.497027 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:03.497027 master-1 kubenswrapper[4771]: healthz check 
failed Oct 11 10:31:03.497027 master-1 kubenswrapper[4771]: I1011 10:31:03.497003 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:03.756176 master-2 kubenswrapper[4776]: I1011 10:31:03.756096 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.822264 master-2 kubenswrapper[4776]: I1011 10:31:03.822188 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: I1011 10:31:03.973715 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: I1011 10:31:03.973814 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:04.277704 master-2 kubenswrapper[4776]: I1011 10:31:04.277624 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:04.282204 
master-2 kubenswrapper[4776]: W1011 10:31:04.282127 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff524bb0_602a_4579_bac9_c3f5c19ec9ba.slice/crio-caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7 WatchSource:0}: Error finding container caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7: Status 404 returned error can't find the container with id caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7 Oct 11 10:31:04.496892 master-1 kubenswrapper[4771]: I1011 10:31:04.496785 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:04.496892 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:04.496892 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:04.496892 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:04.496892 master-1 kubenswrapper[4771]: I1011 10:31:04.496880 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: I1011 10:31:04.525061 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:04.525140 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:04.526437 master-1 kubenswrapper[4771]: I1011 10:31:04.525156 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:04.526437 master-1 kubenswrapper[4771]: I1011 10:31:04.525296 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:31:04.832095 master-2 kubenswrapper[4776]: I1011 10:31:04.832025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerStarted","Data":"5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21"} Oct 11 10:31:04.832095 master-2 kubenswrapper[4776]: I1011 10:31:04.832077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerStarted","Data":"caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7"} Oct 11 10:31:04.853769 master-2 kubenswrapper[4776]: I1011 10:31:04.853699 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-3-master-2" podStartSLOduration=1.853667879 podStartE2EDuration="1.853667879s" podCreationTimestamp="2025-10-11 10:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:04.852460556 +0000 UTC m=+299.636887305" watchObservedRunningTime="2025-10-11 10:31:04.853667879 +0000 UTC m=+299.638094588" Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: I1011 10:31:04.970110 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: I1011 10:31:04.970209 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:05.209230 master-1 kubenswrapper[4771]: I1011 10:31:05.209124 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-1_041ccbf8-b64e-4909-b9ae-35b19705838a/installer/0.log" Oct 11 10:31:05.209476 master-1 kubenswrapper[4771]: I1011 10:31:05.209453 4771 generic.go:334] "Generic (PLEG): container finished" podID="041ccbf8-b64e-4909-b9ae-35b19705838a" containerID="64183b8f0fe57cec48ea786bd6f2bde7521a6790010bcd3ba5698a2a91bb323f" exitCode=1 Oct 11 10:31:05.209571 master-1 kubenswrapper[4771]: I1011 10:31:05.209554 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"041ccbf8-b64e-4909-b9ae-35b19705838a","Type":"ContainerDied","Data":"64183b8f0fe57cec48ea786bd6f2bde7521a6790010bcd3ba5698a2a91bb323f"} Oct 11 10:31:05.443743 master-1 kubenswrapper[4771]: I1011 10:31:05.443668 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-1_041ccbf8-b64e-4909-b9ae-35b19705838a/installer/0.log" Oct 11 10:31:05.444016 master-1 kubenswrapper[4771]: I1011 10:31:05.443788 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 11 10:31:05.456147 master-1 kubenswrapper[4771]: I1011 10:31:05.456081 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir\") pod \"041ccbf8-b64e-4909-b9ae-35b19705838a\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " Oct 11 10:31:05.456461 master-1 kubenswrapper[4771]: I1011 10:31:05.456203 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "041ccbf8-b64e-4909-b9ae-35b19705838a" (UID: "041ccbf8-b64e-4909-b9ae-35b19705838a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:05.456461 master-1 kubenswrapper[4771]: I1011 10:31:05.456247 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access\") pod \"041ccbf8-b64e-4909-b9ae-35b19705838a\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " Oct 11 10:31:05.456461 master-1 kubenswrapper[4771]: I1011 10:31:05.456302 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock\") pod \"041ccbf8-b64e-4909-b9ae-35b19705838a\" (UID: \"041ccbf8-b64e-4909-b9ae-35b19705838a\") " Oct 11 10:31:05.456461 master-1 kubenswrapper[4771]: I1011 10:31:05.456388 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock" (OuterVolumeSpecName: "var-lock") pod "041ccbf8-b64e-4909-b9ae-35b19705838a" (UID: "041ccbf8-b64e-4909-b9ae-35b19705838a"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:05.456757 master-1 kubenswrapper[4771]: I1011 10:31:05.456652 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:05.456757 master-1 kubenswrapper[4771]: I1011 10:31:05.456670 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041ccbf8-b64e-4909-b9ae-35b19705838a-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:05.461530 master-1 kubenswrapper[4771]: I1011 10:31:05.461419 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "041ccbf8-b64e-4909-b9ae-35b19705838a" (UID: "041ccbf8-b64e-4909-b9ae-35b19705838a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:05.497414 master-1 kubenswrapper[4771]: I1011 10:31:05.497340 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:05.497414 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:05.497414 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:05.497414 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:05.498234 master-1 kubenswrapper[4771]: I1011 10:31:05.497441 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:05.557917 master-1 kubenswrapper[4771]: I1011 10:31:05.557879 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041ccbf8-b64e-4909-b9ae-35b19705838a-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:05.925729 master-2 kubenswrapper[4776]: I1011 10:31:05.925632 4776 kubelet.go:1505] "Image garbage collection succeeded" Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: I1011 10:31:05.969087 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:05.969426 master-2 kubenswrapper[4776]: I1011 
10:31:05.969161 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:06.220430 master-1 kubenswrapper[4771]: I1011 10:31:06.220334 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-1_041ccbf8-b64e-4909-b9ae-35b19705838a/installer/0.log" Oct 11 10:31:06.220879 master-1 kubenswrapper[4771]: I1011 10:31:06.220839 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"041ccbf8-b64e-4909-b9ae-35b19705838a","Type":"ContainerDied","Data":"50bab3edfacf7962918848de1e7a735ed5d3ec7f5b1d816706993472067c79b5"} Oct 11 10:31:06.221070 master-1 kubenswrapper[4771]: I1011 10:31:06.220919 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 11 10:31:06.221180 master-1 kubenswrapper[4771]: I1011 10:31:06.221035 4771 scope.go:117] "RemoveContainer" containerID="64183b8f0fe57cec48ea786bd6f2bde7521a6790010bcd3ba5698a2a91bb323f" Oct 11 10:31:06.268056 master-1 kubenswrapper[4771]: I1011 10:31:06.267941 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 11 10:31:06.275160 master-1 kubenswrapper[4771]: I1011 10:31:06.275122 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 11 10:31:06.447587 master-1 kubenswrapper[4771]: I1011 10:31:06.447477 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041ccbf8-b64e-4909-b9ae-35b19705838a" path="/var/lib/kubelet/pods/041ccbf8-b64e-4909-b9ae-35b19705838a/volumes" Oct 11 10:31:06.496969 master-1 
kubenswrapper[4771]: I1011 10:31:06.496766 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:06.496969 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:06.496969 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:06.496969 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:06.496969 master-1 kubenswrapper[4771]: I1011 10:31:06.496866 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: I1011 10:31:06.968741 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: I1011 10:31:06.968834 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: I1011 10:31:07.345527 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:07.345613 master-1 kubenswrapper[4771]: I1011 10:31:07.345590 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:07.497342 master-1 kubenswrapper[4771]: I1011 10:31:07.497028 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:07.497342 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:07.497342 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:07.497342 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:07.497342 master-1 kubenswrapper[4771]: I1011 10:31:07.497136 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: I1011 10:31:07.969530 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: I1011 10:31:07.969602 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:08.497570 master-1 kubenswrapper[4771]: I1011 10:31:08.497433 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:08.497570 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:08.497570 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:08.497570 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:08.497570 master-1 kubenswrapper[4771]: I1011 10:31:08.497542 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: I1011 10:31:08.969978 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:08.970909 master-2 kubenswrapper[4776]: I1011 10:31:08.970069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:09.497868 master-1 kubenswrapper[4771]: I1011 10:31:09.497733 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:09.497868 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:09.497868 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:09.497868 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:09.499026 master-1 kubenswrapper[4771]: I1011 10:31:09.497862 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: I1011 10:31:09.522489 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:09.522571 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:09.523949 master-1 kubenswrapper[4771]: I1011 10:31:09.522581 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: I1011 10:31:09.970606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: I1011 10:31:09.970711 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:10.120051 master-2 kubenswrapper[4776]: E1011 10:31:10.119942 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-conmon-6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df.scope\": RecentStats: unable to find data in memory cache]"
Oct 11 10:31:10.489757 master-2 kubenswrapper[4776]: I1011 10:31:10.489644 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-2_ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/installer/0.log"
Oct 11 10:31:10.489757 master-2 kubenswrapper[4776]: I1011 10:31:10.489736 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-2"
Oct 11 10:31:10.497208 master-1 kubenswrapper[4771]: I1011 10:31:10.497070 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:10.497208 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:10.497208 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:10.497208 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:10.497208 master-1 kubenswrapper[4771]: I1011 10:31:10.497184 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599836 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") "
Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599917 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") "
Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599993 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") "
Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600114 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600198 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600389 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600414 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:10.602457 master-2 kubenswrapper[4776]: I1011 10:31:10.602388 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:31:10.701134 master-2 kubenswrapper[4776]: I1011 10:31:10.701074 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867118 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-2_ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/installer/0.log"
Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867191 4776 generic.go:334] "Generic (PLEG): container finished" podID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" exitCode=1
Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867228 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerDied","Data":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"}
Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867248 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-2"
Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867275 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerDied","Data":"6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f"}
Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867316 4776 scope.go:117] "RemoveContainer" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"
Oct 11 10:31:10.880345 master-2 kubenswrapper[4776]: I1011 10:31:10.880293 4776 scope.go:117] "RemoveContainer" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"
Oct 11 10:31:10.880953 master-2 kubenswrapper[4776]: E1011 10:31:10.880891 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": container with ID starting with 6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df not found: ID does not exist" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"
Oct 11 10:31:10.881075 master-2 kubenswrapper[4776]: I1011 10:31:10.880948 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"} err="failed to get container status \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": rpc error: code = NotFound desc = could not find container \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": container with ID starting with 6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df not found: ID does not exist"
Oct 11 10:31:10.911981 master-2 kubenswrapper[4776]: I1011 10:31:10.911904 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-2"]
Oct 11 10:31:10.915126 master-2 kubenswrapper[4776]: I1011 10:31:10.915079 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-1-master-2"]
Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: I1011 10:31:10.969419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:10.969748 master-2 kubenswrapper[4776]: I1011 10:31:10.969507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:11.497407 master-1 kubenswrapper[4771]: I1011 10:31:11.497281 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:11.497407 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:11.497407 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:11.497407 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:11.497407 master-1 kubenswrapper[4771]: I1011 10:31:11.497403 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: I1011 10:31:11.970225 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:11.971282 master-2 kubenswrapper[4776]: I1011 10:31:11.970376 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:12.070253 master-2 kubenswrapper[4776]: I1011 10:31:12.070168 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" path="/var/lib/kubelet/pods/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/volumes"
Oct 11 10:31:12.282591 master-1 kubenswrapper[4771]: I1011 10:31:12.282517 4771 kubelet.go:1505] "Image garbage collection succeeded"
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: I1011 10:31:12.349059 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:12.349173 master-1 kubenswrapper[4771]: I1011 10:31:12.349152 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:12.497894 master-1 kubenswrapper[4771]: I1011 10:31:12.497538 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:12.497894 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:12.497894 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:12.497894 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:12.497894 master-1 kubenswrapper[4771]: I1011 10:31:12.497619 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:12.758056 master-1 kubenswrapper[4771]: I1011 10:31:12.757967 4771 patch_prober.go:28] interesting pod/machine-config-daemon-9nzpz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 10:31:12.758465 master-1 kubenswrapper[4771]: I1011 10:31:12.758067 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9nzpz" podUID="ebb73d72-cbb7-4736-870e-79e86c9fa7f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: I1011 10:31:12.970400 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:12.971235 master-2 kubenswrapper[4776]: I1011 10:31:12.970558 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:13.496927 master-1 kubenswrapper[4771]: I1011 10:31:13.496835 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:13.496927 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:13.496927 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:13.496927 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:13.497412 master-1 kubenswrapper[4771]: I1011 10:31:13.496936 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: I1011 10:31:13.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:13.971358 master-2 kubenswrapper[4776]: I1011 10:31:13.970423 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:14.497679 master-1 kubenswrapper[4771]: I1011 10:31:14.497547 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:14.497679 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:14.497679 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:14.497679 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:14.497679 master-1 kubenswrapper[4771]: I1011 10:31:14.497660 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: I1011 10:31:14.524233 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:14.524308 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:14.525502 master-1 kubenswrapper[4771]: I1011 10:31:14.524326 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: I1011 10:31:14.970775 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: I1011 10:31:14.970875 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:15.497725 master-1 kubenswrapper[4771]: I1011 10:31:15.497607 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:15.497725 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:15.497725 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:15.497725 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:15.497725 master-1 kubenswrapper[4771]: I1011 10:31:15.497718 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: I1011 10:31:15.970444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: I1011 10:31:15.970563 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:16.497384 master-1 kubenswrapper[4771]: I1011 10:31:16.497272 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:16.497384 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:16.497384 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:16.497384 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:16.498420 master-1 kubenswrapper[4771]: I1011 10:31:16.497416 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: I1011 10:31:16.970896 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: I1011 10:31:16.970970 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: I1011 10:31:17.349545 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:17.349679 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:17.350863 master-1 kubenswrapper[4771]: I1011 10:31:17.349685 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:17.498651 master-1 kubenswrapper[4771]: I1011 10:31:17.498554 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:17.498651 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:17.498651 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:17.498651 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:17.499767 master-1 kubenswrapper[4771]: I1011 10:31:17.498682 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:17.797585 master-1
kubenswrapper[4771]: I1011 10:31:17.797224 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:31:17.807920 master-2 kubenswrapper[4776]: I1011 10:31:17.807823 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: I1011 10:31:17.969407 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: I1011 10:31:17.969507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:18.497847 master-1 kubenswrapper[4771]: I1011 10:31:18.497738 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:18.497847 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:18.497847 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:18.497847 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:18.498339 master-1 
kubenswrapper[4771]: I1011 10:31:18.497840 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: I1011 10:31:18.969917 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: I1011 10:31:18.970011 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:19.496919 master-1 kubenswrapper[4771]: I1011 10:31:19.496846 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:19.496919 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:19.496919 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:19.496919 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:19.497679 master-1 kubenswrapper[4771]: I1011 10:31:19.497598 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: I1011 10:31:19.525176 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 
10:31:19.525241 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:19.525241 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:19.526085 master-1 kubenswrapper[4771]: I1011 10:31:19.525265 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: I1011 10:31:19.969845 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: I1011 10:31:19.969928 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:20.497669 master-1 kubenswrapper[4771]: I1011 10:31:20.497575 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:20.497669 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:20.497669 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:20.497669 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:20.498390 master-1 kubenswrapper[4771]: 
I1011 10:31:20.497671 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: I1011 10:31:20.969545 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:20.969951 master-2 kubenswrapper[4776]: I1011 10:31:20.969643 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:21.497138 master-1 kubenswrapper[4771]: I1011 10:31:21.497012 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:21.497138 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:21.497138 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:21.497138 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:21.497138 master-1 kubenswrapper[4771]: I1011 10:31:21.497111 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: I1011 10:31:21.970264 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:21.971860 master-2 kubenswrapper[4776]: I1011 10:31:21.970385 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: I1011 10:31:22.349261 4771 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-skwvw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:22.349351 master-1 
kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:22.349351 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:22.350075 master-1 kubenswrapper[4771]: I1011 10:31:22.349386 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:22.497540 master-1 kubenswrapper[4771]: I1011 10:31:22.497450 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:22.497540 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:31:22.497540 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:31:22.497540 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:31:22.497540 master-1 kubenswrapper[4771]: I1011 10:31:22.497527 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:22.755419 master-1 kubenswrapper[4771]: I1011 10:31:22.755258 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:31:22.755839 master-1 kubenswrapper[4771]: I1011 10:31:22.755800 4771 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" containerID="cri-o://0400db595d18039edaf6ab7ccb3c1b1a3510ae9588fc33a6a91a15e993a6d1a4" gracePeriod=30 Oct 11 10:31:22.756049 master-1 kubenswrapper[4771]: I1011 10:31:22.755915 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://4f12c3536caf37d890a386fecb2c94e5fc57775602e9a539771326b213c3ae7e" gracePeriod=30 Oct 11 10:31:22.756192 master-1 kubenswrapper[4771]: I1011 10:31:22.755989 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://27a52449e5ec1bd52177b8ae4e5229c8bc4e5a7be149b07a0e7cb307be3932da" gracePeriod=30 Oct 11 10:31:22.756614 master-1 kubenswrapper[4771]: I1011 10:31:22.756558 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: E1011 10:31:22.756818 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041ccbf8-b64e-4909-b9ae-35b19705838a" containerName="installer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.756839 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="041ccbf8-b64e-4909-b9ae-35b19705838a" containerName="installer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: E1011 10:31:22.756851 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.756860 4771 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: E1011 10:31:22.756879 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="wait-for-host-port" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.756888 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="wait-for-host-port" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: E1011 10:31:22.756896 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.756903 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: E1011 10:31:22.756916 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.756924 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.757010 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="041ccbf8-b64e-4909-b9ae-35b19705838a" containerName="installer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.757024 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.757036 4771 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" Oct 11 10:31:22.757314 master-1 kubenswrapper[4771]: I1011 10:31:22.757046 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller" Oct 11 10:31:22.778012 master-1 kubenswrapper[4771]: I1011 10:31:22.777934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.778120 master-1 kubenswrapper[4771]: I1011 10:31:22.778046 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.880433 master-1 kubenswrapper[4771]: I1011 10:31:22.880302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.880627 master-1 kubenswrapper[4771]: I1011 10:31:22.880562 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 
10:31:22.880742 master-1 kubenswrapper[4771]: I1011 10:31:22.880613 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.880840 master-1 kubenswrapper[4771]: I1011 10:31:22.880672 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.925861 master-1 kubenswrapper[4771]: I1011 10:31:22.925776 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-cert-syncer/0.log" Oct 11 10:31:22.927221 master-1 kubenswrapper[4771]: I1011 10:31:22.927148 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:22.934320 master-1 kubenswrapper[4771]: I1011 10:31:22.934232 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758" Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: I1011 10:31:22.970233 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:22.971324 master-2 kubenswrapper[4776]: I1011 10:31:22.970371 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:22.981789 master-1 kubenswrapper[4771]: I1011 10:31:22.981686 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"89fad8183e18ab3ad0c46d272335e5f8\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " Oct 11 10:31:22.982053 master-1 kubenswrapper[4771]: I1011 10:31:22.981880 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"89fad8183e18ab3ad0c46d272335e5f8\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " Oct 11 
10:31:22.982053 master-1 kubenswrapper[4771]: I1011 10:31:22.981914 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "89fad8183e18ab3ad0c46d272335e5f8" (UID: "89fad8183e18ab3ad0c46d272335e5f8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:22.982193 master-1 kubenswrapper[4771]: I1011 10:31:22.982087 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "89fad8183e18ab3ad0c46d272335e5f8" (UID: "89fad8183e18ab3ad0c46d272335e5f8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:22.982583 master-1 kubenswrapper[4771]: I1011 10:31:22.982541 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:22.982583 master-1 kubenswrapper[4771]: I1011 10:31:22.982578 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:23.324692 master-1 kubenswrapper[4771]: I1011 10:31:23.324591 4771 generic.go:334] "Generic (PLEG): container finished" podID="fefe846e-30ef-4097-ac69-f771a74b2b98" containerID="91d13e47f19b0473725a534d9929ad8d4221ea196c8d107ca009b7a28f766686" exitCode=0 Oct 11 10:31:23.325004 master-1 kubenswrapper[4771]: I1011 10:31:23.324722 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"fefe846e-30ef-4097-ac69-f771a74b2b98","Type":"ContainerDied","Data":"91d13e47f19b0473725a534d9929ad8d4221ea196c8d107ca009b7a28f766686"} 
Oct 11 10:31:23.327934 master-1 kubenswrapper[4771]: I1011 10:31:23.327861 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-cert-syncer/0.log"
Oct 11 10:31:23.329562 master-1 kubenswrapper[4771]: I1011 10:31:23.329511 4771 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="4f12c3536caf37d890a386fecb2c94e5fc57775602e9a539771326b213c3ae7e" exitCode=0
Oct 11 10:31:23.329562 master-1 kubenswrapper[4771]: I1011 10:31:23.329549 4771 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="27a52449e5ec1bd52177b8ae4e5229c8bc4e5a7be149b07a0e7cb307be3932da" exitCode=2
Oct 11 10:31:23.329562 master-1 kubenswrapper[4771]: I1011 10:31:23.329565 4771 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="0400db595d18039edaf6ab7ccb3c1b1a3510ae9588fc33a6a91a15e993a6d1a4" exitCode=0
Oct 11 10:31:23.329813 master-1 kubenswrapper[4771]: I1011 10:31:23.329608 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafca73396f947e9fa263ed96b26d1a45ed0144ffb97a2f796fec9628cf617b5"
Oct 11 10:31:23.329813 master-1 kubenswrapper[4771]: I1011 10:31:23.329652 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:31:23.353458 master-1 kubenswrapper[4771]: I1011 10:31:23.353326 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758"
Oct 11 10:31:23.361383 master-1 kubenswrapper[4771]: I1011 10:31:23.361296 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758"
Oct 11 10:31:23.496315 master-1 kubenswrapper[4771]: I1011 10:31:23.496198 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:23.496315 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:23.496315 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:23.496315 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:23.496899 master-1 kubenswrapper[4771]: I1011 10:31:23.496325 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: I1011 10:31:23.971065 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: I1011 10:31:23.971159 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:24.450392 master-1 kubenswrapper[4771]: I1011 10:31:24.450295 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fad8183e18ab3ad0c46d272335e5f8" path="/var/lib/kubelet/pods/89fad8183e18ab3ad0c46d272335e5f8/volumes"
Oct 11 10:31:24.498052 master-1 kubenswrapper[4771]: I1011 10:31:24.497962 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:24.498052 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:24.498052 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:24.498052 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:24.498330 master-1 kubenswrapper[4771]: I1011 10:31:24.498050 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: I1011 10:31:24.524171 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:24.524285 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:24.525326 master-1 kubenswrapper[4771]: I1011 10:31:24.524330 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:24.760167 master-1 kubenswrapper[4771]: I1011 10:31:24.760090 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1"
Oct 11 10:31:24.806060 master-1 kubenswrapper[4771]: I1011 10:31:24.805991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir\") pod \"fefe846e-30ef-4097-ac69-f771a74b2b98\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") "
Oct 11 10:31:24.806310 master-1 kubenswrapper[4771]: I1011 10:31:24.806083 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access\") pod \"fefe846e-30ef-4097-ac69-f771a74b2b98\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") "
Oct 11 10:31:24.806310 master-1 kubenswrapper[4771]: I1011 10:31:24.806132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock\") pod \"fefe846e-30ef-4097-ac69-f771a74b2b98\" (UID: \"fefe846e-30ef-4097-ac69-f771a74b2b98\") "
Oct 11 10:31:24.806310 master-1 kubenswrapper[4771]: I1011 10:31:24.806189 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fefe846e-30ef-4097-ac69-f771a74b2b98" (UID: "fefe846e-30ef-4097-ac69-f771a74b2b98"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:24.806643 master-1 kubenswrapper[4771]: I1011 10:31:24.806418 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock" (OuterVolumeSpecName: "var-lock") pod "fefe846e-30ef-4097-ac69-f771a74b2b98" (UID: "fefe846e-30ef-4097-ac69-f771a74b2b98"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:24.806830 master-1 kubenswrapper[4771]: I1011 10:31:24.806744 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:24.806930 master-1 kubenswrapper[4771]: I1011 10:31:24.806839 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fefe846e-30ef-4097-ac69-f771a74b2b98-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:24.811151 master-1 kubenswrapper[4771]: I1011 10:31:24.811085 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fefe846e-30ef-4097-ac69-f771a74b2b98" (UID: "fefe846e-30ef-4097-ac69-f771a74b2b98"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:31:24.908680 master-1 kubenswrapper[4771]: I1011 10:31:24.908566 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fefe846e-30ef-4097-ac69-f771a74b2b98-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: I1011 10:31:24.970119 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:24.970518 master-2 kubenswrapper[4776]: I1011 10:31:24.970280 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:25.341475 master-1 kubenswrapper[4771]: I1011 10:31:25.341386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"fefe846e-30ef-4097-ac69-f771a74b2b98","Type":"ContainerDied","Data":"c1459d1f4e5756af7a1eb4e6fee99340755fd65bcc4e328af030a32aa61bf860"}
Oct 11 10:31:25.341475 master-1 kubenswrapper[4771]: I1011 10:31:25.341453 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1"
Oct 11 10:31:25.341475 master-1 kubenswrapper[4771]: I1011 10:31:25.341477 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1459d1f4e5756af7a1eb4e6fee99340755fd65bcc4e328af030a32aa61bf860"
Oct 11 10:31:25.497947 master-1 kubenswrapper[4771]: I1011 10:31:25.497846 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:25.497947 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:25.497947 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:25.497947 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:25.497947 master-1 kubenswrapper[4771]: I1011 10:31:25.497936 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: I1011 10:31:25.969428 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: I1011 10:31:25.969504 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:26.336597 master-1 kubenswrapper[4771]: I1011 10:31:26.336523 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:31:26.347625 master-1 kubenswrapper[4771]: I1011 10:31:26.347543 4771 generic.go:334] "Generic (PLEG): container finished" podID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerID="913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68" exitCode=0
Oct 11 10:31:26.347625 master-1 kubenswrapper[4771]: I1011 10:31:26.347608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" event={"ID":"004ee387-d0e9-4582-ad14-f571832ebd6e","Type":"ContainerDied","Data":"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"}
Oct 11 10:31:26.347625 master-1 kubenswrapper[4771]: I1011 10:31:26.347618 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"
Oct 11 10:31:26.347904 master-1 kubenswrapper[4771]: I1011 10:31:26.347648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw" event={"ID":"004ee387-d0e9-4582-ad14-f571832ebd6e","Type":"ContainerDied","Data":"70ee09355a354a55a1e3cc86654a95e054448e4680cbf989813075d48bc93f03"}
Oct 11 10:31:26.347904 master-1 kubenswrapper[4771]: I1011 10:31:26.347681 4771 scope.go:117] "RemoveContainer" containerID="913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"
Oct 11 10:31:26.365337 master-1 kubenswrapper[4771]: I1011 10:31:26.365296 4771 scope.go:117] "RemoveContainer" containerID="e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346"
Oct 11 10:31:26.387020 master-1 kubenswrapper[4771]: I1011 10:31:26.386929 4771 scope.go:117] "RemoveContainer" containerID="913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"
Oct 11 10:31:26.387742 master-1 kubenswrapper[4771]: E1011 10:31:26.387687 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68\": container with ID starting with 913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68 not found: ID does not exist" containerID="913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"
Oct 11 10:31:26.387914 master-1 kubenswrapper[4771]: I1011 10:31:26.387753 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68"} err="failed to get container status \"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68\": rpc error: code = NotFound desc = could not find container \"913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68\": container with ID starting with 913475a2d9db4e326668ab52c1e0002f7c3998815697a186b12ad2e219935d68 not found: ID does not exist"
Oct 11 10:31:26.387914 master-1 kubenswrapper[4771]: I1011 10:31:26.387788 4771 scope.go:117] "RemoveContainer" containerID="e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346"
Oct 11 10:31:26.388431 master-1 kubenswrapper[4771]: E1011 10:31:26.388319 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346\": container with ID starting with e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346 not found: ID does not exist" containerID="e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346"
Oct 11 10:31:26.388431 master-1 kubenswrapper[4771]: I1011 10:31:26.388384 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346"} err="failed to get container status \"e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346\": rpc error: code = NotFound desc = could not find container \"e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346\": container with ID starting with e7797162c7d48146c8bfccf87f14747095a7d1d8794c4946a5c000f5385fa346 not found: ID does not exist"
Oct 11 10:31:26.428163 master-1 kubenswrapper[4771]: I1011 10:31:26.428096 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428163 master-1 kubenswrapper[4771]: I1011 10:31:26.428196 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428285 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428318 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428404 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428462 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj8w\" (UniqueName: \"kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.428601 master-1 kubenswrapper[4771]: I1011 10:31:26.428578 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") pod \"004ee387-d0e9-4582-ad14-f571832ebd6e\" (UID: \"004ee387-d0e9-4582-ad14-f571832ebd6e\") "
Oct 11 10:31:26.429189 master-1 kubenswrapper[4771]: I1011 10:31:26.428894 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.429189 master-1 kubenswrapper[4771]: I1011 10:31:26.429149 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:31:26.429414 master-1 kubenswrapper[4771]: I1011 10:31:26.429312 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:31:26.429507 master-1 kubenswrapper[4771]: I1011 10:31:26.429470 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:31:26.432907 master-1 kubenswrapper[4771]: I1011 10:31:26.432840 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:31:26.433052 master-1 kubenswrapper[4771]: I1011 10:31:26.433018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w" (OuterVolumeSpecName: "kube-api-access-vrj8w") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "kube-api-access-vrj8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:31:26.433804 master-1 kubenswrapper[4771]: I1011 10:31:26.433736 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:31:26.434409 master-1 kubenswrapper[4771]: I1011 10:31:26.434279 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "004ee387-d0e9-4582-ad14-f571832ebd6e" (UID: "004ee387-d0e9-4582-ad14-f571832ebd6e"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:31:26.497672 master-1 kubenswrapper[4771]: I1011 10:31:26.497553 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:26.497672 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:26.497672 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:26.497672 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:26.498022 master-1 kubenswrapper[4771]: I1011 10:31:26.497676 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530482 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530540 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530565 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530586 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530610 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/004ee387-d0e9-4582-ad14-f571832ebd6e-audit-policies\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.530633 master-1 kubenswrapper[4771]: I1011 10:31:26.530629 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj8w\" (UniqueName: \"kubernetes.io/projected/004ee387-d0e9-4582-ad14-f571832ebd6e-kube-api-access-vrj8w\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.531092 master-1 kubenswrapper[4771]: I1011 10:31:26.530649 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/004ee387-d0e9-4582-ad14-f571832ebd6e-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:31:26.607157 master-1 kubenswrapper[4771]: I1011 10:31:26.607059 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:31:26.607491 master-1 kubenswrapper[4771]: I1011 10:31:26.607157 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:31:26.680640 master-1 kubenswrapper[4771]: I1011 10:31:26.680462 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"]
Oct 11 10:31:26.689523 master-1 kubenswrapper[4771]: I1011 10:31:26.689406 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw"]
Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: I1011 10:31:26.969136 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:26.970436 master-2 kubenswrapper[4776]: I1011 10:31:26.969245 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:27.496831 master-1 kubenswrapper[4771]: I1011 10:31:27.496722 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:27.496831 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:27.496831 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:27.496831 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:27.497503 master-1 kubenswrapper[4771]: I1011 10:31:27.496837 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: I1011 10:31:27.969263 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:27.970817 master-2 kubenswrapper[4776]: I1011 10:31:27.969395 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:28.446194 master-1 kubenswrapper[4771]: I1011 10:31:28.446114 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" path="/var/lib/kubelet/pods/004ee387-d0e9-4582-ad14-f571832ebd6e/volumes"
Oct 11 10:31:28.498013 master-1 kubenswrapper[4771]: I1011 10:31:28.497935 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:28.498013 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:28.498013 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:28.498013 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:28.498424 master-1 kubenswrapper[4771]: I1011 10:31:28.498029 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: I1011 10:31:28.969489 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: I1011 10:31:28.969552 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:29.496982 master-1 kubenswrapper[4771]: I1011 10:31:29.496880 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:29.496982 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:29.496982 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:29.496982 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:29.496982 master-1 kubenswrapper[4771]: I1011 10:31:29.496959 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: I1011 10:31:29.524205 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:31:29.524332 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:31:29.525383 master-1 kubenswrapper[4771]: I1011 10:31:29.524410 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: I1011 10:31:29.969566 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: I1011 10:31:29.969637 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.969722 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.970239 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted"
Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.970282 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" gracePeriod=3600
Oct 11 10:31:30.497197 master-1 kubenswrapper[4771]: I1011 10:31:30.497097 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:31:30.497197 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:31:30.497197 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:31:30.497197 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:31:30.498498 master-1 kubenswrapper[4771]: I1011 10:31:30.497204 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:31:30.498498 master-1 kubenswrapper[4771]: I1011 10:31:30.497309 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-z5t6x"
Oct 11 10:31:30.498498 master-1 kubenswrapper[4771]: I1011 10:31:30.498071 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604"}
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:31:30.498498 master-1 kubenswrapper[4771]: I1011 10:31:30.498136 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" containerID="cri-o://d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604" gracePeriod=3600 Oct 11 10:31:31.607092 master-1 kubenswrapper[4771]: I1011 10:31:31.606988 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:31.607092 master-1 kubenswrapper[4771]: I1011 10:31:31.607074 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: I1011 10:31:34.523308 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 
10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:34.523426 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:34.525541 master-1 kubenswrapper[4771]: I1011 10:31:34.523441 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:35.730911 master-1 kubenswrapper[4771]: I1011 10:31:35.730797 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"] Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: E1011 10:31:35.731091 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fefe846e-30ef-4097-ac69-f771a74b2b98" containerName="installer" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: I1011 10:31:35.731114 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefe846e-30ef-4097-ac69-f771a74b2b98" containerName="installer" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: E1011 10:31:35.731131 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: I1011 10:31:35.731144 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: E1011 10:31:35.731163 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="fix-audit-permissions" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: I1011 10:31:35.731186 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="fix-audit-permissions" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: I1011 10:31:35.731333 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="004ee387-d0e9-4582-ad14-f571832ebd6e" containerName="oauth-apiserver" Oct 11 10:31:35.731738 master-1 kubenswrapper[4771]: I1011 10:31:35.731386 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefe846e-30ef-4097-ac69-f771a74b2b98" containerName="installer" Oct 11 10:31:35.732284 master-1 kubenswrapper[4771]: I1011 10:31:35.732232 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.736195 master-1 kubenswrapper[4771]: I1011 10:31:35.736124 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:31:35.736912 master-1 kubenswrapper[4771]: I1011 10:31:35.736859 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:31:35.737616 master-1 kubenswrapper[4771]: I1011 10:31:35.737491 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:31:35.737811 master-1 kubenswrapper[4771]: I1011 10:31:35.737678 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:31:35.737811 master-1 kubenswrapper[4771]: I1011 10:31:35.737784 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:31:35.738172 master-1 kubenswrapper[4771]: I1011 10:31:35.737847 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:31:35.738172 master-1 kubenswrapper[4771]: I1011 10:31:35.737959 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:31:35.738427 master-1 kubenswrapper[4771]: I1011 10:31:35.738235 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:31:35.751283 master-1 kubenswrapper[4771]: I1011 10:31:35.751216 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"] Oct 11 10:31:35.852556 master-1 kubenswrapper[4771]: I1011 10:31:35.852437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852556 master-1 kubenswrapper[4771]: I1011 10:31:35.852524 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852556 master-1 kubenswrapper[4771]: I1011 10:31:35.852569 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852957 master-1 kubenswrapper[4771]: I1011 10:31:35.852643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852957 master-1 kubenswrapper[4771]: I1011 10:31:35.852731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kldk2\" (UniqueName: \"kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852957 
master-1 kubenswrapper[4771]: I1011 10:31:35.852799 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.852957 master-1 kubenswrapper[4771]: I1011 10:31:35.852839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.853249 master-1 kubenswrapper[4771]: I1011 10:31:35.853012 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954102 master-1 kubenswrapper[4771]: I1011 10:31:35.954013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954271 master-1 kubenswrapper[4771]: I1011 10:31:35.954104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: 
\"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954271 master-1 kubenswrapper[4771]: I1011 10:31:35.954142 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954271 master-1 kubenswrapper[4771]: I1011 10:31:35.954166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954271 master-1 kubenswrapper[4771]: I1011 10:31:35.954211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954271 master-1 kubenswrapper[4771]: I1011 10:31:35.954244 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kldk2\" (UniqueName: \"kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954613 master-1 kubenswrapper[4771]: I1011 10:31:35.954293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954613 master-1 kubenswrapper[4771]: I1011 10:31:35.954321 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.954987 master-1 kubenswrapper[4771]: I1011 10:31:35.954905 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.955414 master-1 kubenswrapper[4771]: I1011 10:31:35.955332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.955639 master-1 kubenswrapper[4771]: I1011 10:31:35.955507 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.956249 master-1 kubenswrapper[4771]: I1011 10:31:35.956159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.959638 master-1 kubenswrapper[4771]: I1011 10:31:35.959566 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.959855 master-1 kubenswrapper[4771]: I1011 10:31:35.959702 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.960449 master-1 kubenswrapper[4771]: I1011 10:31:35.960344 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:35.988732 master-1 kubenswrapper[4771]: I1011 10:31:35.988588 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kldk2\" (UniqueName: \"kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2\") pod \"apiserver-6f855d6bcf-cwmmk\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:36.055825 master-1 kubenswrapper[4771]: I1011 10:31:36.055631 4771 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:36.540436 master-1 kubenswrapper[4771]: I1011 10:31:36.540337 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"] Oct 11 10:31:36.547590 master-1 kubenswrapper[4771]: W1011 10:31:36.547473 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87cc032_b419_444c_8bf0_ef7405d7369d.slice/crio-f8786873b90c54bfb0b515ad88ba2ef097b9f25b5ded48493272a640d89c1d55 WatchSource:0}: Error finding container f8786873b90c54bfb0b515ad88ba2ef097b9f25b5ded48493272a640d89c1d55: Status 404 returned error can't find the container with id f8786873b90c54bfb0b515ad88ba2ef097b9f25b5ded48493272a640d89c1d55 Oct 11 10:31:36.607736 master-1 kubenswrapper[4771]: I1011 10:31:36.607619 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:36.608059 master-1 kubenswrapper[4771]: I1011 10:31:36.607747 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:36.608059 master-1 kubenswrapper[4771]: I1011 10:31:36.607939 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:31:36.609568 master-1 kubenswrapper[4771]: I1011 10:31:36.609496 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard 
namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:36.609728 master-1 kubenswrapper[4771]: I1011 10:31:36.609582 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:37.416393 master-1 kubenswrapper[4771]: I1011 10:31:37.416249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" event={"ID":"d87cc032-b419-444c-8bf0-ef7405d7369d","Type":"ContainerDied","Data":"cc3604bd3c6d5088cac6e57645a1372932a2b915b7df557349ccea609bf9af52"} Oct 11 10:31:37.417531 master-1 kubenswrapper[4771]: I1011 10:31:37.417443 4771 generic.go:334] "Generic (PLEG): container finished" podID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerID="cc3604bd3c6d5088cac6e57645a1372932a2b915b7df557349ccea609bf9af52" exitCode=0 Oct 11 10:31:37.417655 master-1 kubenswrapper[4771]: I1011 10:31:37.417563 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" event={"ID":"d87cc032-b419-444c-8bf0-ef7405d7369d","Type":"ContainerStarted","Data":"f8786873b90c54bfb0b515ad88ba2ef097b9f25b5ded48493272a640d89c1d55"} Oct 11 10:31:38.049090 master-1 kubenswrapper[4771]: I1011 10:31:38.049007 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:31:38.051997 master-1 kubenswrapper[4771]: I1011 10:31:38.051551 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.082421 master-1 kubenswrapper[4771]: I1011 10:31:38.079082 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.082421 master-1 kubenswrapper[4771]: I1011 10:31:38.079535 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.082421 master-1 kubenswrapper[4771]: I1011 10:31:38.079609 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.099569 master-1 kubenswrapper[4771]: I1011 10:31:38.099514 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:31:38.181125 master-1 kubenswrapper[4771]: I1011 10:31:38.181042 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.181125 master-1 kubenswrapper[4771]: I1011 10:31:38.181108 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.181442 master-1 kubenswrapper[4771]: I1011 10:31:38.181171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.181442 master-1 kubenswrapper[4771]: I1011 10:31:38.181242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.181442 master-1 kubenswrapper[4771]: I1011 10:31:38.181239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.181442 master-1 kubenswrapper[4771]: I1011 10:31:38.181339 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.395519 master-1 kubenswrapper[4771]: I1011 10:31:38.395288 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:38.421957 master-1 kubenswrapper[4771]: W1011 10:31:38.421912 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b1362996d1e0c2cea0bee73eb18468.slice/crio-8e5c43de2cc367ec1c4a7e349d6c7a564b4d6502ecb1a203dea90aba296e3027 WatchSource:0}: Error finding container 8e5c43de2cc367ec1c4a7e349d6c7a564b4d6502ecb1a203dea90aba296e3027: Status 404 returned error can't find the container with id 8e5c43de2cc367ec1c4a7e349d6c7a564b4d6502ecb1a203dea90aba296e3027 Oct 11 10:31:38.426228 master-1 kubenswrapper[4771]: I1011 10:31:38.426151 4771 generic.go:334] "Generic (PLEG): container finished" podID="f5e7e1ec-47a8-4283-9119-0d9d1343963e" containerID="d38cc7e81ae0071969a185999498646cddc10ee8b65bed60da29b4c1f46a55dc" exitCode=0 Oct 11 10:31:38.426416 master-1 kubenswrapper[4771]: I1011 10:31:38.426219 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"f5e7e1ec-47a8-4283-9119-0d9d1343963e","Type":"ContainerDied","Data":"d38cc7e81ae0071969a185999498646cddc10ee8b65bed60da29b4c1f46a55dc"} Oct 11 10:31:38.429254 master-1 kubenswrapper[4771]: I1011 10:31:38.429189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" event={"ID":"d87cc032-b419-444c-8bf0-ef7405d7369d","Type":"ContainerStarted","Data":"79f8e8a3af9681261cf6c96297e08774526c159a1df96245fda7d956c1a72204"} Oct 11 10:31:38.436125 master-1 kubenswrapper[4771]: I1011 10:31:38.436067 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:38.496925 master-1 kubenswrapper[4771]: I1011 10:31:38.496874 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="309f472e-b390-4e80-9837-cda7353ae2b9" Oct 11 10:31:38.497078 master-1 kubenswrapper[4771]: I1011 10:31:38.496931 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="309f472e-b390-4e80-9837-cda7353ae2b9" Oct 11 10:31:38.501667 master-1 kubenswrapper[4771]: I1011 10:31:38.501567 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podStartSLOduration=64.501539905 podStartE2EDuration="1m4.501539905s" podCreationTimestamp="2025-10-11 10:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:38.497178623 +0000 UTC m=+330.471405144" watchObservedRunningTime="2025-10-11 10:31:38.501539905 +0000 UTC m=+330.475766376" Oct 11 10:31:38.513426 master-1 kubenswrapper[4771]: I1011 10:31:38.513070 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:38.514782 master-1 kubenswrapper[4771]: I1011 10:31:38.514752 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:31:38.520219 master-1 kubenswrapper[4771]: I1011 10:31:38.520201 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:31:38.529447 master-1 kubenswrapper[4771]: I1011 10:31:38.529419 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:31:38.531789 master-1 kubenswrapper[4771]: I1011 10:31:38.531768 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:31:38.545083 master-1 kubenswrapper[4771]: W1011 10:31:38.545048 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda61df698d34d049669621b2249bfe758.slice/crio-2d313c6886c969ebde302c0dbeabb72642e57f8da8af3701d11d8b20cfc8e2f6 WatchSource:0}: Error finding container 2d313c6886c969ebde302c0dbeabb72642e57f8da8af3701d11d8b20cfc8e2f6: Status 404 returned error can't find the container with id 2d313c6886c969ebde302c0dbeabb72642e57f8da8af3701d11d8b20cfc8e2f6 Oct 11 10:31:39.444841 master-1 kubenswrapper[4771]: I1011 10:31:39.444681 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577"} Oct 11 10:31:39.444841 master-1 kubenswrapper[4771]: I1011 10:31:39.444745 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"2d313c6886c969ebde302c0dbeabb72642e57f8da8af3701d11d8b20cfc8e2f6"} Oct 11 10:31:39.447123 master-1 kubenswrapper[4771]: I1011 10:31:39.447060 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296" exitCode=0 Oct 11 10:31:39.447450 master-1 kubenswrapper[4771]: I1011 10:31:39.447400 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" 
event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerDied","Data":"e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296"} Oct 11 10:31:39.447545 master-1 kubenswrapper[4771]: I1011 10:31:39.447454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"8e5c43de2cc367ec1c4a7e349d6c7a564b4d6502ecb1a203dea90aba296e3027"} Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: I1011 10:31:39.521444 4771 patch_prober.go:28] interesting pod/apiserver-555f658fd6-n5n6g container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:39.522123 master-1 
kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:31:39.522123 master-1 kubenswrapper[4771]: I1011 10:31:39.521524 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:39.781374 master-1 kubenswrapper[4771]: I1011 10:31:39.781302 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 11 10:31:39.802578 master-1 kubenswrapper[4771]: I1011 10:31:39.802494 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir\") pod \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " Oct 11 10:31:39.802772 master-1 kubenswrapper[4771]: I1011 10:31:39.802585 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock\") pod \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " Oct 11 10:31:39.802772 master-1 kubenswrapper[4771]: I1011 10:31:39.802657 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access\") pod \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\" (UID: \"f5e7e1ec-47a8-4283-9119-0d9d1343963e\") " Oct 11 10:31:39.803513 master-1 kubenswrapper[4771]: I1011 10:31:39.803466 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5e7e1ec-47a8-4283-9119-0d9d1343963e" (UID: "f5e7e1ec-47a8-4283-9119-0d9d1343963e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:39.803870 master-1 kubenswrapper[4771]: I1011 10:31:39.803823 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock" (OuterVolumeSpecName: "var-lock") pod "f5e7e1ec-47a8-4283-9119-0d9d1343963e" (UID: "f5e7e1ec-47a8-4283-9119-0d9d1343963e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:39.806799 master-1 kubenswrapper[4771]: I1011 10:31:39.806718 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5e7e1ec-47a8-4283-9119-0d9d1343963e" (UID: "f5e7e1ec-47a8-4283-9119-0d9d1343963e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:39.904611 master-1 kubenswrapper[4771]: I1011 10:31:39.904542 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:39.904785 master-1 kubenswrapper[4771]: I1011 10:31:39.904606 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5e7e1ec-47a8-4283-9119-0d9d1343963e-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:39.904785 master-1 kubenswrapper[4771]: I1011 10:31:39.904692 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5e7e1ec-47a8-4283-9119-0d9d1343963e-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:40.469957 master-1 kubenswrapper[4771]: I1011 10:31:40.468491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"f5e7e1ec-47a8-4283-9119-0d9d1343963e","Type":"ContainerDied","Data":"59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc"} Oct 11 10:31:40.469957 master-1 kubenswrapper[4771]: I1011 10:31:40.468536 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e5d21ded756cda1a7334faedc3327ece5481088be9442f57418e8a270fcabc" Oct 11 10:31:40.469957 master-1 kubenswrapper[4771]: I1011 10:31:40.468602 4771 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 11 10:31:40.477098 master-1 kubenswrapper[4771]: I1011 10:31:40.476969 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10"} Oct 11 10:31:40.477098 master-1 kubenswrapper[4771]: I1011 10:31:40.477034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1"} Oct 11 10:31:40.477098 master-1 kubenswrapper[4771]: I1011 10:31:40.477048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea"} Oct 11 10:31:41.056024 master-1 kubenswrapper[4771]: I1011 10:31:41.055954 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:41.056024 master-1 kubenswrapper[4771]: I1011 10:31:41.056031 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:41.073229 master-1 kubenswrapper[4771]: I1011 10:31:41.073171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:41.497211 master-1 kubenswrapper[4771]: I1011 10:31:41.497088 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" 
event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd"} Oct 11 10:31:41.497211 master-1 kubenswrapper[4771]: I1011 10:31:41.497144 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7"} Oct 11 10:31:41.501922 master-1 kubenswrapper[4771]: I1011 10:31:41.501885 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:31:41.520227 master-1 kubenswrapper[4771]: I1011 10:31:41.520135 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=3.520113094 podStartE2EDuration="3.520113094s" podCreationTimestamp="2025-10-11 10:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:41.518705972 +0000 UTC m=+333.492932433" watchObservedRunningTime="2025-10-11 10:31:41.520113094 +0000 UTC m=+333.494339545" Oct 11 10:31:41.589211 master-2 kubenswrapper[4776]: I1011 10:31:41.588890 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:31:41.589867 master-2 kubenswrapper[4776]: I1011 10:31:41.589182 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" containerID="cri-o://c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" gracePeriod=120 Oct 11 10:31:41.607854 master-1 kubenswrapper[4771]: I1011 10:31:41.607785 4771 patch_prober.go:28] interesting 
pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:41.608083 master-1 kubenswrapper[4771]: I1011 10:31:41.607879 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: I1011 10:31:42.372590 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater 
ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:42.373218 master-2 kubenswrapper[4776]: I1011 10:31:42.372656 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:42.507332 master-1 kubenswrapper[4771]: I1011 10:31:42.507100 4771 generic.go:334] "Generic (PLEG): container finished" podID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerID="5ee744232b5a66fa90e18d0677b90fd7ff50cae1f9e1afc9158b036b712f32da" exitCode=0 Oct 11 10:31:42.508879 master-1 kubenswrapper[4771]: I1011 10:31:42.508754 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerDied","Data":"5ee744232b5a66fa90e18d0677b90fd7ff50cae1f9e1afc9158b036b712f32da"} Oct 11 10:31:42.518148 master-1 kubenswrapper[4771]: I1011 10:31:42.512281 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:42.630889 master-1 kubenswrapper[4771]: I1011 10:31:42.630788 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:31:42.740043 master-1 kubenswrapper[4771]: I1011 10:31:42.739969 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740043 master-1 kubenswrapper[4771]: I1011 10:31:42.740048 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740087 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740133 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740173 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740206 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4s269\" (UniqueName: \"kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740289 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740386 master-1 kubenswrapper[4771]: I1011 10:31:42.740331 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740834 master-1 kubenswrapper[4771]: I1011 10:31:42.740398 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets\") pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.740834 master-1 kubenswrapper[4771]: I1011 10:31:42.740435 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client\") 
pod \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\" (UID: \"027736d1-f3d3-490e-9ee1-d08bad7a25b7\") " Oct 11 10:31:42.741233 master-1 kubenswrapper[4771]: I1011 10:31:42.741076 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config" (OuterVolumeSpecName: "config") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:31:42.741233 master-1 kubenswrapper[4771]: I1011 10:31:42.741145 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:42.741233 master-1 kubenswrapper[4771]: I1011 10:31:42.741127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:42.741497 master-1 kubenswrapper[4771]: I1011 10:31:42.741233 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:31:42.741497 master-1 kubenswrapper[4771]: I1011 10:31:42.741473 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.741630 master-1 kubenswrapper[4771]: I1011 10:31:42.741503 4771 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/027736d1-f3d3-490e-9ee1-d08bad7a25b7-node-pullsecrets\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.741630 master-1 kubenswrapper[4771]: I1011 10:31:42.741578 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.741630 master-1 kubenswrapper[4771]: I1011 10:31:42.741598 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.741851 master-1 kubenswrapper[4771]: I1011 10:31:42.741635 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit" (OuterVolumeSpecName: "audit") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:31:42.742595 master-1 kubenswrapper[4771]: I1011 10:31:42.742266 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:31:42.742595 master-1 kubenswrapper[4771]: I1011 10:31:42.742510 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:31:42.744299 master-1 kubenswrapper[4771]: I1011 10:31:42.744237 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:31:42.744492 master-1 kubenswrapper[4771]: I1011 10:31:42.744432 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:31:42.744745 master-1 kubenswrapper[4771]: I1011 10:31:42.744697 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:31:42.745605 master-1 kubenswrapper[4771]: I1011 10:31:42.745502 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269" (OuterVolumeSpecName: "kube-api-access-4s269") pod "027736d1-f3d3-490e-9ee1-d08bad7a25b7" (UID: "027736d1-f3d3-490e-9ee1-d08bad7a25b7"). InnerVolumeSpecName "kube-api-access-4s269". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842339 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842414 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842423 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842432 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-audit\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842444 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s269\" (UniqueName: \"kubernetes.io/projected/027736d1-f3d3-490e-9ee1-d08bad7a25b7-kube-api-access-4s269\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 
10:31:42.842456 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/027736d1-f3d3-490e-9ee1-d08bad7a25b7-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.842469 master-1 kubenswrapper[4771]: I1011 10:31:42.842469 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/027736d1-f3d3-490e-9ee1-d08bad7a25b7-image-import-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:42.911296 master-1 kubenswrapper[4771]: I1011 10:31:42.911222 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: E1011 10:31:42.911481 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e7e1ec-47a8-4283-9119-0d9d1343963e" containerName="installer" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: I1011 10:31:42.911501 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e7e1ec-47a8-4283-9119-0d9d1343963e" containerName="installer" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: E1011 10:31:42.911519 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: I1011 10:31:42.911530 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: E1011 10:31:42.911553 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="fix-audit-permissions" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: I1011 10:31:42.911561 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="fix-audit-permissions" Oct 11 10:31:42.911590 
master-1 kubenswrapper[4771]: E1011 10:31:42.911572 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver-check-endpoints" Oct 11 10:31:42.911590 master-1 kubenswrapper[4771]: I1011 10:31:42.911580 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver-check-endpoints" Oct 11 10:31:42.912025 master-1 kubenswrapper[4771]: I1011 10:31:42.911684 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver" Oct 11 10:31:42.912025 master-1 kubenswrapper[4771]: I1011 10:31:42.911700 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" containerName="openshift-apiserver-check-endpoints" Oct 11 10:31:42.912025 master-1 kubenswrapper[4771]: I1011 10:31:42.911720 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e7e1ec-47a8-4283-9119-0d9d1343963e" containerName="installer" Oct 11 10:31:42.912306 master-1 kubenswrapper[4771]: I1011 10:31:42.912253 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:42.915551 master-1 kubenswrapper[4771]: I1011 10:31:42.915506 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 11 10:31:42.915680 master-1 kubenswrapper[4771]: I1011 10:31:42.915550 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:31:42.919525 master-1 kubenswrapper[4771]: I1011 10:31:42.919470 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 11 10:31:42.944098 master-1 kubenswrapper[4771]: I1011 10:31:42.943636 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxxx\" (UniqueName: \"kubernetes.io/projected/86b914fa-4ccd-42fb-965a-a1bc19442489-kube-api-access-pkxxx\") pod \"kube-apiserver-guard-master-1\" (UID: \"86b914fa-4ccd-42fb-965a-a1bc19442489\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:43.044809 master-1 kubenswrapper[4771]: I1011 10:31:43.044699 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxxx\" (UniqueName: \"kubernetes.io/projected/86b914fa-4ccd-42fb-965a-a1bc19442489-kube-api-access-pkxxx\") pod \"kube-apiserver-guard-master-1\" (UID: \"86b914fa-4ccd-42fb-965a-a1bc19442489\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:43.065094 master-1 kubenswrapper[4771]: I1011 10:31:43.065011 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkxxx\" (UniqueName: \"kubernetes.io/projected/86b914fa-4ccd-42fb-965a-a1bc19442489-kube-api-access-pkxxx\") pod \"kube-apiserver-guard-master-1\" (UID: \"86b914fa-4ccd-42fb-965a-a1bc19442489\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:43.237185 
master-1 kubenswrapper[4771]: I1011 10:31:43.236936 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:43.262882 master-1 kubenswrapper[4771]: I1011 10:31:43.262805 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:31:43.264681 master-1 kubenswrapper[4771]: I1011 10:31:43.264628 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.347748 master-1 kubenswrapper[4771]: I1011 10:31:43.347018 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:31:43.348905 master-1 kubenswrapper[4771]: I1011 10:31:43.348867 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.349138 master-1 kubenswrapper[4771]: I1011 10:31:43.349111 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.395599 master-1 kubenswrapper[4771]: I1011 10:31:43.395533 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:43.395801 master-1 kubenswrapper[4771]: I1011 10:31:43.395627 4771 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:43.405060 master-1 kubenswrapper[4771]: I1011 10:31:43.404997 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:43.451156 master-1 kubenswrapper[4771]: I1011 10:31:43.451068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.451388 master-1 kubenswrapper[4771]: I1011 10:31:43.451226 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.451388 master-1 kubenswrapper[4771]: I1011 10:31:43.451238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.452028 master-1 kubenswrapper[4771]: I1011 10:31:43.451962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.515505 master-1 
kubenswrapper[4771]: I1011 10:31:43.515286 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" event={"ID":"027736d1-f3d3-490e-9ee1-d08bad7a25b7","Type":"ContainerDied","Data":"0245a7fd6940eab125c14495c22d9aa4a273c8034b951fafcde945d3497b7a29"} Oct 11 10:31:43.515505 master-1 kubenswrapper[4771]: I1011 10:31:43.515428 4771 scope.go:117] "RemoveContainer" containerID="893d86a98f61447fa7f11deae879fe95aeccf34e5a1d5e59961a43c4a181ec43" Oct 11 10:31:43.515505 master-1 kubenswrapper[4771]: I1011 10:31:43.515310 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-n5n6g" Oct 11 10:31:43.517843 master-1 kubenswrapper[4771]: I1011 10:31:43.517787 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2d8f859-38d1-4916-8262-ff865eb9982c" containerID="0904aae89e47c25a2e93dd629d94914a7beb5e409d6b4e15ac6ddcfa1b57aa4d" exitCode=0 Oct 11 10:31:43.517936 master-1 kubenswrapper[4771]: I1011 10:31:43.517869 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"e2d8f859-38d1-4916-8262-ff865eb9982c","Type":"ContainerDied","Data":"0904aae89e47c25a2e93dd629d94914a7beb5e409d6b4e15ac6ddcfa1b57aa4d"} Oct 11 10:31:43.523266 master-1 kubenswrapper[4771]: I1011 10:31:43.523207 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:43.550058 master-1 kubenswrapper[4771]: I1011 10:31:43.549999 4771 scope.go:117] "RemoveContainer" containerID="5ee744232b5a66fa90e18d0677b90fd7ff50cae1f9e1afc9158b036b712f32da" Oct 11 10:31:43.570344 master-1 kubenswrapper[4771]: I1011 10:31:43.570279 4771 scope.go:117] "RemoveContainer" containerID="41af63c058a1e7b90357082e0adac794e0e1b2996f71cfa6b9c3a91b7079c8d7" Oct 11 10:31:43.596646 master-1 kubenswrapper[4771]: I1011 10:31:43.596581 4771 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:31:43.602821 master-1 kubenswrapper[4771]: I1011 10:31:43.602772 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-n5n6g"] Oct 11 10:31:43.645212 master-1 kubenswrapper[4771]: I1011 10:31:43.645130 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:43.662133 master-1 kubenswrapper[4771]: W1011 10:31:43.662069 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6dd9eb5bc384e5fbc388e7a2f95c28.slice/crio-c4fbba2242ad7533417c9230aa1cf12834880759f22bbf109485cc2aad57f7e7 WatchSource:0}: Error finding container c4fbba2242ad7533417c9230aa1cf12834880759f22bbf109485cc2aad57f7e7: Status 404 returned error can't find the container with id c4fbba2242ad7533417c9230aa1cf12834880759f22bbf109485cc2aad57f7e7 Oct 11 10:31:43.703409 master-1 kubenswrapper[4771]: I1011 10:31:43.703345 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 11 10:31:43.713411 master-1 kubenswrapper[4771]: W1011 10:31:43.713371 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b914fa_4ccd_42fb_965a_a1bc19442489.slice/crio-8732f5bab9a381941e5c31ffdd4f43b464797523fb9db751b3a1d2f56fdfdd7f WatchSource:0}: Error finding container 8732f5bab9a381941e5c31ffdd4f43b464797523fb9db751b3a1d2f56fdfdd7f: Status 404 returned error can't find the container with id 8732f5bab9a381941e5c31ffdd4f43b464797523fb9db751b3a1d2f56fdfdd7f Oct 11 10:31:43.940266 master-1 kubenswrapper[4771]: I1011 10:31:43.940175 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:31:44.446270 master-1 
kubenswrapper[4771]: I1011 10:31:44.446200 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027736d1-f3d3-490e-9ee1-d08bad7a25b7" path="/var/lib/kubelet/pods/027736d1-f3d3-490e-9ee1-d08bad7a25b7/volumes" Oct 11 10:31:44.523660 master-1 kubenswrapper[4771]: I1011 10:31:44.523614 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" event={"ID":"86b914fa-4ccd-42fb-965a-a1bc19442489","Type":"ContainerStarted","Data":"16917522dd8d6c564900b32768804f2aa15698ba6c7c0d38122dff408276f3fa"} Oct 11 10:31:44.524282 master-1 kubenswrapper[4771]: I1011 10:31:44.524257 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:44.524421 master-1 kubenswrapper[4771]: I1011 10:31:44.524402 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" event={"ID":"86b914fa-4ccd-42fb-965a-a1bc19442489","Type":"ContainerStarted","Data":"8732f5bab9a381941e5c31ffdd4f43b464797523fb9db751b3a1d2f56fdfdd7f"} Oct 11 10:31:44.525758 master-1 kubenswrapper[4771]: I1011 10:31:44.525703 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0c6dd9eb5bc384e5fbc388e7a2f95c28","Type":"ContainerStarted","Data":"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db"} Oct 11 10:31:44.525853 master-1 kubenswrapper[4771]: I1011 10:31:44.525782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0c6dd9eb5bc384e5fbc388e7a2f95c28","Type":"ContainerStarted","Data":"c4fbba2242ad7533417c9230aa1cf12834880759f22bbf109485cc2aad57f7e7"} Oct 11 10:31:44.530382 master-1 kubenswrapper[4771]: I1011 10:31:44.530343 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:31:44.542916 master-1 kubenswrapper[4771]: I1011 10:31:44.542871 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podStartSLOduration=2.542857399 podStartE2EDuration="2.542857399s" podCreationTimestamp="2025-10-11 10:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:44.541142447 +0000 UTC m=+336.515368898" watchObservedRunningTime="2025-10-11 10:31:44.542857399 +0000 UTC m=+336.517083850" Oct 11 10:31:44.878338 master-1 kubenswrapper[4771]: I1011 10:31:44.878285 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 11 10:31:44.971438 master-1 kubenswrapper[4771]: I1011 10:31:44.971335 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir\") pod \"e2d8f859-38d1-4916-8262-ff865eb9982c\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " Oct 11 10:31:44.971637 master-1 kubenswrapper[4771]: I1011 10:31:44.971443 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access\") pod \"e2d8f859-38d1-4916-8262-ff865eb9982c\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " Oct 11 10:31:44.971637 master-1 kubenswrapper[4771]: I1011 10:31:44.971477 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock\") pod \"e2d8f859-38d1-4916-8262-ff865eb9982c\" (UID: \"e2d8f859-38d1-4916-8262-ff865eb9982c\") " Oct 11 10:31:44.971637 master-1 
kubenswrapper[4771]: I1011 10:31:44.971462 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2d8f859-38d1-4916-8262-ff865eb9982c" (UID: "e2d8f859-38d1-4916-8262-ff865eb9982c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:44.971732 master-1 kubenswrapper[4771]: I1011 10:31:44.971674 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:44.971767 master-1 kubenswrapper[4771]: I1011 10:31:44.971733 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2d8f859-38d1-4916-8262-ff865eb9982c" (UID: "e2d8f859-38d1-4916-8262-ff865eb9982c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:44.976477 master-1 kubenswrapper[4771]: I1011 10:31:44.974522 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2d8f859-38d1-4916-8262-ff865eb9982c" (UID: "e2d8f859-38d1-4916-8262-ff865eb9982c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:45.072286 master-1 kubenswrapper[4771]: I1011 10:31:45.072226 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2d8f859-38d1-4916-8262-ff865eb9982c-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:45.072286 master-1 kubenswrapper[4771]: I1011 10:31:45.072255 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2d8f859-38d1-4916-8262-ff865eb9982c-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:31:45.533166 master-1 kubenswrapper[4771]: I1011 10:31:45.533105 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"e2d8f859-38d1-4916-8262-ff865eb9982c","Type":"ContainerDied","Data":"8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92"} Oct 11 10:31:45.533166 master-1 kubenswrapper[4771]: I1011 10:31:45.533172 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6f9bd6405227d24a21db358afae4da6b7dc87d9e61c80b59ee89918fb69c92" Oct 11 10:31:45.533751 master-1 kubenswrapper[4771]: I1011 10:31:45.533331 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 11 10:31:45.736097 master-1 kubenswrapper[4771]: I1011 10:31:45.736009 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"] Oct 11 10:31:45.736445 master-1 kubenswrapper[4771]: E1011 10:31:45.736347 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d8f859-38d1-4916-8262-ff865eb9982c" containerName="installer" Oct 11 10:31:45.736445 master-1 kubenswrapper[4771]: I1011 10:31:45.736415 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d8f859-38d1-4916-8262-ff865eb9982c" containerName="installer" Oct 11 10:31:45.736627 master-1 kubenswrapper[4771]: I1011 10:31:45.736600 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d8f859-38d1-4916-8262-ff865eb9982c" containerName="installer" Oct 11 10:31:45.737966 master-1 kubenswrapper[4771]: I1011 10:31:45.737909 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.741275 master-1 kubenswrapper[4771]: I1011 10:31:45.741220 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:31:45.741275 master-1 kubenswrapper[4771]: I1011 10:31:45.741234 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:31:45.741563 master-1 kubenswrapper[4771]: I1011 10:31:45.741388 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:31:45.741563 master-1 kubenswrapper[4771]: I1011 10:31:45.741220 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:31:45.742273 master-1 kubenswrapper[4771]: I1011 10:31:45.742232 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:31:45.742433 master-1 kubenswrapper[4771]: I1011 10:31:45.742296 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:31:45.742433 master-1 kubenswrapper[4771]: I1011 10:31:45.742415 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:31:45.743227 master-1 kubenswrapper[4771]: I1011 10:31:45.743164 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:31:45.743403 master-1 kubenswrapper[4771]: I1011 10:31:45.743318 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:31:45.749098 master-1 kubenswrapper[4771]: I1011 10:31:45.749050 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"] Oct 11 10:31:45.752643 master-1 kubenswrapper[4771]: I1011 10:31:45.752596 4771 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:31:45.780383 master-1 kubenswrapper[4771]: I1011 10:31:45.780093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780515 master-1 kubenswrapper[4771]: I1011 10:31:45.780411 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780515 master-1 kubenswrapper[4771]: I1011 10:31:45.780501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780681 master-1 kubenswrapper[4771]: I1011 10:31:45.780606 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdjh\" (UniqueName: \"kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780761 master-1 kubenswrapper[4771]: I1011 10:31:45.780693 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780828 master-1 kubenswrapper[4771]: I1011 10:31:45.780785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.780892 master-1 kubenswrapper[4771]: I1011 10:31:45.780839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.781146 master-1 kubenswrapper[4771]: I1011 10:31:45.780866 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.781228 master-1 kubenswrapper[4771]: I1011 10:31:45.781210 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.781294 master-1 kubenswrapper[4771]: I1011 10:31:45.781255 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.781384 master-1 kubenswrapper[4771]: I1011 10:31:45.781370 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.857919 master-2 kubenswrapper[4776]: E1011 10:31:45.857832 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Oct 11 10:31:45.860377 master-2 kubenswrapper[4776]: I1011 10:31:45.860328 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:31:45.860561 master-2 kubenswrapper[4776]: E1011 10:31:45.860530 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.860561 master-2 kubenswrapper[4776]: I1011 10:31:45.860551 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.860657 master-2 kubenswrapper[4776]: I1011 10:31:45.860645 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.862197 master-2 kubenswrapper[4776]: I1011 10:31:45.862164 4776 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.882810 master-1 kubenswrapper[4771]: I1011 10:31:45.882687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.882810 master-1 kubenswrapper[4771]: I1011 10:31:45.882795 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.882870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.882933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.882975 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit\") pod \"apiserver-777cc846dc-qpmws\" 
(UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883087 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdjh\" (UniqueName: \"kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883121 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883206 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets\") pod 
\"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883453 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.883872 master-1 kubenswrapper[4771]: I1011 10:31:45.883528 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.885655 master-1 kubenswrapper[4771]: I1011 10:31:45.884646 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.885655 master-1 kubenswrapper[4771]: I1011 10:31:45.884833 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle\") pod 
\"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.885655 master-1 kubenswrapper[4771]: I1011 10:31:45.885424 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.885655 master-1 kubenswrapper[4771]: I1011 10:31:45.885478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.885956 master-1 kubenswrapper[4771]: I1011 10:31:45.885677 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.887125 master-1 kubenswrapper[4771]: I1011 10:31:45.887066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.887471 master-1 kubenswrapper[4771]: I1011 10:31:45.887425 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config\") pod 
\"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.888898 master-1 kubenswrapper[4771]: I1011 10:31:45.888846 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.898518 master-2 kubenswrapper[4776]: I1011 10:31:45.898438 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:31:45.912670 master-1 kubenswrapper[4771]: I1011 10:31:45.912594 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdjh\" (UniqueName: \"kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh\") pod \"apiserver-777cc846dc-qpmws\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:45.999322 master-2 kubenswrapper[4776]: I1011 10:31:45.999265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999322 master-2 kubenswrapper[4776]: I1011 10:31:45.999317 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999346 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999409 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999505 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.057994 master-1 kubenswrapper[4771]: I1011 10:31:46.057895 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:46.079805 master-2 kubenswrapper[4776]: I1011 10:31:46.079741 4776 generic.go:334] "Generic (PLEG): container finished" podID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerID="5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21" exitCode=0 Oct 11 10:31:46.079805 master-2 kubenswrapper[4776]: I1011 10:31:46.079791 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerDied","Data":"5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21"} Oct 11 10:31:46.100196 master-2 kubenswrapper[4776]: I1011 10:31:46.100133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100239 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100274 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100343 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100371 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100443 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100484 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100514 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: 
I1011 10:31:46.100545 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100667 master-2 kubenswrapper[4776]: I1011 10:31:46.100576 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100667 master-2 kubenswrapper[4776]: I1011 10:31:46.100604 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.194923 master-2 kubenswrapper[4776]: I1011 10:31:46.194859 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.210254 master-2 kubenswrapper[4776]: W1011 10:31:46.210188 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc492168afa20f49cb6e3534e1871011b.slice/crio-0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92 WatchSource:0}: Error finding container 0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92: Status 404 returned error can't find the container with id 0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92 Oct 11 10:31:46.211443 master-2 kubenswrapper[4776]: I1011 10:31:46.211419 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:31:46.544177 master-1 kubenswrapper[4771]: I1011 10:31:46.544097 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"] Oct 11 10:31:46.548665 master-1 kubenswrapper[4771]: I1011 10:31:46.547049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0c6dd9eb5bc384e5fbc388e7a2f95c28","Type":"ContainerStarted","Data":"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5"} Oct 11 10:31:46.548665 master-1 kubenswrapper[4771]: I1011 10:31:46.547104 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0c6dd9eb5bc384e5fbc388e7a2f95c28","Type":"ContainerStarted","Data":"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb"} Oct 11 10:31:46.607178 master-1 kubenswrapper[4771]: I1011 10:31:46.607124 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: 
connection refused" start-of-body= Oct 11 10:31:46.607511 master-1 kubenswrapper[4771]: I1011 10:31:46.607468 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:46.957798 master-1 kubenswrapper[4771]: I1011 10:31:46.957714 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 11 10:31:46.958512 master-1 kubenswrapper[4771]: I1011 10:31:46.958484 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:46.961810 master-1 kubenswrapper[4771]: I1011 10:31:46.961748 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:31:46.962621 master-1 kubenswrapper[4771]: I1011 10:31:46.962556 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 11 10:31:46.979100 master-1 kubenswrapper[4771]: I1011 10:31:46.979051 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 11 10:31:46.999638 master-1 kubenswrapper[4771]: I1011 10:31:46.999570 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szlf6\" (UniqueName: \"kubernetes.io/projected/a706deec-9223-4663-9db5-71147d242c34-kube-api-access-szlf6\") pod \"kube-controller-manager-guard-master-1\" (UID: \"a706deec-9223-4663-9db5-71147d242c34\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:47.086689 
master-2 kubenswrapper[4776]: I1011 10:31:47.086616 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92"} Oct 11 10:31:47.101190 master-1 kubenswrapper[4771]: I1011 10:31:47.101109 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szlf6\" (UniqueName: \"kubernetes.io/projected/a706deec-9223-4663-9db5-71147d242c34-kube-api-access-szlf6\") pod \"kube-controller-manager-guard-master-1\" (UID: \"a706deec-9223-4663-9db5-71147d242c34\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:47.125920 master-1 kubenswrapper[4771]: I1011 10:31:47.125858 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szlf6\" (UniqueName: \"kubernetes.io/projected/a706deec-9223-4663-9db5-71147d242c34-kube-api-access-szlf6\") pod \"kube-controller-manager-guard-master-1\" (UID: \"a706deec-9223-4663-9db5-71147d242c34\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:47.292754 master-1 kubenswrapper[4771]: I1011 10:31:47.292631 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:47.304435 master-2 kubenswrapper[4776]: I1011 10:31:47.304161 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: I1011 10:31:47.370044 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: I1011 10:31:47.370106 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:47.416757 master-2 kubenswrapper[4776]: I1011 10:31:47.416719 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.416949 master-2 kubenswrapper[4776]: I1011 10:31:47.416929 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.417094 master-2 kubenswrapper[4776]: I1011 10:31:47.417075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.417264 master-2 kubenswrapper[4776]: I1011 10:31:47.416861 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:47.417322 master-2 kubenswrapper[4776]: I1011 10:31:47.416998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:47.417637 master-2 kubenswrapper[4776]: I1011 10:31:47.417618 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.417762 master-2 kubenswrapper[4776]: I1011 10:31:47.417747 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.420174 master-2 kubenswrapper[4776]: I1011 10:31:47.420120 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:47.519531 master-2 kubenswrapper[4776]: I1011 10:31:47.519444 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.545099 master-1 kubenswrapper[4771]: I1011 10:31:47.545019 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 11 10:31:47.548838 master-1 kubenswrapper[4771]: W1011 10:31:47.548779 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda706deec_9223_4663_9db5_71147d242c34.slice/crio-2a35a961efd9f1b6620917dd5d953d6d62a0fd18c86e7a1df3a9de3daf228dbf WatchSource:0}: Error finding container 2a35a961efd9f1b6620917dd5d953d6d62a0fd18c86e7a1df3a9de3daf228dbf: Status 404 returned error can't find the container with id 2a35a961efd9f1b6620917dd5d953d6d62a0fd18c86e7a1df3a9de3daf228dbf Oct 11 10:31:47.557904 master-1 kubenswrapper[4771]: I1011 10:31:47.557837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0c6dd9eb5bc384e5fbc388e7a2f95c28","Type":"ContainerStarted","Data":"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1"} Oct 11 10:31:47.560393 master-1 kubenswrapper[4771]: I1011 10:31:47.560300 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerID="ed4ea2c827d3365e80a136d8fc9c70fdea44747628fc9e1b440208d196a14d73" exitCode=0 Oct 11 10:31:47.560526 master-1 kubenswrapper[4771]: I1011 10:31:47.560392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" 
event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerDied","Data":"ed4ea2c827d3365e80a136d8fc9c70fdea44747628fc9e1b440208d196a14d73"} Oct 11 10:31:47.560526 master-1 kubenswrapper[4771]: I1011 10:31:47.560428 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerStarted","Data":"a41a821c8fbcdc8c024fe125a36dfc655949ba099ab1bab4420d6e97047ce118"} Oct 11 10:31:47.586525 master-1 kubenswrapper[4771]: I1011 10:31:47.586440 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=4.586410382 podStartE2EDuration="4.586410382s" podCreationTimestamp="2025-10-11 10:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:47.582592236 +0000 UTC m=+339.556818717" watchObservedRunningTime="2025-10-11 10:31:47.586410382 +0000 UTC m=+339.560636823" Oct 11 10:31:47.711477 master-1 kubenswrapper[4771]: I1011 10:31:47.711326 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 11 10:31:47.781003 master-1 kubenswrapper[4771]: I1011 10:31:47.780954 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:31:47.788707 master-2 kubenswrapper[4776]: I1011 10:31:47.788656 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:31:48.093368 master-2 kubenswrapper[4776]: I1011 10:31:48.093330 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" 
event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerDied","Data":"caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7"} Oct 11 10:31:48.093368 master-2 kubenswrapper[4776]: I1011 10:31:48.093369 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7" Oct 11 10:31:48.093946 master-2 kubenswrapper[4776]: I1011 10:31:48.093385 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:48.569599 master-1 kubenswrapper[4771]: I1011 10:31:48.569523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerStarted","Data":"9b7973318d321c4747b9166204be01b90470f6b7ff6c1031063eb5d24ec05b0e"} Oct 11 10:31:48.569599 master-1 kubenswrapper[4771]: I1011 10:31:48.569595 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerStarted","Data":"5314d6ef2281ac080baefb268e1b24e3959c52d75eecf8bba9e60d0238801c00"} Oct 11 10:31:48.584167 master-1 kubenswrapper[4771]: I1011 10:31:48.584073 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" event={"ID":"a706deec-9223-4663-9db5-71147d242c34","Type":"ContainerStarted","Data":"1c31430bd5d9e081e88aabc1ad810a536394ce113461601cc517a81f452f8976"} Oct 11 10:31:48.584167 master-1 kubenswrapper[4771]: I1011 10:31:48.584167 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" event={"ID":"a706deec-9223-4663-9db5-71147d242c34","Type":"ContainerStarted","Data":"2a35a961efd9f1b6620917dd5d953d6d62a0fd18c86e7a1df3a9de3daf228dbf"} Oct 11 10:31:48.584514 
master-1 kubenswrapper[4771]: I1011 10:31:48.584192 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:48.590146 master-1 kubenswrapper[4771]: I1011 10:31:48.590094 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:31:48.606143 master-1 kubenswrapper[4771]: I1011 10:31:48.606050 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podStartSLOduration=57.60603043 podStartE2EDuration="57.60603043s" podCreationTimestamp="2025-10-11 10:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:48.60304228 +0000 UTC m=+340.577268751" watchObservedRunningTime="2025-10-11 10:31:48.60603043 +0000 UTC m=+340.580256901" Oct 11 10:31:48.621221 master-1 kubenswrapper[4771]: I1011 10:31:48.621149 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podStartSLOduration=2.621136956 podStartE2EDuration="2.621136956s" podCreationTimestamp="2025-10-11 10:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:48.620386713 +0000 UTC m=+340.594613164" watchObservedRunningTime="2025-10-11 10:31:48.621136956 +0000 UTC m=+340.595363417" Oct 11 10:31:49.100947 master-2 kubenswrapper[4776]: I1011 10:31:49.100907 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" exitCode=0 Oct 11 10:31:49.101419 master-2 kubenswrapper[4776]: I1011 10:31:49.100989 4776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} Oct 11 10:31:50.107826 master-2 kubenswrapper[4776]: I1011 10:31:50.107714 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" exitCode=0 Oct 11 10:31:50.107826 master-2 kubenswrapper[4776]: I1011 10:31:50.107801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} Oct 11 10:31:51.058919 master-1 kubenswrapper[4771]: I1011 10:31:51.058812 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:51.058919 master-1 kubenswrapper[4771]: I1011 10:31:51.058912 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:51.071342 master-1 kubenswrapper[4771]: I1011 10:31:51.071273 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:51.121308 master-2 kubenswrapper[4776]: I1011 10:31:51.121180 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" exitCode=0 Oct 11 10:31:51.121308 master-2 kubenswrapper[4776]: I1011 10:31:51.121272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} Oct 11 10:31:51.564308 master-1 
kubenswrapper[4771]: I1011 10:31:51.564233 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 11 10:31:51.608077 master-1 kubenswrapper[4771]: I1011 10:31:51.607989 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:51.608077 master-1 kubenswrapper[4771]: I1011 10:31:51.608068 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:31:51.609824 master-1 kubenswrapper[4771]: I1011 10:31:51.609770 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:31:51.695442 master-2 kubenswrapper[4776]: I1011 10:31:51.695383 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:31:51.695885 master-2 kubenswrapper[4776]: I1011 10:31:51.695841 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" containerID="cri-o://a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7" gracePeriod=120 Oct 11 10:31:51.696135 master-2 kubenswrapper[4776]: I1011 10:31:51.696022 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" 
containerName="openshift-apiserver-check-endpoints" containerID="cri-o://2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652" gracePeriod=120 Oct 11 10:31:52.130140 master-2 kubenswrapper[4776]: I1011 10:31:52.130058 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerID="2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652" exitCode=0 Oct 11 10:31:52.130140 master-2 kubenswrapper[4776]: I1011 10:31:52.130111 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652"} Oct 11 10:31:52.132134 master-2 kubenswrapper[4776]: I1011 10:31:52.132094 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133101 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" exitCode=1 Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133139 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f"} Oct 11 10:31:52.133333 master-2 kubenswrapper[4776]: I1011 10:31:52.133149 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: I1011 10:31:52.372552 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:52.373778 master-2 kubenswrapper[4776]: I1011 10:31:52.373348 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Oct 11 10:31:52.373778 master-2 kubenswrapper[4776]: I1011 10:31:52.373437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: I1011 10:31:52.929345 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: I1011 10:31:52.929440 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:53.140720 master-2 kubenswrapper[4776]: I1011 10:31:53.140687 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:53.142100 master-2 kubenswrapper[4776]: I1011 10:31:53.142071 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} Oct 11 10:31:53.142175 master-2 kubenswrapper[4776]: I1011 10:31:53.142116 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} Oct 11 10:31:53.142552 master-2 kubenswrapper[4776]: I1011 10:31:53.142524 4776 scope.go:117] "RemoveContainer" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" Oct 11 10:31:53.646043 master-1 kubenswrapper[4771]: I1011 10:31:53.645964 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:53.646043 master-1 kubenswrapper[4771]: I1011 10:31:53.646055 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:53.647019 master-1 kubenswrapper[4771]: I1011 10:31:53.646079 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:53.647019 master-1 kubenswrapper[4771]: I1011 10:31:53.646102 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:53.653236 master-1 kubenswrapper[4771]: I1011 10:31:53.653149 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:53.654696 master-1 kubenswrapper[4771]: I1011 10:31:53.654635 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:54.150118 master-2 kubenswrapper[4776]: I1011 10:31:54.150068 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:31:54.154793 master-2 kubenswrapper[4776]: I1011 10:31:54.154768 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:54.156900 master-2 kubenswrapper[4776]: I1011 10:31:54.156847 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" exitCode=1 Oct 11 10:31:54.156900 master-2 kubenswrapper[4776]: I1011 10:31:54.156887 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" 
event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} Oct 11 10:31:54.157033 master-2 kubenswrapper[4776]: I1011 10:31:54.156922 4776 scope.go:117] "RemoveContainer" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" Oct 11 10:31:54.157750 master-2 kubenswrapper[4776]: I1011 10:31:54.157717 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:54.158185 master-2 kubenswrapper[4776]: E1011 10:31:54.158154 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:54.628960 master-1 kubenswrapper[4771]: I1011 10:31:54.628882 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:55.163938 master-2 kubenswrapper[4776]: I1011 10:31:55.163887 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:31:55.168391 master-2 kubenswrapper[4776]: I1011 10:31:55.168370 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:55.168799 master-2 kubenswrapper[4776]: E1011 10:31:55.168777 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: 
I1011 10:31:56.195928 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195974 4776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195985 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195995 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.196910 master-2 kubenswrapper[4776]: I1011 10:31:56.196588 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:56.196910 master-2 kubenswrapper[4776]: E1011 10:31:56.196875 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:56.607910 master-1 kubenswrapper[4771]: I1011 10:31:56.607602 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:31:56.609130 master-1 kubenswrapper[4771]: I1011 10:31:56.607912 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 
192.168.34.11:10259: connect: connection refused" Oct 11 10:31:56.642395 master-1 kubenswrapper[4771]: I1011 10:31:56.642259 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: I1011 10:31:57.371543 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: I1011 10:31:57.371618 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" 
podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:57.707236 master-2 kubenswrapper[4776]: I1011 10:31:57.707168 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: E1011 10:31:57.707403 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.707418 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.707535 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.708005 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.711406 master-2 kubenswrapper[4776]: I1011 10:31:57.711370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 11 10:31:57.712878 master-2 kubenswrapper[4776]: I1011 10:31:57.712847 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:31:57.725821 master-2 kubenswrapper[4776]: I1011 10:31:57.721830 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:57.758305 master-2 kubenswrapper[4776]: I1011 10:31:57.758236 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.859453 master-2 kubenswrapper[4776]: I1011 10:31:57.859344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.878230 master-2 kubenswrapper[4776]: I1011 10:31:57.878159 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: I1011 10:31:57.930523 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver 
namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: I1011 10:31:57.930639 4776 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:58.030505 master-2 kubenswrapper[4776]: I1011 10:31:58.030331 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:58.403523 master-1 kubenswrapper[4771]: I1011 10:31:58.403453 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:31:58.525891 master-2 kubenswrapper[4776]: I1011 10:31:58.525750 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:59.207260 master-2 kubenswrapper[4776]: I1011 10:31:59.207202 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-2" event={"ID":"9314095b-1661-46bd-8e19-2741d9d758fa","Type":"ContainerStarted","Data":"dd92959466cbdad70a80055ac1e16987cd678122f01b686d6b49af348560fd6b"} Oct 11 10:31:59.207260 master-2 kubenswrapper[4776]: I1011 10:31:59.207248 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-2" event={"ID":"9314095b-1661-46bd-8e19-2741d9d758fa","Type":"ContainerStarted","Data":"5bf637cb5c71d3038537bacbd0d464910a2872b51f4584c72a5dd453860e55c5"} Oct 11 10:31:59.207589 master-2 kubenswrapper[4776]: I1011 10:31:59.207514 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:59.231469 master-2 kubenswrapper[4776]: I1011 10:31:59.228892 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-guard-master-2" podStartSLOduration=2.2288674 podStartE2EDuration="2.2288674s" podCreationTimestamp="2025-10-11 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:59.225521989 +0000 UTC m=+354.009948728" watchObservedRunningTime="2025-10-11 10:31:59.2288674 +0000 UTC m=+354.013294149" Oct 11 10:32:01.607940 master-1 kubenswrapper[4771]: I1011 10:32:01.607834 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:32:01.607940 master-1 kubenswrapper[4771]: I1011 10:32:01.607919 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: I1011 10:32:02.371479 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: 
[+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:02.372350 master-2 kubenswrapper[4776]: I1011 10:32:02.371548 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: I1011 10:32:02.930451 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: I1011 10:32:02.930527 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:02.932075 master-2 kubenswrapper[4776]: I1011 10:32:02.930656 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:32:03.560981 master-2 kubenswrapper[4776]: I1011 10:32:03.560933 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:03.562584 master-2 kubenswrapper[4776]: I1011 10:32:03.562562 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.567057 master-2 kubenswrapper[4776]: I1011 10:32:03.567008 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:32:03.573486 master-2 kubenswrapper[4776]: I1011 10:32:03.573443 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664207 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664304 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.765965 master-2 kubenswrapper[4776]: I1011 10:32:03.765911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766342 master-2 kubenswrapper[4776]: I1011 10:32:03.766310 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766527 master-2 kubenswrapper[4776]: I1011 10:32:03.766501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766734 master-2 kubenswrapper[4776]: I1011 10:32:03.766637 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766903 master-2 kubenswrapper[4776]: I1011 10:32:03.766068 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.787461 master-2 kubenswrapper[4776]: I1011 10:32:03.787386 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.887441 master-2 kubenswrapper[4776]: I1011 10:32:03.887261 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:04.208582 master-2 kubenswrapper[4776]: I1011 10:32:04.208480 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:32:04.208850 master-2 kubenswrapper[4776]: I1011 10:32:04.208621 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:32:04.339947 master-2 kubenswrapper[4776]: I1011 10:32:04.339864 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:04.347182 master-2 kubenswrapper[4776]: W1011 10:32:04.347100 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod860e97e2_10a4_4a16_ac4e_4a0fc7490200.slice/crio-2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb WatchSource:0}: Error finding container 2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb: Status 404 returned error can't find the container with id 2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb Oct 11 10:32:04.710386 master-2 kubenswrapper[4776]: I1011 10:32:04.710253 4776 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:32:05.243201 master-2 kubenswrapper[4776]: I1011 10:32:05.243148 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerStarted","Data":"26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3"} Oct 11 10:32:05.243201 master-2 kubenswrapper[4776]: I1011 10:32:05.243202 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerStarted","Data":"2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb"} Oct 11 10:32:05.270284 master-2 kubenswrapper[4776]: I1011 10:32:05.270095 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-2" podStartSLOduration=2.2700683010000002 podStartE2EDuration="2.270068301s" podCreationTimestamp="2025-10-11 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:05.268605981 +0000 UTC m=+360.053032680" watchObservedRunningTime="2025-10-11 10:32:05.270068301 +0000 UTC m=+360.054495040" Oct 11 10:32:06.607598 master-1 kubenswrapper[4771]: I1011 10:32:06.607538 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:32:06.608168 master-1 kubenswrapper[4771]: I1011 10:32:06.607641 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" 
probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: I1011 10:32:07.374191 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:07.376228 master-2 kubenswrapper[4776]: I1011 10:32:07.374360 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 
10:32:07.933360 master-2 kubenswrapper[4776]: I1011 10:32:07.933204 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [-]shutdown failed: reason 
withheld Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:07.934972 master-2 kubenswrapper[4776]: I1011 10:32:07.933357 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:08.059532 master-2 kubenswrapper[4776]: I1011 10:32:08.059420 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:32:08.723791 master-1 kubenswrapper[4771]: I1011 10:32:08.723689 4771 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577" exitCode=0 Oct 11 10:32:08.723791 master-1 kubenswrapper[4771]: I1011 10:32:08.723766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerDied","Data":"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577"} Oct 11 10:32:09.209762 master-2 kubenswrapper[4776]: I1011 10:32:09.209667 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:32:09.210442 master-2 kubenswrapper[4776]: I1011 10:32:09.209780 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:32:09.274411 master-2 
kubenswrapper[4776]: I1011 10:32:09.274260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:32:09.277393 master-2 kubenswrapper[4776]: I1011 10:32:09.277353 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} Oct 11 10:32:09.324075 master-2 kubenswrapper[4776]: I1011 10:32:09.323988 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=24.323957941 podStartE2EDuration="24.323957941s" podCreationTimestamp="2025-10-11 10:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:09.318849952 +0000 UTC m=+364.103276671" watchObservedRunningTime="2025-10-11 10:32:09.323957941 +0000 UTC m=+364.108384660" Oct 11 10:32:09.734887 master-1 kubenswrapper[4771]: I1011 10:32:09.734779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316"} Oct 11 10:32:09.734887 master-1 kubenswrapper[4771]: I1011 10:32:09.734854 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5"} Oct 11 10:32:09.734887 master-1 kubenswrapper[4771]: I1011 10:32:09.734875 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" 
event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f"} Oct 11 10:32:09.735991 master-1 kubenswrapper[4771]: I1011 10:32:09.735171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:32:09.756459 master-1 kubenswrapper[4771]: I1011 10:32:09.756371 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=31.756324966 podStartE2EDuration="31.756324966s" podCreationTimestamp="2025-10-11 10:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:09.754721978 +0000 UTC m=+361.728948429" watchObservedRunningTime="2025-10-11 10:32:09.756324966 +0000 UTC m=+361.730551427" Oct 11 10:32:10.629542 master-2 kubenswrapper[4776]: I1011 10:32:10.629462 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:32:11.195495 master-2 kubenswrapper[4776]: I1011 10:32:11.195429 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:32:11.613899 master-1 kubenswrapper[4771]: I1011 10:32:11.613819 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: I1011 10:32:12.373204 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 
11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:12.374498 master-2 kubenswrapper[4776]: I1011 10:32:12.373292 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: I1011 10:32:12.931852 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:12.932630 master-2 kubenswrapper[4776]: I1011 10:32:12.931955 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:16.197232 master-2 kubenswrapper[4776]: I1011 10:32:16.195426 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-master-2" Oct 11 10:32:16.316916 master-2 kubenswrapper[4776]: I1011 10:32:16.316851 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" exitCode=0 Oct 11 10:32:16.317081 master-2 kubenswrapper[4776]: I1011 10:32:16.316901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} Oct 11 10:32:16.782288 master-1 kubenswrapper[4771]: I1011 10:32:16.782191 4771 generic.go:334] "Generic (PLEG): container finished" podID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerID="d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604" exitCode=0 Oct 11 10:32:16.783558 master-1 kubenswrapper[4771]: I1011 10:32:16.782342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerDied","Data":"d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604"} Oct 11 10:32:16.783859 master-1 kubenswrapper[4771]: I1011 10:32:16.783816 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerStarted","Data":"2fd6e0cb14ecdcadbf2571f6d4dd1d2a4a1e6cf999fc333d09b9fc98b284b780"} Oct 11 10:32:17.327592 master-2 kubenswrapper[4776]: I1011 10:32:17.327522 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: I1011 10:32:17.378915 4776 
patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:17.380054 master-2 kubenswrapper[4776]: I1011 10:32:17.380007 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:17.494341 master-1 kubenswrapper[4771]: I1011 10:32:17.494290 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:32:17.497547 
master-1 kubenswrapper[4771]: I1011 10:32:17.497494 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:17.497547 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:17.497547 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:17.497547 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:17.497860 master-1 kubenswrapper[4771]: I1011 10:32:17.497594 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:17.797220 master-1 kubenswrapper[4771]: I1011 10:32:17.797042 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:32:17.808207 master-2 kubenswrapper[4776]: I1011 10:32:17.808155 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: I1011 10:32:17.936939 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:17.938487 master-2 kubenswrapper[4776]: I1011 10:32:17.937874 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:17.967378 master-2 kubenswrapper[4776]: I1011 10:32:17.967310 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:32:17.967378 master-2 kubenswrapper[4776]: I1011 10:32:17.967381 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: I1011 10:32:17.971339 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:17.971652 master-2 kubenswrapper[4776]: I1011 10:32:17.971469 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:18.497230 master-1 kubenswrapper[4771]: I1011 10:32:18.497147 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:18.497230 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:18.497230 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:18.497230 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:18.497657 master-1 kubenswrapper[4771]: I1011 10:32:18.497244 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: I1011 10:32:18.970705 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:18.971913 master-2 kubenswrapper[4776]: I1011 10:32:18.970851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:19.497808 master-1 kubenswrapper[4771]: I1011 10:32:19.497717 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:19.497808 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:19.497808 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:19.497808 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:19.498967 master-1 kubenswrapper[4771]: I1011 10:32:19.497833 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: I1011 10:32:19.969747 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:19.970420 master-2 kubenswrapper[4776]: I1011 10:32:19.969847 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:20.497639 master-1 kubenswrapper[4771]: I1011 10:32:20.497539 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:20.497639 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:20.497639 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:20.497639 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:20.498450 master-1 kubenswrapper[4771]: I1011 10:32:20.497640 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: I1011 10:32:20.970468 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: 
[-]has-synced failed: reason withheld Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:20.971313 master-2 kubenswrapper[4776]: I1011 10:32:20.971277 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:21.497656 master-1 kubenswrapper[4771]: I1011 10:32:21.497562 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:21.497656 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:21.497656 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:21.497656 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:21.498644 master-1 kubenswrapper[4771]: I1011 10:32:21.497660 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: I1011 10:32:21.970063 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:21.970169 master-2 
kubenswrapper[4776]: I1011 10:32:21.970156 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: I1011 10:32:22.377930 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: I1011 10:32:22.378006 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:22.498873 master-1 kubenswrapper[4771]: I1011 10:32:22.498773 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:22.498873 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:22.498873 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:22.498873 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:22.499817 master-1 kubenswrapper[4771]: I1011 10:32:22.498897 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: I1011 10:32:22.932396 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:22.933598 master-2 kubenswrapper[4776]: I1011 10:32:22.932474 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: I1011 10:32:22.969606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:22.969977 master-2 kubenswrapper[4776]: I1011 10:32:22.969704 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:23.497985 master-1 kubenswrapper[4771]: I1011 10:32:23.497927 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:23.497985 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:23.497985 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:23.497985 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:23.498527 master-1 kubenswrapper[4771]: I1011 10:32:23.498018 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: I1011 10:32:23.968907 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:23.969577 master-2 kubenswrapper[4776]: I1011 10:32:23.968984 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:24.497429 master-1 kubenswrapper[4771]: I1011 10:32:24.497327 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:24.497429 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:24.497429 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:24.497429 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:24.498054 master-1 kubenswrapper[4771]: I1011 10:32:24.497464 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: I1011 10:32:24.969727 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:24.970849 master-2 kubenswrapper[4776]: I1011 10:32:24.969789 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:25.494307 master-1 kubenswrapper[4771]: I1011 10:32:25.494253 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-z5t6x"
Oct 11 10:32:25.496941 master-1 kubenswrapper[4771]: I1011 10:32:25.496878 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:25.496941 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:25.496941 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:25.496941 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:25.497401 master-1 kubenswrapper[4771]: I1011 10:32:25.496959 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: I1011 10:32:25.969606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: I1011 10:32:25.969662 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:26.213669 master-2 kubenswrapper[4776]: I1011 10:32:26.213581 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2"
Oct 11 10:32:26.231262 master-2 kubenswrapper[4776]: I1011 10:32:26.231067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-2"
Oct 11 10:32:26.498341 master-1 kubenswrapper[4771]: I1011 10:32:26.498215 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:26.498341 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:26.498341 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:26.498341 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:26.499316 master-1 kubenswrapper[4771]: I1011 10:32:26.498393 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: I1011 10:32:26.970038 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:26.971065 master-2 kubenswrapper[4776]: I1011 10:32:26.970122 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: I1011 10:32:27.374812 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: I1011 10:32:27.374922 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:27.497278 master-1 kubenswrapper[4771]: I1011 10:32:27.497164 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:27.497278 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:27.497278 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:27.497278 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:27.497278 master-1 kubenswrapper[4771]: I1011 10:32:27.497240 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: I1011 10:32:27.929490 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:27.930408 master-2 kubenswrapper[4776]: I1011 10:32:27.929578 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: I1011 10:32:27.969158 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: I1011 10:32:27.969233 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:28.498009 master-1 kubenswrapper[4771]: I1011 10:32:28.497880 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:28.498009 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:28.498009 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:28.498009 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:28.498009 master-1 kubenswrapper[4771]: I1011 10:32:28.497998 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: I1011 10:32:28.969710 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:28.970920 master-2 kubenswrapper[4776]: I1011 10:32:28.970877 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:29.498592 master-1 kubenswrapper[4771]: I1011 10:32:29.498501 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:29.498592 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:29.498592 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:29.498592 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:29.499938 master-1 kubenswrapper[4771]: I1011 10:32:29.498635 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: I1011 10:32:29.969382 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: I1011 10:32:29.969456 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:30.496872 master-1 kubenswrapper[4771]: I1011 10:32:30.496826 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:30.496872 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:30.496872 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:30.496872 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:30.497262 master-1 kubenswrapper[4771]: I1011 10:32:30.497233 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: I1011 10:32:30.969577 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:30.970428 master-2 kubenswrapper[4776]: I1011 10:32:30.969648 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:31.498123 master-1 kubenswrapper[4771]: I1011 10:32:31.498002 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:31.498123 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:31.498123 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:31.498123 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:31.498123 master-1 kubenswrapper[4771]: I1011 10:32:31.498103 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: I1011 10:32:31.969390 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:31.969836 master-2 kubenswrapper[4776]: I1011 10:32:31.969458 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:32.367945 master-2 kubenswrapper[4776]: I1011 10:32:32.367743 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.42:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body=
Oct 11 10:32:32.367945 master-2 kubenswrapper[4776]: I1011 10:32:32.367842 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.42:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.42:8443: connect: connection refused"
Oct 11 10:32:32.497722 master-1 kubenswrapper[4771]: I1011 10:32:32.497517 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:32.497722 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:32.497722 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:32.497722 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:32.497722 master-1 kubenswrapper[4771]: I1011 10:32:32.497617 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: I1011 10:32:32.929292 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:32.930174 master-2 kubenswrapper[4776]: I1011 10:32:32.929367 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: I1011 10:32:32.969402 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: I1011 10:32:32.969464 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:33.100717 master-2 kubenswrapper[4776]: I1011 10:32:33.100145 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"
Oct 11 10:32:33.141318 master-2 kubenswrapper[4776]: I1011 10:32:33.141268 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"]
Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: E1011 10:32:33.141442 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="fix-audit-permissions"
Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: I1011 10:32:33.141454 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="fix-audit-permissions"
Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: E1011 10:32:33.141470 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver"
Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: I1011 10:32:33.141476 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver"
Oct 11 10:32:33.141614 master-2 kubenswrapper[4776]: I1011 10:32:33.141567 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver"
Oct 11 10:32:33.142082 master-2 kubenswrapper[4776]: I1011 10:32:33.142057 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.153325 master-2 kubenswrapper[4776]: I1011 10:32:33.153283 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"]
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164298 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164331 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164386 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164405 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164427 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164424 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164467 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164616 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") "
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164771 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164808 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164827 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165036 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165213 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165221 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165283 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165346 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165359 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165369 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165780 master-2 kubenswrapper[4776]: I1011 10:32:33.165591 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:33.170185 master-2 kubenswrapper[4776]: I1011 10:32:33.170072 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.170875 master-2 kubenswrapper[4776]: I1011 10:32:33.170831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.170976 master-2 kubenswrapper[4776]: I1011 10:32:33.170923 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq" (OuterVolumeSpecName: "kube-api-access-tqlgq") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "kube-api-access-tqlgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:32:33.171118 master-2 kubenswrapper[4776]: I1011 10:32:33.171080 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.266889 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.266956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.267003 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.267029 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267076 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") 
pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267191 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267237 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267253 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267265 
4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267278 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267292 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267844 master-2 kubenswrapper[4776]: I1011 10:32:33.267781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.268358 master-2 kubenswrapper[4776]: I1011 10:32:33.268311 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.268551 master-2 kubenswrapper[4776]: I1011 10:32:33.268519 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 
11 10:32:33.268595 master-2 kubenswrapper[4776]: I1011 10:32:33.268517 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.270699 master-2 kubenswrapper[4776]: I1011 10:32:33.270646 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.270942 master-2 kubenswrapper[4776]: I1011 10:32:33.270913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.271706 master-2 kubenswrapper[4776]: I1011 10:32:33.271650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.299002 master-2 kubenswrapper[4776]: I1011 10:32:33.298929 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " 
pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421487 4776 generic.go:334] "Generic (PLEG): container finished" podID="e350b624-6581-4982-96f3-cd5c37256e02" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" exitCode=0 Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"} Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421559 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421568 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f"} Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421585 4776 scope.go:117] "RemoveContainer" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.445487 master-2 kubenswrapper[4776]: I1011 10:32:33.445408 4776 scope.go:117] "RemoveContainer" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.458927 master-2 kubenswrapper[4776]: I1011 10:32:33.454426 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.470829 master-2 kubenswrapper[4776]: I1011 10:32:33.470737 4776 scope.go:117] "RemoveContainer" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.472099 master-2 kubenswrapper[4776]: I1011 10:32:33.471897 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:32:33.472356 master-2 kubenswrapper[4776]: E1011 10:32:33.472279 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": container with ID starting with c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912 not found: ID does not exist" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.472409 master-2 kubenswrapper[4776]: I1011 10:32:33.472361 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"} err="failed to get container status \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": rpc error: code = NotFound desc = could not find container \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": container with ID starting with c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912 not found: ID does not exist" Oct 11 10:32:33.472409 master-2 kubenswrapper[4776]: I1011 10:32:33.472396 4776 scope.go:117] "RemoveContainer" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.472836 master-2 kubenswrapper[4776]: E1011 10:32:33.472776 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": container with ID starting with e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278 not found: ID does not exist" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.472836 master-2 kubenswrapper[4776]: I1011 10:32:33.472806 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278"} err="failed to get container status \"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": rpc error: code = NotFound desc = could not find container \"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": container with ID starting with e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278 not found: ID does not exist" Oct 11 10:32:33.479892 master-2 kubenswrapper[4776]: I1011 10:32:33.479860 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:32:33.498055 master-1 kubenswrapper[4771]: I1011 10:32:33.497958 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:33.498055 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:33.498055 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:33.498055 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:33.498055 master-1 kubenswrapper[4771]: I1011 10:32:33.498063 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:33.916519 master-2 
kubenswrapper[4776]: I1011 10:32:33.916457 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: I1011 10:32:33.970034 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: I1011 10:32:33.970107 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:34.065552 master-2 kubenswrapper[4776]: I1011 10:32:34.064901 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e350b624-6581-4982-96f3-cd5c37256e02" path="/var/lib/kubelet/pods/e350b624-6581-4982-96f3-cd5c37256e02/volumes" Oct 11 10:32:34.429110 master-2 kubenswrapper[4776]: I1011 10:32:34.429066 4776 generic.go:334] "Generic (PLEG): container finished" podID="cc095688-9188-4472-9c26-d4d286e5ef06" containerID="890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e" exitCode=0 Oct 11 10:32:34.429612 master-2 kubenswrapper[4776]: I1011 10:32:34.429119 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e"} Oct 11 10:32:34.429612 master-2 kubenswrapper[4776]: I1011 10:32:34.429162 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerStarted","Data":"38caa553aa3028fefa0c3bd77280e5deedf30358e11b27817863ca0e8b11f26f"} Oct 11 10:32:34.497735 master-1 kubenswrapper[4771]: I1011 10:32:34.497580 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:34.497735 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:34.497735 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:34.497735 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:34.497735 master-1 kubenswrapper[4771]: I1011 10:32:34.497722 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: I1011 10:32:34.972616 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: I1011 10:32:34.972714 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:32:35.441960 master-2 kubenswrapper[4776]: I1011 10:32:35.441878 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerStarted","Data":"6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168"} Oct 11 10:32:35.473707 master-2 kubenswrapper[4776]: I1011 10:32:35.473583 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podStartSLOduration=20.473561927 podStartE2EDuration="20.473561927s" podCreationTimestamp="2025-10-11 10:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:35.468932542 +0000 UTC m=+390.253359271" watchObservedRunningTime="2025-10-11 10:32:35.473561927 +0000 UTC m=+390.257988666" Oct 11 10:32:35.497700 master-1 kubenswrapper[4771]: I1011 10:32:35.497562 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:35.497700 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:35.497700 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:35.497700 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:35.497700 master-1 kubenswrapper[4771]: I1011 10:32:35.497669 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: I1011 10:32:35.969392 4776 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:35.969896 master-2 kubenswrapper[4776]: I1011 10:32:35.969486 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:36.497562 master-1 kubenswrapper[4771]: I1011 10:32:36.497413 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:36.497562 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:36.497562 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:36.497562 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:36.498547 master-1 kubenswrapper[4771]: I1011 10:32:36.497555 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: I1011 10:32:36.969716 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:32:36.969872 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: I1011 10:32:36.969825 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:37.497199 master-1 kubenswrapper[4771]: I1011 10:32:37.497105 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:37.497199 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:37.497199 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:37.497199 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:37.497199 master-1 kubenswrapper[4771]: I1011 10:32:37.497188 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:37.587871 master-2 kubenswrapper[4776]: E1011 10:32:37.587811 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-controller-manager-pod.yaml\": /etc/kubernetes/manifests/kube-controller-manager-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Oct 11 10:32:37.591278 master-2 kubenswrapper[4776]: I1011 10:32:37.591211 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"]
Oct 11 10:32:37.594069 master-2 kubenswrapper[4776]: I1011 10:32:37.594025 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.643226 master-2 kubenswrapper[4776]: I1011 10:32:37.643165 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"]
Oct 11 10:32:37.646639 master-2 kubenswrapper[4776]: I1011 10:32:37.646602 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.646704 master-2 kubenswrapper[4776]: I1011 10:32:37.646661 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.747897 master-2 kubenswrapper[4776]: I1011 10:32:37.747825 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.747897 master-2 kubenswrapper[4776]: I1011 10:32:37.747885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.748289 master-2 kubenswrapper[4776]: I1011 10:32:37.747953 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.748289 master-2 kubenswrapper[4776]: I1011 10:32:37.747985 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: I1011 10:32:37.932573 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: I1011 10:32:37.932635 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:37.942165 master-2 kubenswrapper[4776]: I1011 10:32:37.942082 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: I1011 10:32:37.970130 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:37.971115 master-2 kubenswrapper[4776]: I1011 10:32:37.970223 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:37.976463 master-2 kubenswrapper[4776]: W1011 10:32:37.976375 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f88b73b0d121e855641834122063be9.slice/crio-916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155 WatchSource:0}: Error finding container 916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155: Status 404 returned error can't find the container with id 916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155
Oct 11 10:32:38.455048 master-2 kubenswrapper[4776]: I1011 10:32:38.454984 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:38.455394 master-2 kubenswrapper[4776]: I1011 10:32:38.455367 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:38.467578 master-2 kubenswrapper[4776]: I1011 10:32:38.467538 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:38.469795 master-2 kubenswrapper[4776]: I1011 10:32:38.469752 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155"}
Oct 11 10:32:38.472361 master-2 kubenswrapper[4776]: I1011 10:32:38.472297 4776 generic.go:334] "Generic (PLEG): container finished" podID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerID="26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3" exitCode=0
Oct 11 10:32:38.472501 master-2 kubenswrapper[4776]: I1011 10:32:38.472373 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerDied","Data":"26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3"}
Oct 11 10:32:38.479527 master-2 kubenswrapper[4776]: I1011 10:32:38.479487 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:32:38.497474 master-1 kubenswrapper[4771]: I1011 10:32:38.497328 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:38.497474 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:38.497474 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:38.497474 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:38.498882 master-1 kubenswrapper[4771]: I1011 10:32:38.497477 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:38.608514 master-1 kubenswrapper[4771]: I1011 10:32:38.608405 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"]
Oct 11 10:32:38.608961 master-1 kubenswrapper[4771]: I1011 10:32:38.608803 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" containerID="cri-o://79f8e8a3af9681261cf6c96297e08774526c159a1df96245fda7d956c1a72204" gracePeriod=120
Oct 11 10:32:38.644866 master-2 kubenswrapper[4776]: E1011 10:32:38.644745 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19"
Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: I1011 10:32:38.968891 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: I1011 10:32:38.968992 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:39.478356 master-2 kubenswrapper[4776]: I1011 10:32:39.478287 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:32:39.497500 master-1 kubenswrapper[4771]: I1011 10:32:39.497427 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:39.497500 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:39.497500 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:39.497500 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:39.498568 master-1 kubenswrapper[4771]: I1011 10:32:39.497520 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:39.770562 master-2 kubenswrapper[4776]: I1011 10:32:39.770467 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2"
Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782401 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") "
Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782590 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") "
Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782808 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782890 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock" (OuterVolumeSpecName: "var-lock") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:32:39.783830 master-2 kubenswrapper[4776]: I1011 10:32:39.783426 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") "
Oct 11 10:32:39.784508 master-2 kubenswrapper[4776]: I1011 10:32:39.784439 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:32:39.784508 master-2 kubenswrapper[4776]: I1011 10:32:39.784473 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:32:39.785469 master-2 kubenswrapper[4776]: I1011 10:32:39.785400 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:32:39.885819 master-2 kubenswrapper[4776]: I1011 10:32:39.885735 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: I1011 10:32:39.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: I1011 10:32:39.970382 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485414 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerDied","Data":"2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb"}
Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485462 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb"
Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485427 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2"
Oct 11 10:32:40.497412 master-1 kubenswrapper[4771]: I1011 10:32:40.497304 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:40.497412 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:40.497412 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:40.497412 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:40.498551 master-1 kubenswrapper[4771]: I1011 10:32:40.497420 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:40.911146 master-1 kubenswrapper[4771]: I1011 10:32:40.911025 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 11 10:32:40.912157 master-1 kubenswrapper[4771]: I1011 10:32:40.912087 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:40.968714 master-1 kubenswrapper[4771]: I1011 10:32:40.968650 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: I1011 10:32:40.969586 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:40.969742 master-1 kubenswrapper[4771]: I1011 10:32:40.969692 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:40.969849 master-1 kubenswrapper[4771]: I1011 10:32:40.969767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:40.969919 master-1 kubenswrapper[4771]: I1011 10:32:40.969900 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:40.969928 master-2 kubenswrapper[4776]: I1011 10:32:40.969646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: I1011 10:32:41.061902 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:32:41.061959 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:32:41.062808 master-1 kubenswrapper[4771]: I1011 10:32:41.061982 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:41.071136 master-1 kubenswrapper[4771]: I1011 10:32:41.071078 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.071240 master-1 kubenswrapper[4771]: I1011 10:32:41.071180 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.071288 master-1 kubenswrapper[4771]: I1011 10:32:41.071237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.071448 master-1 kubenswrapper[4771]: I1011 10:32:41.071423 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.071570 master-1 kubenswrapper[4771]: I1011 10:32:41.071450 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.091552 master-1 kubenswrapper[4771]: I1011 10:32:41.091451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access\") pod \"installer-5-master-1\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.288640 master-1 kubenswrapper[4771]: I1011 10:32:41.288431 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-1"
Oct 11 10:32:41.498113 master-1 kubenswrapper[4771]: I1011 10:32:41.497502 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:41.498113 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:41.498113 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:41.498113 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:41.498113 master-1 kubenswrapper[4771]: I1011 10:32:41.497645 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:41.661073 master-2 kubenswrapper[4776]: E1011 10:32:41.661009 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325"
Oct 11 10:32:41.707723 master-2 kubenswrapper[4776]: I1011 10:32:41.707664 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"
Oct 11 10:32:41.707904 master-2 kubenswrapper[4776]: E1011 10:32:41.707777 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:32:41.707904 master-2 kubenswrapper[4776]: E1011 10:32:41.707855 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:34:43.707832802 +0000 UTC m=+518.492259511 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found
Oct 11 10:32:41.773726 master-1 kubenswrapper[4771]: I1011 10:32:41.773591 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 11 10:32:41.957651 master-1 kubenswrapper[4771]: I1011 10:32:41.957596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"f0f830cc-d36c-4ccd-97cb-2d4a99726684","Type":"ContainerStarted","Data":"65e9818b973bf19dd26838510d379ddf1b30f23283f0995cf12628a1f6d4cb94"}
Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: I1011 10:32:41.970171 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: I1011 10:32:41.970236 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:42.496981 master-2 kubenswrapper[4776]: I1011 10:32:42.496939 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"
Oct 11 10:32:42.497429 master-1 kubenswrapper[4771]: I1011 10:32:42.497170 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:42.497429 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:42.497429 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:42.497429 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:42.497429 master-1 kubenswrapper[4771]: I1011 10:32:42.497282 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:42.925019 master-2 kubenswrapper[4776]: I1011 10:32:42.924968 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.41:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.41:8443: connect: connection refused" start-of-body=
Oct 11 10:32:42.925463 master-2 kubenswrapper[4776]: I1011 10:32:42.925022 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.41:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.41:8443: connect: connection refused"
Oct 11 10:32:42.965830 master-1 kubenswrapper[4771]: I1011 10:32:42.965720 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"f0f830cc-d36c-4ccd-97cb-2d4a99726684","Type":"ContainerStarted","Data":"2b7fb64c483453dbfbd93869288690ed38d6d29cb105ac6ec22c06d0d9551aa1"}
Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: I1011 10:32:42.969209 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:42.969514 master-2 kubenswrapper[4776]: I1011 10:32:42.969273 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983521 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"]
Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: E1011 10:32:42.983735 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer"
Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983746 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer"
Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983844 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer"
Oct 11 10:32:42.984476 master-2 kubenswrapper[4776]: I1011 10:32:42.984228 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:42.986024 master-1 kubenswrapper[4771]: I1011 10:32:42.985909 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-5-master-1" podStartSLOduration=2.985878544 podStartE2EDuration="2.985878544s" podCreationTimestamp="2025-10-11 10:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:42.981222409 +0000 UTC m=+394.955448860" watchObservedRunningTime="2025-10-11 10:32:42.985878544 +0000 UTC m=+394.960104985"
Oct 11 10:32:42.988302 master-2 kubenswrapper[4776]: I1011 10:32:42.988265 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Oct 11 10:32:42.998173 master-2 kubenswrapper[4776]: I1011 10:32:42.998144 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"]
Oct 11 10:32:43.026337 master-2 kubenswrapper[4776]: I1011 10:32:43.026285 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.026508 master-2 kubenswrapper[4776]: I1011 10:32:43.026477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.026555 master-2 kubenswrapper[4776]: I1011 10:32:43.026529 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127498 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127538 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127573 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.127938 master-2 kubenswrapper[4776]: I1011 10:32:43.127640 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.127938 master-2 kubenswrapper[4776]: I1011 10:32:43.127751 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.148290 master-2 kubenswrapper[4776]: I1011 10:32:43.148216 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.302466 master-2 kubenswrapper[4776]: I1011 10:32:43.302417 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2"
Oct 11 10:32:43.497649 master-1 kubenswrapper[4771]: I1011 10:32:43.497549 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:43.497649 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:43.497649 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:43.497649 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:43.497649 master-1 kubenswrapper[4771]: I1011 10:32:43.497641 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:43.505239 master-2 kubenswrapper[4776]: I1011 10:32:43.505188 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e"
containerID="a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7" exitCode=0 Oct 11 10:32:43.505239 master-2 kubenswrapper[4776]: I1011 10:32:43.505231 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7"} Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: I1011 10:32:43.970028 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:43.970668 master-2 kubenswrapper[4776]: I1011 10:32:43.970156 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:44.498032 master-1 kubenswrapper[4771]: I1011 10:32:44.497961 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:44.498032 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:44.498032 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:44.498032 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:44.499036 master-1 kubenswrapper[4771]: I1011 10:32:44.498645 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:44.649747 master-2 kubenswrapper[4776]: I1011 10:32:44.649503 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:32:44.649953 master-2 kubenswrapper[4776]: E1011 10:32:44.649831 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:32:44.649953 master-2 kubenswrapper[4776]: E1011 10:32:44.649929 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:34:46.649905515 +0000 UTC m=+521.434332314 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: I1011 10:32:44.989794 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: I1011 10:32:44.989851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:45.129128 master-2 kubenswrapper[4776]: I1011 10:32:45.129072 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154810 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154882 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154909 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154924 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154943 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154970 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155017 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155032 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155076 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155109 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") 
pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156141 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156180 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config" (OuterVolumeSpecName: "config") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156222 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156375 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156609 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156635 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156819 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit" (OuterVolumeSpecName: "audit") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.159086 master-2 kubenswrapper[4776]: I1011 10:32:45.159037 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.159319 master-2 kubenswrapper[4776]: I1011 10:32:45.159238 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq" (OuterVolumeSpecName: "kube-api-access-wc5gq") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "kube-api-access-wc5gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:32:45.160362 master-2 kubenswrapper[4776]: I1011 10:32:45.160309 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.161066 master-2 kubenswrapper[4776]: I1011 10:32:45.161015 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.172385 master-2 kubenswrapper[4776]: I1011 10:32:45.172286 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:45.172623 master-2 kubenswrapper[4776]: E1011 10:32:45.172599 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172623 master-2 kubenswrapper[4776]: I1011 10:32:45.172623 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: E1011 10:32:45.172640 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: I1011 10:32:45.172649 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: E1011 10:32:45.172662 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="fix-audit-permissions" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: I1011 10:32:45.172670 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="fix-audit-permissions" Oct 11 10:32:45.172973 master-2 kubenswrapper[4776]: I1011 10:32:45.172785 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172973 master-2 kubenswrapper[4776]: I1011 10:32:45.172814 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" 
containerName="openshift-apiserver" Oct 11 10:32:45.173800 master-2 kubenswrapper[4776]: I1011 10:32:45.173603 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.189201 master-2 kubenswrapper[4776]: I1011 10:32:45.189159 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:45.256715 master-2 kubenswrapper[4776]: I1011 10:32:45.256634 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.256904 master-2 kubenswrapper[4776]: I1011 10:32:45.256879 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.256904 master-2 kubenswrapper[4776]: I1011 10:32:45.256929 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256961 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod 
\"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256998 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257291 master-2 kubenswrapper[4776]: I1011 10:32:45.257169 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257380 master-2 kubenswrapper[4776]: I1011 10:32:45.257349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257432 master-2 kubenswrapper[4776]: I1011 10:32:45.257385 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257574 master-2 kubenswrapper[4776]: I1011 10:32:45.257548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257665 master-2 kubenswrapper[4776]: I1011 10:32:45.257616 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257819 master-2 kubenswrapper[4776]: I1011 10:32:45.257793 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257819 master-2 kubenswrapper[4776]: I1011 10:32:45.257814 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257828 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: 
I1011 10:32:45.257842 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257854 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257865 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257879 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257891 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257904 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.258135 master-2 kubenswrapper[4776]: I1011 10:32:45.257916 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.258135 master-2 kubenswrapper[4776]: I1011 10:32:45.257929 
4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.279240 master-2 kubenswrapper[4776]: I1011 10:32:45.279199 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"] Oct 11 10:32:45.287308 master-2 kubenswrapper[4776]: W1011 10:32:45.287263 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf WatchSource:0}: Error finding container d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf: Status 404 returned error can't find the container with id d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf Oct 11 10:32:45.359589 master-2 kubenswrapper[4776]: I1011 10:32:45.359523 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359632 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359539 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359710 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359761 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359773 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359785 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359860 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359896 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359954 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.360000 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.360937 master-2 kubenswrapper[4776]: I1011 10:32:45.360883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.361006 master-2 kubenswrapper[4776]: I1011 10:32:45.360970 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.361456 master-2 kubenswrapper[4776]: I1011 10:32:45.361428 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.361576 master-2 kubenswrapper[4776]: I1011 10:32:45.361544 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.362003 master-2 kubenswrapper[4776]: I1011 10:32:45.361976 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.362844 master-2 kubenswrapper[4776]: I1011 10:32:45.362816 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.363554 master-2 kubenswrapper[4776]: I1011 10:32:45.363521 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.364809 master-2 kubenswrapper[4776]: I1011 10:32:45.364771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.384526 master-2 kubenswrapper[4776]: I1011 10:32:45.384486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.490166 master-2 kubenswrapper[4776]: I1011 10:32:45.489746 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:45.496869 master-1 kubenswrapper[4771]: I1011 10:32:45.496780 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:45.496869 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:45.496869 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:45.496869 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:45.497492 master-1 kubenswrapper[4771]: I1011 10:32:45.496876 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:45.515335 master-2 kubenswrapper[4776]: I1011 10:32:45.515085 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerStarted","Data":"2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123"}
Oct 11 10:32:45.515335 master-2 kubenswrapper[4776]: I1011 10:32:45.515129 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerStarted","Data":"d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf"}
Oct 11 10:32:45.516472 master-2 kubenswrapper[4776]: I1011 10:32:45.516423 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910"}
Oct 11 10:32:45.518395 master-2 kubenswrapper[4776]: I1011 10:32:45.518365 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"632a135875099c1d39a46b5212f4753eda648d4f1ce35df8cc0f167cab38ce86"}
Oct 11 10:32:45.518471 master-2 kubenswrapper[4776]: I1011 10:32:45.518417 4776 scope.go:117] "RemoveContainer" containerID="2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652"
Oct 11 10:32:45.518471 master-2 kubenswrapper[4776]: I1011 10:32:45.518438 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt"
Oct 11 10:32:45.531252 master-2 kubenswrapper[4776]: I1011 10:32:45.530586 4776 scope.go:117] "RemoveContainer" containerID="a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7"
Oct 11 10:32:45.541754 master-2 kubenswrapper[4776]: I1011 10:32:45.541591 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-2" podStartSLOduration=3.541570139 podStartE2EDuration="3.541570139s" podCreationTimestamp="2025-10-11 10:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:45.535756976 +0000 UTC m=+400.320183685" watchObservedRunningTime="2025-10-11 10:32:45.541570139 +0000 UTC m=+400.325996848"
Oct 11 10:32:45.544037 master-2 kubenswrapper[4776]: I1011 10:32:45.544002 4776 scope.go:117] "RemoveContainer" containerID="b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01"
Oct 11 10:32:45.561645 master-2 kubenswrapper[4776]: I1011 10:32:45.561595 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"]
Oct 11 10:32:45.565514 master-2 kubenswrapper[4776]: I1011 10:32:45.565463 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"]
Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: I1011 10:32:45.968364 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:45.968750 master-2 kubenswrapper[4776]: I1011 10:32:45.968414 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: I1011 10:32:46.061190 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:32:46.061303 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:32:46.062098 master-1 kubenswrapper[4771]: I1011 10:32:46.061333 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:46.073288 master-2 kubenswrapper[4776]: I1011 10:32:46.073193 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" path="/var/lib/kubelet/pods/7a89cb41-fb97-4282-a12d-c6f8d87bc41e/volumes"
Oct 11 10:32:46.138619 master-2 kubenswrapper[4776]: I1011 10:32:46.138576 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"]
Oct 11 10:32:46.139178 master-2 kubenswrapper[4776]: I1011 10:32:46.139156 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:46.142832 master-2 kubenswrapper[4776]: I1011 10:32:46.142798 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Oct 11 10:32:46.142915 master-2 kubenswrapper[4776]: I1011 10:32:46.142836 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt"
Oct 11 10:32:46.154482 master-2 kubenswrapper[4776]: I1011 10:32:46.150717 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"]
Oct 11 10:32:46.159046 master-2 kubenswrapper[4776]: I1011 10:32:46.158513 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"]
Oct 11 10:32:46.169808 master-2 kubenswrapper[4776]: I1011 10:32:46.169761 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:46.257274 master-2 kubenswrapper[4776]: W1011 10:32:46.257235 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a156e42_88da_4ce6_9995_6865609e2711.slice/crio-ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868 WatchSource:0}: Error finding container ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868: Status 404 returned error can't find the container with id ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868
Oct 11 10:32:46.271050 master-2 kubenswrapper[4776]: I1011 10:32:46.271008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:46.290817 master-2 kubenswrapper[4776]: I1011 10:32:46.290787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:46.463133 master-2 kubenswrapper[4776]: I1011 10:32:46.463070 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:46.497810 master-1 kubenswrapper[4771]: I1011 10:32:46.497671 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:46.497810 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:46.497810 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:46.497810 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:46.497810 master-1 kubenswrapper[4771]: I1011 10:32:46.497790 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531737 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3" exitCode=0
Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3"}
Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531834 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868"}
Oct 11 10:32:46.877720 master-2 kubenswrapper[4776]: I1011 10:32:46.877469 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"]
Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: I1011 10:32:46.968827 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: I1011 10:32:46.968880 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:47.345585 master-2 kubenswrapper[4776]: W1011 10:32:47.345344 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5e255b2_14b3_42ed_9396_f96c40e231c0.slice/crio-4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48 WatchSource:0}: Error finding container 4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48: Status 404 returned error can't find the container with id 4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48
Oct 11 10:32:47.497474 master-1 kubenswrapper[4771]: I1011 10:32:47.497388 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:47.497474 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:47.497474 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:47.497474 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:47.498462 master-1 kubenswrapper[4771]: I1011 10:32:47.497500 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:47.546402 master-2 kubenswrapper[4776]: I1011 10:32:47.546327 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" event={"ID":"a5e255b2-14b3-42ed-9396-f96c40e231c0","Type":"ContainerStarted","Data":"4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48"}
Oct 11 10:32:47.548636 master-2 kubenswrapper[4776]: I1011 10:32:47.548566 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf"}
Oct 11 10:32:47.780153 master-1 kubenswrapper[4771]: I1011 10:32:47.780004 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log"
Oct 11 10:32:47.786749 master-2 kubenswrapper[4776]: I1011 10:32:47.786660 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log"
Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: I1011 10:32:47.969274 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: I1011 10:32:47.969336 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:48.498053 master-1 kubenswrapper[4771]: I1011 10:32:48.497940 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:48.498053 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:48.498053 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:48.498053 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:48.498788 master-1 kubenswrapper[4771]: I1011 10:32:48.498106 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555265 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece"}
Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555309 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd"}
Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555319 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962"}
Oct 11 10:32:48.556966 master-2 kubenswrapper[4776]: I1011 10:32:48.556947 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" event={"ID":"a5e255b2-14b3-42ed-9396-f96c40e231c0","Type":"ContainerStarted","Data":"8ec52bd758291544a17b9fe1ed1360f8ff65d06f8f59fe21646afc6be8c9e794"}
Oct 11 10:32:48.557309 master-2 kubenswrapper[4776]: I1011 10:32:48.557257 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:48.559129 master-2 kubenswrapper[4776]: I1011 10:32:48.559093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0"}
Oct 11 10:32:48.563352 master-2 kubenswrapper[4776]: I1011 10:32:48.563318 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:32:48.585334 master-2 kubenswrapper[4776]: I1011 10:32:48.585219 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=11.585197576 podStartE2EDuration="11.585197576s" podCreationTimestamp="2025-10-11 10:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.582855424 +0000 UTC m=+403.367282173" watchObservedRunningTime="2025-10-11 10:32:48.585197576 +0000 UTC m=+403.369624325"
Oct 11 10:32:48.603280 master-2 kubenswrapper[4776]: I1011 10:32:48.603193 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podStartSLOduration=2.6031722 podStartE2EDuration="2.6031722s" podCreationTimestamp="2025-10-11 10:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.601722162 +0000 UTC m=+403.386148871" watchObservedRunningTime="2025-10-11 10:32:48.6031722 +0000 UTC m=+403.387598949"
Oct 11 10:32:48.631726 master-2 kubenswrapper[4776]: I1011 10:32:48.629447 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podStartSLOduration=57.629432254 podStartE2EDuration="57.629432254s" podCreationTimestamp="2025-10-11 10:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.626747893 +0000 UTC m=+403.411174602" watchObservedRunningTime="2025-10-11 10:32:48.629432254 +0000 UTC m=+403.413858963"
Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: I1011 10:32:48.968949 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: I1011 10:32:48.969006 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:49.496730 master-1 kubenswrapper[4771]: I1011 10:32:49.496681 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:49.496730 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:49.496730 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:49.496730 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:49.497029 master-1 kubenswrapper[4771]: I1011 10:32:49.496740 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:49.567906 master-2 kubenswrapper[4776]: I1011 10:32:49.567855 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log"
Oct 11 10:32:49.569016 master-2 kubenswrapper[4776]: I1011 10:32:49.568793 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log"
Oct 11 10:32:49.569016 master-2 kubenswrapper[4776]: I1011 10:32:49.568838 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd" exitCode=1
Oct 11 10:32:49.569484 master-2 kubenswrapper[4776]: I1011 10:32:49.569417 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"}
Oct 11 10:32:49.569803 master-2 kubenswrapper[4776]: I1011 10:32:49.569764 4776 scope.go:117] "RemoveContainer" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314"
Oct 11 10:32:49.570660 master-2 kubenswrapper[4776]: I1011 10:32:49.570622 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"
Oct 11 10:32:49.572235 master-2 kubenswrapper[4776]: E1011 10:32:49.571321 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c"
Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: I1011 10:32:49.970103 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:49.970677 master-2 kubenswrapper[4776]: I1011 10:32:49.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:50.489923 master-2 kubenswrapper[4776]: I1011 10:32:50.489862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:50.490149 master-2 kubenswrapper[4776]: I1011 10:32:50.490037 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:50.497574 master-2 kubenswrapper[4776]: I1011 10:32:50.497535 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:50.498272 master-1 kubenswrapper[4771]: I1011 10:32:50.498053 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:50.498272 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:50.498272 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:50.498272 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:50.499347 master-1 kubenswrapper[4771]: I1011 10:32:50.498273 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:50.584161 master-2 kubenswrapper[4776]: I1011 10:32:50.584108 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log"
Oct 11 10:32:50.591859 master-2 kubenswrapper[4776]: I1011 10:32:50.591787 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: I1011 10:32:50.969537 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:50.969897 master-2 kubenswrapper[4776]: I1011 10:32:50.969875 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: I1011 10:32:51.063971 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:32:51.064078 master-1 kubenswrapper[4771]: I1011 10:32:51.064076 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:51.065117 master-1 kubenswrapper[4771]: I1011 10:32:51.064254 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"
Oct 11 10:32:51.498097 master-1 kubenswrapper[4771]: I1011 10:32:51.497847 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:51.498097 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:51.498097 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:51.498097 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:51.498097 master-1 kubenswrapper[4771]: I1011 10:32:51.497953 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:51.829088 master-2 kubenswrapper[4776]: I1011 10:32:51.829033 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"]
Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: I1011 10:32:51.969109 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:51.969457 master-2 kubenswrapper[4776]: I1011 10:32:51.969180 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure"
output="HTTP probe failed with statuscode: 500" Oct 11 10:32:52.097572 master-1 kubenswrapper[4771]: E1011 10:32:52.097432 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podUID="d7647696-42d9-4dd9-bc3b-a4d52a42cf9a" Oct 11 10:32:52.098199 master-1 kubenswrapper[4771]: E1011 10:32:52.097973 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podUID="6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b" Oct 11 10:32:52.498303 master-1 kubenswrapper[4771]: I1011 10:32:52.498092 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:52.498303 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:52.498303 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:52.498303 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:52.498303 master-1 kubenswrapper[4771]: I1011 10:32:52.498192 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:52.944492 master-2 kubenswrapper[4776]: I1011 10:32:52.944397 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"] Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: I1011 10:32:52.969722 4776 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:52.970077 master-2 kubenswrapper[4776]: I1011 10:32:52.969771 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:53.026865 master-1 kubenswrapper[4771]: I1011 10:32:53.026757 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:32:53.027234 master-1 kubenswrapper[4771]: I1011 10:32:53.026781 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:32:53.497177 master-1 kubenswrapper[4771]: I1011 10:32:53.497095 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:53.497177 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:53.497177 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:53.497177 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:53.497722 master-1 kubenswrapper[4771]: I1011 10:32:53.497193 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:53.603820 master-2 kubenswrapper[4776]: I1011 10:32:53.603715 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" containerID="cri-o://90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf" gracePeriod=120 Oct 11 10:32:53.603820 master-2 kubenswrapper[4776]: I1011 10:32:53.603772 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0" gracePeriod=120 Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: I1011 10:32:53.971331 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:53.972317 master-2 kubenswrapper[4776]: I1011 10:32:53.972288 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:54.497917 master-1 kubenswrapper[4771]: I1011 10:32:54.497853 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:54.497917 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:54.497917 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:54.497917 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:54.498766 master-1 kubenswrapper[4771]: I1011 10:32:54.497946 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:54.612136 master-2 kubenswrapper[4776]: I1011 10:32:54.612034 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0" exitCode=0 Oct 11 10:32:54.612136 master-2 kubenswrapper[4776]: I1011 10:32:54.612078 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0"} Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: I1011 10:32:54.969002 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: I1011 10:32:54.969057 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:55.394382 master-1 kubenswrapper[4771]: I1011 10:32:55.394284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:32:55.394654 master-1 kubenswrapper[4771]: E1011 10:32:55.394600 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:34:57.394559322 +0000 UTC m=+529.368785953 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:32:55.495685 master-1 kubenswrapper[4771]: I1011 10:32:55.495568 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: I1011 10:32:55.495728 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok 
Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: I1011 10:32:55.495812 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:55.496024 master-1 kubenswrapper[4771]: E1011 10:32:55.495980 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:34:57.495932801 +0000 UTC m=+529.470159282 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:32:55.497886 master-1 kubenswrapper[4771]: I1011 10:32:55.497836 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:55.497886 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:55.497886 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:55.497886 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:55.498501 master-1 kubenswrapper[4771]: I1011 10:32:55.497907 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: I1011 10:32:55.969352 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:55.969794 master-2 kubenswrapper[4776]: I1011 10:32:55.969765 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: I1011 10:32:56.062971 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:32:56.063108 master-1 kubenswrapper[4771]: I1011 10:32:56.063045 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:56.497936 master-1 kubenswrapper[4771]: I1011 
10:32:56.497724 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:56.497936 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:56.497936 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:56.497936 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:56.497936 master-1 kubenswrapper[4771]: I1011 10:32:56.497817 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: I1011 10:32:56.969329 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:56.970432 master-2 kubenswrapper[4776]: I1011 10:32:56.969455 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:57.498849 master-1 kubenswrapper[4771]: I1011 10:32:57.498742 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:57.498849 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:57.498849 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:57.498849 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:57.500218 master-1 kubenswrapper[4771]: I1011 10:32:57.498871 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:57.943286 master-2 kubenswrapper[4776]: I1011 10:32:57.943166 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943286 master-2 kubenswrapper[4776]: I1011 10:32:57.943264 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943613 master-2 kubenswrapper[4776]: I1011 10:32:57.943302 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943613 master-2 kubenswrapper[4776]: I1011 10:32:57.943335 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.950707 master-2 kubenswrapper[4776]: I1011 10:32:57.950616 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.951103 master-2 kubenswrapper[4776]: I1011 10:32:57.951060 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: 
I1011 10:32:57.969626 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:57.970473 master-2 kubenswrapper[4776]: I1011 10:32:57.969707 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:58.498019 master-1 kubenswrapper[4771]: I1011 10:32:58.497931 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:58.498019 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:32:58.498019 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:32:58.498019 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:32:58.498514 master-1 kubenswrapper[4771]: I1011 10:32:58.498031 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:58.536971 master-1 kubenswrapper[4771]: I1011 10:32:58.536885 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:32:58.647631 master-2 kubenswrapper[4776]: I1011 
10:32:58.644991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:58.647631 master-2 kubenswrapper[4776]: I1011 10:32:58.646557 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: I1011 10:32:58.968794 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:58.969176 master-2 kubenswrapper[4776]: I1011 10:32:58.968933 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:59.147847 master-1 kubenswrapper[4771]: E1011 10:32:59.147719 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" podUID="537a2b50-0394-47bd-941a-def350316943" Oct 11 10:32:59.496971 master-1 kubenswrapper[4771]: I1011 10:32:59.496687 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:59.496971 
master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:32:59.496971 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:32:59.496971 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:32:59.496971 master-1 kubenswrapper[4771]: I1011 10:32:59.496836 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: I1011 10:32:59.969861 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:32:59.970642 master-2 kubenswrapper[4776]: I1011 10:32:59.969952 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:00.067575 master-1 kubenswrapper[4771]: I1011 10:33:00.067493 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: I1011 10:33:00.495732 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:33:00.497189 master-1 kubenswrapper[4771]: I1011 10:33:00.496992 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:00.497189 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:00.497189 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:00.497189 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:00.497189 master-1 kubenswrapper[4771]: I1011 10:33:00.497094 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:00.497518 master-2 kubenswrapper[4776]: I1011 10:33:00.495826 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: I1011 10:33:00.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: I1011 10:33:00.970387 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: I1011 10:33:01.061858 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:33:01.061968 master-1 kubenswrapper[4771]: I1011 10:33:01.061960 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:01.159815 master-1 kubenswrapper[4771]: E1011 10:33:01.159635 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" podUID="c9e9455e-0b47-4623-9b4c-ef79cf62a254"
Oct 11 10:33:01.497953 master-1 kubenswrapper[4771]: I1011 10:33:01.497772 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:01.497953 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:01.497953 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:01.497953 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:01.497953 master-1 kubenswrapper[4771]: I1011 10:33:01.497874 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:01.781791 master-2 kubenswrapper[4776]: I1011 10:33:01.781700 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"]
Oct 11 10:33:01.782513 master-2 kubenswrapper[4776]: I1011 10:33:01.782462 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.787045 master-2 kubenswrapper[4776]: I1011 10:33:01.786974 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Oct 11 10:33:01.801245 master-2 kubenswrapper[4776]: I1011 10:33:01.801183 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"]
Oct 11 10:33:01.881613 master-2 kubenswrapper[4776]: I1011 10:33:01.881531 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.881916 master-2 kubenswrapper[4776]: I1011 10:33:01.881871 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.881979 master-2 kubenswrapper[4776]: I1011 10:33:01.881965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: I1011 10:33:01.970002 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:01.970412 master-2 kubenswrapper[4776]: I1011 10:33:01.970115 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:01.983245 master-2 kubenswrapper[4776]: I1011 10:33:01.983199 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983321 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983399 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983414 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:02.016582 master-2 kubenswrapper[4776]: I1011 10:33:02.016504 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:02.080628 master-1 kubenswrapper[4771]: I1011 10:33:02.080471 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:33:02.110803 master-2 kubenswrapper[4776]: I1011 10:33:02.110625 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:02.292902 master-1 kubenswrapper[4771]: I1011 10:33:02.292785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"
Oct 11 10:33:02.293800 master-1 kubenswrapper[4771]: E1011 10:33:02.292999 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:33:02.293800 master-1 kubenswrapper[4771]: E1011 10:33:02.293187 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:35:04.293145902 +0000 UTC m=+536.267372513 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:33:02.498172 master-1 kubenswrapper[4771]: I1011 10:33:02.497934 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:02.498172 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:02.498172 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:02.498172 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:02.498172 master-1 kubenswrapper[4771]: I1011 10:33:02.498016 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:02.547046 master-2 kubenswrapper[4776]: I1011 10:33:02.546989 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"]
Oct 11 10:33:02.551715 master-2 kubenswrapper[4776]: W1011 10:33:02.551645 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfc3dcbf6_abe1_45ca_992b_4d1c7e419128.slice/crio-b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f WatchSource:0}: Error finding container b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f: Status 404 returned error can't find the container with id b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f
Oct 11 10:33:02.663198 master-2 kubenswrapper[4776]: I1011 10:33:02.663130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerStarted","Data":"b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f"}
Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: I1011 10:33:02.970052 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:02.970468 master-2 kubenswrapper[4776]: I1011 10:33:02.970421 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:03.497501 master-1 kubenswrapper[4771]: I1011 10:33:03.497431 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:03.497501 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:03.497501 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:03.497501 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:03.498822 master-1 kubenswrapper[4771]: I1011 10:33:03.498693 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:03.673329 master-2 kubenswrapper[4776]: I1011 10:33:03.673188 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerStarted","Data":"1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb"}
Oct 11 10:33:03.700944 master-2 kubenswrapper[4776]: I1011 10:33:03.700771 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-2" podStartSLOduration=2.700725102 podStartE2EDuration="2.700725102s" podCreationTimestamp="2025-10-11 10:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:03.694581749 +0000 UTC m=+418.479008478" watchObservedRunningTime="2025-10-11 10:33:03.700725102 +0000 UTC m=+418.485151851"
Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: I1011 10:33:03.969728 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: I1011 10:33:03.969815 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:04.060072 master-2 kubenswrapper[4776]: I1011 10:33:04.060011 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"
Oct 11 10:33:04.498239 master-1 kubenswrapper[4771]: I1011 10:33:04.498116 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:04.498239 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:04.498239 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:04.498239 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:04.498239 master-1 kubenswrapper[4771]: I1011 10:33:04.498234 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:04.680370 master-2 kubenswrapper[4776]: I1011 10:33:04.680293 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log"
Oct 11 10:33:04.681277 master-2 kubenswrapper[4776]: I1011 10:33:04.680777 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"}
Oct 11 10:33:04.722630 master-1 kubenswrapper[4771]: I1011 10:33:04.722498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:33:04.723100 master-1 kubenswrapper[4771]: E1011 10:33:04.722743 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:33:04.723100 master-1 kubenswrapper[4771]: E1011 10:33:04.722907 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:35:06.722869174 +0000 UTC m=+538.697095645 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: I1011 10:33:04.970214 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: I1011 10:33:04.970350 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: I1011 10:33:05.498704 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: I1011 10:33:05.498771 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:05.499040 master-1 kubenswrapper[4771]: I1011 10:33:05.498889 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:05.499040 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:05.499040 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:05.499040 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:05.499040 master-1 kubenswrapper[4771]: I1011 10:33:05.499021 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:05.499843 master-2 kubenswrapper[4776]: I1011 10:33:05.498870 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: I1011 10:33:05.969527 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:05.970322 master-2 kubenswrapper[4776]: I1011 10:33:05.969593 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: I1011 10:33:06.062811 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:33:06.062937 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:33:06.063811 master-1 kubenswrapper[4771]: I1011 10:33:06.062962 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:06.497525 master-1 kubenswrapper[4771]: I1011 10:33:06.497378 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:06.497525 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:06.497525 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:06.497525 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:06.497525 master-1 kubenswrapper[4771]: I1011 10:33:06.497454 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: I1011 10:33:06.971016 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:06.972655 master-2 kubenswrapper[4776]: I1011 10:33:06.972549 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:07.497565 master-1 kubenswrapper[4771]: I1011 10:33:07.497482 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:07.497565 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:07.497565 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:07.497565 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:07.498521 master-1 kubenswrapper[4771]: I1011 10:33:07.497580 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: I1011 10:33:07.969643 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:07.970345 master-2 kubenswrapper[4776]: I1011 10:33:07.970287 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:08.498236 master-1 kubenswrapper[4771]: I1011 10:33:08.498151 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:08.498236 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:08.498236 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:08.498236 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:08.499175 master-1 kubenswrapper[4771]: I1011 10:33:08.498254 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: I1011 10:33:08.969570 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:08.970596 master-2 kubenswrapper[4776]: I1011 10:33:08.969739 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:09.497000 master-1 kubenswrapper[4771]: I1011 10:33:09.496877 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:09.497000 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:09.497000 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:09.497000 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:09.497000 master-1 kubenswrapper[4771]: I1011 10:33:09.496966 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: I1011 10:33:09.970766 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:09.972095 master-2 kubenswrapper[4776]: I1011 10:33:09.970911 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:10.497298 master-1 kubenswrapper[4771]: I1011 10:33:10.497194 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:10.497298 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:10.497298 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:10.497298 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:10.497298 master-1 kubenswrapper[4771]: I1011 10:33:10.497287 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:10.498134
master-2 kubenswrapper[4776]: I1011 10:33:10.498084 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 
10:33:10.498134 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: I1011 10:33:10.498139 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: I1011 10:33:10.969848 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:10.970491 master-2 kubenswrapper[4776]: I1011 10:33:10.969955 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: I1011 10:33:11.063677 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: 
[+]informer-sync ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:33:11.063848 master-1 kubenswrapper[4771]: I1011 10:33:11.063780 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:11.497162 master-1 kubenswrapper[4771]: I1011 10:33:11.497026 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:11.497162 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:11.497162 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:11.497162 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:11.497912 master-1 kubenswrapper[4771]: I1011 10:33:11.497879 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: I1011 10:33:11.970437 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:11.971537 master-2 kubenswrapper[4776]: I1011 10:33:11.970551 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:12.421590 master-1 kubenswrapper[4771]: I1011 10:33:12.421491 4771 scope.go:117] "RemoveContainer" containerID="a31d75d150e0d2dcf8878fd1b60bee95ea19d0157365ef6735168ff809442b4b" Oct 11 10:33:12.497235 master-1 kubenswrapper[4771]: I1011 10:33:12.497155 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:12.497235 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:12.497235 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:12.497235 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:12.498121 master-1 kubenswrapper[4771]: I1011 10:33:12.497265 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: I1011 10:33:12.969230 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:12.969579 master-2 kubenswrapper[4776]: I1011 10:33:12.969321 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:13.497002 master-1 kubenswrapper[4771]: I1011 10:33:13.496876 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:13.497002 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:13.497002 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:13.497002 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:13.497002 master-1 kubenswrapper[4771]: I1011 10:33:13.496952 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:13.726937 master-1 kubenswrapper[4771]: I1011 10:33:13.726869 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:33:13.728306 master-1 
kubenswrapper[4771]: I1011 10:33:13.727901 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" containerID="cri-o://2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" gracePeriod=30 Oct 11 10:33:13.728726 master-1 kubenswrapper[4771]: I1011 10:33:13.727965 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" containerID="cri-o://8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" gracePeriod=30 Oct 11 10:33:13.728883 master-1 kubenswrapper[4771]: I1011 10:33:13.727966 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" containerID="cri-o://cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" gracePeriod=30 Oct 11 10:33:13.728883 master-1 kubenswrapper[4771]: I1011 10:33:13.728058 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" containerID="cri-o://958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" gracePeriod=30 Oct 11 10:33:13.728883 master-1 kubenswrapper[4771]: I1011 10:33:13.728108 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" containerID="cri-o://4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" gracePeriod=30 Oct 11 10:33:13.731874 master-1 kubenswrapper[4771]: I1011 10:33:13.731699 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: E1011 10:33:13.731934 4771 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: I1011 10:33:13.731952 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: E1011 10:33:13.731971 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-ensure-env-vars" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: I1011 10:33:13.731981 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-ensure-env-vars" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: E1011 10:33:13.731998 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: I1011 10:33:13.732008 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: E1011 10:33:13.732023 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" Oct 11 10:33:13.732045 master-1 kubenswrapper[4771]: I1011 10:33:13.732032 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 10:33:13.732119 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="setup" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732133 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="setup" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 
10:33:13.732144 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732153 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 10:33:13.732164 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732173 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 10:33:13.732184 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732193 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 10:33:13.732204 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-resources-copy" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732214 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-resources-copy" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732352 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732377 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 
kubenswrapper[4771]: I1011 10:33:13.732409 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732419 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732433 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732447 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732461 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: E1011 10:33:13.732587 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.732861 master-1 kubenswrapper[4771]: I1011 10:33:13.732600 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 11 10:33:13.752643 master-1 kubenswrapper[4771]: I1011 10:33:13.752531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.752643 master-1 kubenswrapper[4771]: I1011 10:33:13.752609 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.753214 master-1 kubenswrapper[4771]: I1011 10:33:13.752673 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.753214 master-1 kubenswrapper[4771]: I1011 10:33:13.752731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.753214 master-1 kubenswrapper[4771]: I1011 10:33:13.752764 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.753214 master-1 kubenswrapper[4771]: I1011 10:33:13.752811 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854555 master-1 kubenswrapper[4771]: I1011 10:33:13.854486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: 
\"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854555 master-1 kubenswrapper[4771]: I1011 10:33:13.854546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854745 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854773 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854679 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854749 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.854856 master-1 kubenswrapper[4771]: I1011 10:33:13.854820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " 
pod="openshift-etcd/etcd-master-1" Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: I1011 10:33:13.969361 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:13.970662 master-2 kubenswrapper[4776]: I1011 10:33:13.969475 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:14.160397 master-1 kubenswrapper[4771]: I1011 10:33:14.160295 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:33:14.161203 master-1 kubenswrapper[4771]: I1011 10:33:14.161137 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 11 10:33:14.162750 master-1 kubenswrapper[4771]: I1011 10:33:14.162695 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 11 10:33:14.165068 master-1 kubenswrapper[4771]: I1011 10:33:14.164991 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" exitCode=2 Oct 11 10:33:14.165068 master-1 kubenswrapper[4771]: I1011 10:33:14.165036 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" exitCode=0 Oct 11 10:33:14.165068 master-1 kubenswrapper[4771]: I1011 10:33:14.165052 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" exitCode=2 Oct 11 10:33:14.167165 master-1 kubenswrapper[4771]: I1011 10:33:14.167097 4771 generic.go:334] "Generic (PLEG): container finished" podID="f0f830cc-d36c-4ccd-97cb-2d4a99726684" containerID="2b7fb64c483453dbfbd93869288690ed38d6d29cb105ac6ec22c06d0d9551aa1" exitCode=0 Oct 11 10:33:14.167165 master-1 kubenswrapper[4771]: I1011 10:33:14.167140 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"f0f830cc-d36c-4ccd-97cb-2d4a99726684","Type":"ContainerDied","Data":"2b7fb64c483453dbfbd93869288690ed38d6d29cb105ac6ec22c06d0d9551aa1"} Oct 11 10:33:14.173801 master-1 kubenswrapper[4771]: I1011 10:33:14.173725 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" Oct 11 10:33:14.497719 master-1 kubenswrapper[4771]: I1011 10:33:14.497552 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:14.497719 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:14.497719 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:14.497719 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:14.497719 master-1 kubenswrapper[4771]: I1011 10:33:14.497648 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:14.629223 master-1 kubenswrapper[4771]: I1011 10:33:14.629117 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:14.629223 master-1 kubenswrapper[4771]: I1011 10:33:14.629217 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: I1011 10:33:14.969419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: I1011 10:33:14.969486 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: I1011 10:33:15.497322 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: 
Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:15.498152 master-2 kubenswrapper[4776]: I1011 10:33:15.497423 4776 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:15.502890 master-1 kubenswrapper[4771]: I1011 10:33:15.502812 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:15.502890 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:15.502890 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:15.502890 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:15.504223 master-1 kubenswrapper[4771]: I1011 10:33:15.502913 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:15.535152 master-1 kubenswrapper[4771]: I1011 10:33:15.535067 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-1" Oct 11 10:33:15.575033 master-1 kubenswrapper[4771]: I1011 10:33:15.574900 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access\") pod \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " Oct 11 10:33:15.575311 master-1 kubenswrapper[4771]: I1011 10:33:15.575202 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir\") pod \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " Oct 11 10:33:15.575311 master-1 kubenswrapper[4771]: I1011 10:33:15.575295 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock\") pod \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\" (UID: \"f0f830cc-d36c-4ccd-97cb-2d4a99726684\") " Oct 11 10:33:15.575798 master-1 kubenswrapper[4771]: I1011 10:33:15.575648 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f0f830cc-d36c-4ccd-97cb-2d4a99726684" (UID: "f0f830cc-d36c-4ccd-97cb-2d4a99726684"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:15.575798 master-1 kubenswrapper[4771]: I1011 10:33:15.575746 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock" (OuterVolumeSpecName: "var-lock") pod "f0f830cc-d36c-4ccd-97cb-2d4a99726684" (UID: "f0f830cc-d36c-4ccd-97cb-2d4a99726684"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:15.576348 master-1 kubenswrapper[4771]: I1011 10:33:15.576316 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:15.577850 master-1 kubenswrapper[4771]: I1011 10:33:15.576349 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f0f830cc-d36c-4ccd-97cb-2d4a99726684-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:15.578643 master-1 kubenswrapper[4771]: I1011 10:33:15.578579 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f0f830cc-d36c-4ccd-97cb-2d4a99726684" (UID: "f0f830cc-d36c-4ccd-97cb-2d4a99726684"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:15.677586 master-1 kubenswrapper[4771]: I1011 10:33:15.677483 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0f830cc-d36c-4ccd-97cb-2d4a99726684-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: I1011 10:33:15.969552 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: I1011 10:33:15.969661 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: I1011 10:33:16.064261 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:33:16.064342 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:33:16.065418 master-1 kubenswrapper[4771]: I1011 10:33:16.064402 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:16.185035 master-1 kubenswrapper[4771]: I1011 10:33:16.184955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"f0f830cc-d36c-4ccd-97cb-2d4a99726684","Type":"ContainerDied","Data":"65e9818b973bf19dd26838510d379ddf1b30f23283f0995cf12628a1f6d4cb94"} Oct 11 10:33:16.185035 master-1 kubenswrapper[4771]: I1011 10:33:16.185018 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e9818b973bf19dd26838510d379ddf1b30f23283f0995cf12628a1f6d4cb94" Oct 11 10:33:16.185434 master-1 kubenswrapper[4771]: I1011 10:33:16.185040 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-1" Oct 11 10:33:16.497479 master-1 kubenswrapper[4771]: I1011 10:33:16.497196 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:16.497479 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:16.497479 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:16.497479 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:16.497479 master-1 kubenswrapper[4771]: I1011 10:33:16.497300 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: I1011 10:33:16.971431 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: I1011 10:33:16.971548 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:17.497310 master-1 kubenswrapper[4771]: I1011 10:33:17.497222 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:17.497310 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:17.497310 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:17.497310 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:17.498023 master-1 kubenswrapper[4771]: I1011 10:33:17.497324 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:17.765482 master-2 kubenswrapper[4776]: I1011 10:33:17.765433 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:33:17.791087 master-1 kubenswrapper[4771]: I1011 10:33:17.790932 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:33:17.799492 master-2 kubenswrapper[4776]: I1011 10:33:17.799333 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: I1011 10:33:17.970575 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:17.970698 master-2 
kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:17.970976 master-2 kubenswrapper[4776]: I1011 10:33:17.970731 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:18.497622 master-1 kubenswrapper[4771]: I1011 10:33:18.497505 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:18.497622 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:18.497622 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:18.497622 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:18.497622 master-1 kubenswrapper[4771]: I1011 10:33:18.497604 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: I1011 10:33:18.970201 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:18.971249 master-2 kubenswrapper[4776]: I1011 10:33:18.970829 4776 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:19.497292 master-1 kubenswrapper[4771]: I1011 10:33:19.497219 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:19.497292 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:19.497292 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:19.497292 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:19.497615 master-1 kubenswrapper[4771]: I1011 10:33:19.497311 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:19.628440 master-1 kubenswrapper[4771]: I1011 10:33:19.628325 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:19.629187 master-1 kubenswrapper[4771]: I1011 10:33:19.628451 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: I1011 10:33:19.970352 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:19.971378 master-2 kubenswrapper[4776]: I1011 10:33:19.970424 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: I1011 10:33:20.496146 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:20.496222 
master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:20.497319 master-2 kubenswrapper[4776]: I1011 10:33:20.496223 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:20.497874 master-1 kubenswrapper[4771]: I1011 10:33:20.497790 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:20.497874 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:20.497874 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:20.497874 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:20.497874 master-1 kubenswrapper[4771]: I1011 10:33:20.497871 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: I1011 10:33:20.970517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:20.971580 master-2 kubenswrapper[4776]: I1011 10:33:20.970617 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: I1011 10:33:21.065392 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: 
[+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:33:21.065462 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:33:21.066471 master-1 kubenswrapper[4771]: I1011 10:33:21.065498 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:21.497910 master-1 kubenswrapper[4771]: I1011 10:33:21.497767 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:21.497910 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:21.497910 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:21.497910 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:21.497910 master-1 kubenswrapper[4771]: I1011 10:33:21.497863 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: I1011 10:33:21.969907 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:33:21.969970 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:21.970574 master-2 kubenswrapper[4776]: I1011 10:33:21.970518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:22.497334 master-1 kubenswrapper[4771]: I1011 10:33:22.497243 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:22.497334 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:22.497334 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:22.497334 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:22.498272 master-1 kubenswrapper[4771]: I1011 10:33:22.497352 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: I1011 10:33:22.971167 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: healthz 
check failed Oct 11 10:33:22.972934 master-2 kubenswrapper[4776]: I1011 10:33:22.971280 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:23.094312 master-2 kubenswrapper[4776]: I1011 10:33:23.094212 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:33:23.098099 master-2 kubenswrapper[4776]: I1011 10:33:23.098031 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.171856 master-2 kubenswrapper[4776]: I1011 10:33:23.171758 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.225904 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.226016 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.226038 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.327614 master-2 kubenswrapper[4776]: I1011 10:33:23.327520 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.327614 master-2 kubenswrapper[4776]: I1011 10:33:23.327625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327803 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327805 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.466806 master-2 kubenswrapper[4776]: I1011 10:33:23.466729 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.500885 master-1 kubenswrapper[4771]: I1011 10:33:23.500752 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:23.500885 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:23.500885 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:23.500885 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:23.501855 master-1 kubenswrapper[4771]: I1011 10:33:23.500907 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:23.842329 master-2 kubenswrapper[4776]: I1011 10:33:23.841967 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" exitCode=0 Oct 11 10:33:23.843336 master-2 kubenswrapper[4776]: 
I1011 10:33:23.842655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerDied","Data":"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1"} Oct 11 10:33:23.843336 master-2 kubenswrapper[4776]: I1011 10:33:23.842758 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"57f45f7af0db732a260dad8f49ae694c1ca688994699767f3768f884b738ad40"} Oct 11 10:33:23.845182 master-2 kubenswrapper[4776]: I1011 10:33:23.845131 4776 generic.go:334] "Generic (PLEG): container finished" podID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerID="2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123" exitCode=0 Oct 11 10:33:23.845182 master-2 kubenswrapper[4776]: I1011 10:33:23.845174 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerDied","Data":"2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123"} Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: I1011 10:33:23.969465 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: I1011 10:33:23.969533 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" 
podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:24.498449 master-1 kubenswrapper[4771]: I1011 10:33:24.498344 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:24.498449 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:24.498449 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:24.498449 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:24.498449 master-1 kubenswrapper[4771]: I1011 10:33:24.498447 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:24.628577 master-1 kubenswrapper[4771]: I1011 10:33:24.628472 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:24.629596 master-1 kubenswrapper[4771]: I1011 10:33:24.628577 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:24.629596 master-1 kubenswrapper[4771]: I1011 10:33:24.628715 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:33:24.629821 master-1 kubenswrapper[4771]: I1011 10:33:24.629622 4771 
patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:24.629821 master-1 kubenswrapper[4771]: I1011 10:33:24.629718 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:24.868247 master-2 kubenswrapper[4776]: I1011 10:33:24.868103 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9"} Oct 11 10:33:24.868714 master-2 kubenswrapper[4776]: I1011 10:33:24.868263 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510"} Oct 11 10:33:24.868714 master-2 kubenswrapper[4776]: I1011 10:33:24.868281 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da"} Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: I1011 10:33:24.969869 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:24.969937 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:24.970342 master-2 kubenswrapper[4776]: I1011 10:33:24.969983 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:25.176057 master-2 kubenswrapper[4776]: I1011 10:33:25.176006 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:33:25.256016 master-2 kubenswrapper[4776]: I1011 10:33:25.255948 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256206 master-2 kubenswrapper[4776]: I1011 10:33:25.256087 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256206 master-2 kubenswrapper[4776]: I1011 10:33:25.256121 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256295 master-2 kubenswrapper[4776]: I1011 10:33:25.256241 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:25.256395 master-2 kubenswrapper[4776]: I1011 10:33:25.256346 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:25.256511 master-2 kubenswrapper[4776]: I1011 10:33:25.256486 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.256511 master-2 kubenswrapper[4776]: I1011 10:33:25.256505 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.258662 master-2 kubenswrapper[4776]: I1011 10:33:25.258630 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:25.359149 master-2 kubenswrapper[4776]: I1011 10:33:25.358982 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: I1011 10:33:25.494558 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:25.495360 master-2 kubenswrapper[4776]: I1011 10:33:25.494630 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:25.497312 master-1 kubenswrapper[4771]: I1011 10:33:25.497130 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:25.497312 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:25.497312 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:25.497312 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:25.497312 master-1 kubenswrapper[4771]: I1011 10:33:25.497327 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:25.876853 master-2 kubenswrapper[4776]: I1011 10:33:25.876793 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6"} Oct 11 
10:33:25.876853 master-2 kubenswrapper[4776]: I1011 10:33:25.876846 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57"} Oct 11 10:33:25.877926 master-2 kubenswrapper[4776]: I1011 10:33:25.877895 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:25.879218 master-2 kubenswrapper[4776]: I1011 10:33:25.879190 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerDied","Data":"d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf"} Oct 11 10:33:25.879296 master-2 kubenswrapper[4776]: I1011 10:33:25.879221 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf" Oct 11 10:33:25.879296 master-2 kubenswrapper[4776]: I1011 10:33:25.879256 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:33:25.902510 master-2 kubenswrapper[4776]: I1011 10:33:25.902433 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-2" podStartSLOduration=2.902414079 podStartE2EDuration="2.902414079s" podCreationTimestamp="2025-10-11 10:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:25.899299177 +0000 UTC m=+440.683725886" watchObservedRunningTime="2025-10-11 10:33:25.902414079 +0000 UTC m=+440.686840788" Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: I1011 10:33:25.968449 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:25.968972 master-2 kubenswrapper[4776]: I1011 10:33:25.968550 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: I1011 10:33:26.065869 4771 patch_prober.go:28] interesting pod/apiserver-6f855d6bcf-cwmmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]etcd excluded: 
ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:33:26.066023 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:33:26.067599 master-1 kubenswrapper[4771]: I1011 10:33:26.066055 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:26.497527 master-1 kubenswrapper[4771]: I1011 10:33:26.497317 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:26.497527 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:26.497527 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:26.497527 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:26.497527 master-1 
kubenswrapper[4771]: I1011 10:33:26.497440 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: I1011 10:33:26.970051 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:26.970623 master-2 kubenswrapper[4776]: I1011 10:33:26.970160 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:27.497318 master-1 kubenswrapper[4771]: I1011 10:33:27.497209 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:27.497318 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:27.497318 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:27.497318 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:27.497318 master-1 kubenswrapper[4771]: I1011 10:33:27.497292 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: I1011 10:33:27.970331 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:27.971281 master-2 kubenswrapper[4776]: I1011 10:33:27.970451 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:27.978645 master-2 kubenswrapper[4776]: I1011 10:33:27.978572 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:27.978866 master-2 kubenswrapper[4776]: E1011 10:33:27.978808 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.978866 master-2 kubenswrapper[4776]: I1011 10:33:27.978821 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.978982 master-2 kubenswrapper[4776]: I1011 10:33:27.978916 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.979335 master-2 kubenswrapper[4776]: I1011 10:33:27.979286 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:27.983619 master-2 kubenswrapper[4776]: I1011 10:33:27.983576 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:33:27.984005 master-2 kubenswrapper[4776]: I1011 10:33:27.983974 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 11 10:33:27.995987 master-2 kubenswrapper[4776]: I1011 10:33:27.995907 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:28.143062 master-2 kubenswrapper[4776]: I1011 10:33:28.142975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.244534 master-2 kubenswrapper[4776]: I1011 10:33:28.244413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.267801 master-2 kubenswrapper[4776]: I1011 10:33:28.267754 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.301955 
master-2 kubenswrapper[4776]: I1011 10:33:28.301898 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.467652 master-2 kubenswrapper[4776]: I1011 10:33:28.467611 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.467652 master-2 kubenswrapper[4776]: I1011 10:33:28.467658 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.478317 master-2 kubenswrapper[4776]: I1011 10:33:28.478274 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.498246 master-1 kubenswrapper[4771]: I1011 10:33:28.498129 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:28.498246 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:28.498246 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:28.498246 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:28.498246 master-1 kubenswrapper[4771]: I1011 10:33:28.498260 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:28.763651 master-2 kubenswrapper[4776]: I1011 10:33:28.763585 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:28.899559 master-2 kubenswrapper[4776]: I1011 10:33:28.899496 4776 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" event={"ID":"1a003c5f-2a49-44fb-93a8-7a83319ce8e8","Type":"ContainerStarted","Data":"e944b9f0fe05de000f493e43201fbe6c63e5bc060919c3b00af98aac25efe17d"} Oct 11 10:33:28.903545 master-2 kubenswrapper[4776]: I1011 10:33:28.903521 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: I1011 10:33:28.968718 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:28.968991 master-2 kubenswrapper[4776]: I1011 10:33:28.968767 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:29.496611 master-1 kubenswrapper[4771]: I1011 10:33:29.496529 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:29.496611 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:29.496611 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:29.496611 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:29.497052 master-1 kubenswrapper[4771]: I1011 10:33:29.496689 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:29.628323 master-1 kubenswrapper[4771]: I1011 10:33:29.628244 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:29.628323 master-1 kubenswrapper[4771]: I1011 10:33:29.628328 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:29.906603 master-2 kubenswrapper[4776]: I1011 10:33:29.906559 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" event={"ID":"1a003c5f-2a49-44fb-93a8-7a83319ce8e8","Type":"ContainerStarted","Data":"beaf5a3da8aca93aa44591bd942154456555dc1572d65396b432139667d779a5"} Oct 11 10:33:29.907202 master-2 kubenswrapper[4776]: I1011 10:33:29.907181 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:29.912056 master-2 kubenswrapper[4776]: I1011 10:33:29.912026 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:29.929660 master-2 kubenswrapper[4776]: I1011 10:33:29.929547 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podStartSLOduration=2.929522853 podStartE2EDuration="2.929522853s" podCreationTimestamp="2025-10-11 10:33:27 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:29.928801254 +0000 UTC m=+444.713227973" watchObservedRunningTime="2025-10-11 10:33:29.929522853 +0000 UTC m=+444.713949572" Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: I1011 10:33:29.969109 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: I1011 10:33:29.969161 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:30.278441 master-1 kubenswrapper[4771]: I1011 10:33:30.277430 4771 generic.go:334] "Generic (PLEG): container finished" podID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerID="79f8e8a3af9681261cf6c96297e08774526c159a1df96245fda7d956c1a72204" exitCode=0 Oct 11 10:33:30.278441 master-1 kubenswrapper[4771]: I1011 10:33:30.277506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" event={"ID":"d87cc032-b419-444c-8bf0-ef7405d7369d","Type":"ContainerDied","Data":"79f8e8a3af9681261cf6c96297e08774526c159a1df96245fda7d956c1a72204"} Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: I1011 10:33:30.497911 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:30.498256 master-1 kubenswrapper[4771]: I1011 10:33:30.497600 4771 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:30.498256 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:30.498256 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:30.498256 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:30.498256 master-1 kubenswrapper[4771]: I1011 10:33:30.497691 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:30.498645 master-2 kubenswrapper[4776]: I1011 10:33:30.497989 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:30.640493 master-1 kubenswrapper[4771]: I1011 10:33:30.640400 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:33:30.689533 master-1 kubenswrapper[4771]: I1011 10:33:30.689435 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 10:33:30.689577 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 10:33:30.689652 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 10:33:30.689689 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kldk2\" (UniqueName: \"kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 10:33:30.689733 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 
10:33:30.689780 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.689821 master-1 kubenswrapper[4771]: I1011 10:33:30.689812 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.690323 master-1 kubenswrapper[4771]: I1011 10:33:30.689848 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client\") pod \"d87cc032-b419-444c-8bf0-ef7405d7369d\" (UID: \"d87cc032-b419-444c-8bf0-ef7405d7369d\") " Oct 11 10:33:30.690323 master-1 kubenswrapper[4771]: I1011 10:33:30.690012 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:30.690323 master-1 kubenswrapper[4771]: I1011 10:33:30.690187 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.690961 master-1 kubenswrapper[4771]: I1011 10:33:30.690860 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:30.691109 master-1 kubenswrapper[4771]: I1011 10:33:30.691004 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:30.691403 master-1 kubenswrapper[4771]: I1011 10:33:30.691298 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:30.694160 master-1 kubenswrapper[4771]: I1011 10:33:30.694097 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2" (OuterVolumeSpecName: "kube-api-access-kldk2") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "kube-api-access-kldk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:30.694330 master-1 kubenswrapper[4771]: I1011 10:33:30.694289 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:30.694858 master-1 kubenswrapper[4771]: I1011 10:33:30.694787 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:30.695109 master-1 kubenswrapper[4771]: I1011 10:33:30.695043 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d87cc032-b419-444c-8bf0-ef7405d7369d" (UID: "d87cc032-b419-444c-8bf0-ef7405d7369d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792043 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-audit-policies\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792131 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792155 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kldk2\" (UniqueName: \"kubernetes.io/projected/d87cc032-b419-444c-8bf0-ef7405d7369d-kube-api-access-kldk2\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792178 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792197 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792220 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d87cc032-b419-444c-8bf0-ef7405d7369d-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.792203 master-1 kubenswrapper[4771]: I1011 10:33:30.792239 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d87cc032-b419-444c-8bf0-ef7405d7369d-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: I1011 10:33:30.969800 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:30.970533 master-2 kubenswrapper[4776]: I1011 10:33:30.969891 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:31.286302 master-1 kubenswrapper[4771]: I1011 10:33:31.286221 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" event={"ID":"d87cc032-b419-444c-8bf0-ef7405d7369d","Type":"ContainerDied","Data":"f8786873b90c54bfb0b515ad88ba2ef097b9f25b5ded48493272a640d89c1d55"} Oct 11 10:33:31.286302 master-1 kubenswrapper[4771]: I1011 10:33:31.286289 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk" Oct 11 10:33:31.286888 master-1 kubenswrapper[4771]: I1011 10:33:31.286332 4771 scope.go:117] "RemoveContainer" containerID="79f8e8a3af9681261cf6c96297e08774526c159a1df96245fda7d956c1a72204" Oct 11 10:33:31.311497 master-1 kubenswrapper[4771]: I1011 10:33:31.311408 4771 scope.go:117] "RemoveContainer" containerID="cc3604bd3c6d5088cac6e57645a1372932a2b915b7df557349ccea609bf9af52" Oct 11 10:33:31.342322 master-1 kubenswrapper[4771]: I1011 10:33:31.342240 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"] Oct 11 10:33:31.355621 master-1 kubenswrapper[4771]: I1011 10:33:31.355541 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk"] Oct 11 10:33:31.497927 master-1 kubenswrapper[4771]: I1011 10:33:31.497825 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:31.497927 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:31.497927 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:31.497927 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:31.497927 master-1 kubenswrapper[4771]: I1011 10:33:31.497925 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: I1011 10:33:31.970232 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: I1011 10:33:31.970335 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:32.447201 master-1 kubenswrapper[4771]: I1011 10:33:32.447084 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" path="/var/lib/kubelet/pods/d87cc032-b419-444c-8bf0-ef7405d7369d/volumes" Oct 11 10:33:32.496853 master-1 kubenswrapper[4771]: I1011 10:33:32.496787 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:32.496853 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:32.496853 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:32.496853 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:32.497166 master-1 kubenswrapper[4771]: I1011 10:33:32.496875 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: I1011 10:33:32.970945 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: I1011 10:33:32.971005 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:33.497842 master-1 kubenswrapper[4771]: I1011 10:33:33.497752 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:33.497842 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:33.497842 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:33.497842 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:33.498863 master-1 kubenswrapper[4771]: I1011 10:33:33.497862 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: I1011 10:33:33.970620 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:33.970875 
master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:33.971994 master-2 kubenswrapper[4776]: I1011 10:33:33.970901 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:33.975286 master-2 kubenswrapper[4776]: I1011 10:33:33.975227 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:33:33.976774 master-2 kubenswrapper[4776]: I1011 10:33:33.976740 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.013420 master-2 kubenswrapper[4776]: I1011 10:33:34.013180 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:33:34.118159 master-2 kubenswrapper[4776]: I1011 10:33:34.118112 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.118159 master-2 kubenswrapper[4776]: I1011 10:33:34.118156 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.219791 master-2 kubenswrapper[4776]: I1011 10:33:34.219757 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.219953 master-2 kubenswrapper[4776]: I1011 10:33:34.219868 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.220002 master-2 kubenswrapper[4776]: I1011 10:33:34.219932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.220060 master-2 kubenswrapper[4776]: I1011 10:33:34.220048 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.311539 master-2 kubenswrapper[4776]: I1011 10:33:34.311414 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.497769 master-1 kubenswrapper[4771]: I1011 10:33:34.497654 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:34.497769 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:34.497769 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:34.497769 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:34.497769 master-1 kubenswrapper[4771]: I1011 10:33:34.497752 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:34.628916 master-1 kubenswrapper[4771]: I1011 10:33:34.628789 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:34.628916 master-1 kubenswrapper[4771]: I1011 10:33:34.628898 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:34.936876 master-2 kubenswrapper[4776]: I1011 10:33:34.936823 4776 generic.go:334] "Generic (PLEG): container finished" podID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerID="1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb" exitCode=0 Oct 11 10:33:34.937126 master-2 
kubenswrapper[4776]: I1011 10:33:34.936907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerDied","Data":"1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb"} Oct 11 10:33:34.938995 master-2 kubenswrapper[4776]: I1011 10:33:34.938958 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f" exitCode=0 Oct 11 10:33:34.938995 master-2 kubenswrapper[4776]: I1011 10:33:34.938991 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerDied","Data":"c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f"} Oct 11 10:33:34.939098 master-2 kubenswrapper[4776]: I1011 10:33:34.939006 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"0ed01188c7283cfff70d6c5cb4504465f9e9f1843a1b8c89bb6c36df04a63ac6"} Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: I1011 10:33:34.970555 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:34.970901 master-2 kubenswrapper[4776]: I1011 10:33:34.970635 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" 
podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: I1011 10:33:35.496047 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: I1011 10:33:35.496126 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:35.497332 master-1 kubenswrapper[4771]: I1011 10:33:35.497230 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:35.497332 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:35.497332 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:35.497332 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:35.497781 master-1 kubenswrapper[4771]: I1011 10:33:35.497341 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:35.807297 master-1 kubenswrapper[4771]: I1011 10:33:35.807172 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"]
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: E1011 10:33:35.807480 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: I1011 10:33:35.807502 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: E1011 10:33:35.807529 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f830cc-d36c-4ccd-97cb-2d4a99726684" containerName="installer"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: I1011 10:33:35.807542 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f830cc-d36c-4ccd-97cb-2d4a99726684" containerName="installer"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: E1011 10:33:35.807561 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="fix-audit-permissions"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: I1011 10:33:35.807573 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="fix-audit-permissions"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: I1011 10:33:35.807717 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f830cc-d36c-4ccd-97cb-2d4a99726684" containerName="installer"
Oct 11 10:33:35.808113 master-1 kubenswrapper[4771]: I1011 10:33:35.807739 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87cc032-b419-444c-8bf0-ef7405d7369d" containerName="oauth-apiserver"
Oct 11 10:33:35.808929 master-1 kubenswrapper[4771]: I1011 10:33:35.808906 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.812266 master-1 kubenswrapper[4771]: I1011 10:33:35.812174 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 11 10:33:35.813138 master-1 kubenswrapper[4771]: I1011 10:33:35.812687 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 11 10:33:35.813138 master-1 kubenswrapper[4771]: I1011 10:33:35.813118 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 11 10:33:35.813438 master-1 kubenswrapper[4771]: I1011 10:33:35.813290 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 11 10:33:35.814312 master-1 kubenswrapper[4771]: I1011 10:33:35.814240 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 11 10:33:35.814312 master-1 kubenswrapper[4771]: I1011 10:33:35.814288 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 11 10:33:35.814592 master-1 kubenswrapper[4771]: I1011 10:33:35.814240 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 11 10:33:35.814592 master-1 kubenswrapper[4771]: I1011 10:33:35.814286 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 11 10:33:35.834989 master-1 kubenswrapper[4771]: I1011 10:33:35.834895 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"]
Oct 11 10:33:35.859553 master-1 kubenswrapper[4771]: I1011 10:33:35.859487 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdvt\" (UniqueName: \"kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.859716 master-1 kubenswrapper[4771]: I1011 10:33:35.859607 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.859716 master-1 kubenswrapper[4771]: I1011 10:33:35.859671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.859855 master-1 kubenswrapper[4771]: I1011 10:33:35.859703 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.859855 master-1 kubenswrapper[4771]: I1011 10:33:35.859776 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.859855 master-1 kubenswrapper[4771]: I1011 10:33:35.859848 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.860046 master-1 kubenswrapper[4771]: I1011 10:33:35.859873 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.860046 master-1 kubenswrapper[4771]: I1011 10:33:35.859933 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.951473 master-2 kubenswrapper[4776]: I1011 10:33:35.951390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107"}
Oct 11 10:33:35.951768 master-2 kubenswrapper[4776]: I1011 10:33:35.951482 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c"}
Oct 11 10:33:35.951768 master-2 kubenswrapper[4776]: I1011 10:33:35.951545 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01"}
Oct 11 10:33:35.960444 master-1 kubenswrapper[4771]: I1011 10:33:35.960346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960444 master-1 kubenswrapper[4771]: I1011 10:33:35.960433 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960491 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960550 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960576 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960587 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960611 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdvt\" (UniqueName: \"kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.960865 master-1 kubenswrapper[4771]: I1011 10:33:35.960786 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.962205 master-1 kubenswrapper[4771]: I1011 10:33:35.962095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.962604 master-1 kubenswrapper[4771]: I1011 10:33:35.962536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.962700 master-1 kubenswrapper[4771]: I1011 10:33:35.962590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.966056 master-1 kubenswrapper[4771]: I1011 10:33:35.965966 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.967081 master-1 kubenswrapper[4771]: I1011 10:33:35.966797 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.967212 master-1 kubenswrapper[4771]: I1011 10:33:35.967131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: I1011 10:33:35.970072 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:35.970489 master-2 kubenswrapper[4776]: I1011 10:33:35.970143 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:35.980090 master-2 kubenswrapper[4776]: I1011 10:33:35.979846 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podStartSLOduration=2.979804798 podStartE2EDuration="2.979804798s" podCreationTimestamp="2025-10-11 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:35.973748488 +0000 UTC m=+450.758175247" watchObservedRunningTime="2025-10-11 10:33:35.979804798 +0000 UTC m=+450.764231547"
Oct 11 10:33:35.999401 master-1 kubenswrapper[4771]: I1011 10:33:35.999285 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdvt\" (UniqueName: \"kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt\") pod \"apiserver-68f4c55ff4-z898b\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:36.136510 master-1 kubenswrapper[4771]: I1011 10:33:36.136254 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:33:36.297460 master-2 kubenswrapper[4776]: I1011 10:33:36.297378 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:36.385228 master-2 kubenswrapper[4776]: I1011 10:33:36.384018 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"]
Oct 11 10:33:36.448931 master-2 kubenswrapper[4776]: I1011 10:33:36.448826 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") "
Oct 11 10:33:36.449312 master-2 kubenswrapper[4776]: I1011 10:33:36.449144 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") "
Oct 11 10:33:36.449312 master-2 kubenswrapper[4776]: I1011 10:33:36.449256 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:33:36.449474 master-2 kubenswrapper[4776]: I1011 10:33:36.449324 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") "
Oct 11 10:33:36.449474 master-2 kubenswrapper[4776]: I1011 10:33:36.449405 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock" (OuterVolumeSpecName: "var-lock") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:33:36.449919 master-2 kubenswrapper[4776]: I1011 10:33:36.449874 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:36.449919 master-2 kubenswrapper[4776]: I1011 10:33:36.449899 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:36.452262 master-2 kubenswrapper[4776]: I1011 10:33:36.452144 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:33:36.496547 master-1 kubenswrapper[4771]: I1011 10:33:36.496345 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:36.496547 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:36.496547 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:36.496547 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:36.496547 master-1 kubenswrapper[4771]: I1011 10:33:36.496488 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:36.550987 master-2 kubenswrapper[4776]: I1011 10:33:36.550848 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:36.642016 master-1 kubenswrapper[4771]: I1011 10:33:36.641947 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"]
Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: I1011 10:33:36.972284 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:36.973096 master-2 kubenswrapper[4776]: I1011 10:33:36.972436 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:36.974176 master-2 kubenswrapper[4776]: I1011 10:33:36.974112 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerDied","Data":"b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f"}
Oct 11 10:33:36.974497 master-2 kubenswrapper[4776]: I1011 10:33:36.974190 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f"
Oct 11 10:33:36.974497 master-2 kubenswrapper[4776]: I1011 10:33:36.974270 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2"
Oct 11 10:33:36.974838 master-2 kubenswrapper[4776]: I1011 10:33:36.974600 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:33:37.331041 master-1 kubenswrapper[4771]: I1011 10:33:37.330949 4771 generic.go:334] "Generic (PLEG): container finished" podID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerID="099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61" exitCode=0
Oct 11 10:33:37.331041 master-1 kubenswrapper[4771]: I1011 10:33:37.331013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" event={"ID":"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40","Type":"ContainerDied","Data":"099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61"}
Oct 11 10:33:37.331041 master-1 kubenswrapper[4771]: I1011 10:33:37.331048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" event={"ID":"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40","Type":"ContainerStarted","Data":"ef282371271fc7902dfe16d939904e98053b587f042204eef235e27cd9b5b8b6"}
Oct 11 10:33:37.497376 master-1 kubenswrapper[4771]: I1011 10:33:37.497296 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:37.497376 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:37.497376 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:37.497376 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:37.497692 master-1 kubenswrapper[4771]: I1011 10:33:37.497388 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: I1011 10:33:37.970544 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: I1011 10:33:37.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:38.342158 master-1 kubenswrapper[4771]: I1011 10:33:38.342030 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" event={"ID":"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40","Type":"ContainerStarted","Data":"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"}
Oct 11 10:33:38.374756 master-1 kubenswrapper[4771]: I1011 10:33:38.374607 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podStartSLOduration=60.37457609 podStartE2EDuration="1m0.37457609s" podCreationTimestamp="2025-10-11 10:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:38.373168059 +0000 UTC m=+450.347394540" watchObservedRunningTime="2025-10-11 10:33:38.37457609 +0000 UTC m=+450.348802571"
Oct 11 10:33:38.426206 master-2 kubenswrapper[4776]: I1011 10:33:38.426115 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"]
Oct 11 10:33:38.426754 master-2 kubenswrapper[4776]: E1011 10:33:38.426709 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer"
Oct 11 10:33:38.426754 master-2 kubenswrapper[4776]: I1011 10:33:38.426749 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer"
Oct 11 10:33:38.427044 master-2 kubenswrapper[4776]: I1011 10:33:38.427002 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer"
Oct 11 10:33:38.428154 master-2 kubenswrapper[4776]: I1011 10:33:38.428102 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:38.433020 master-2 kubenswrapper[4776]: I1011 10:33:38.432840 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt"
Oct 11 10:33:38.433490 master-2 kubenswrapper[4776]: I1011 10:33:38.432967 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Oct 11 10:33:38.438830 master-2 kubenswrapper[4776]: I1011 10:33:38.438764 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"]
Oct 11 10:33:38.497486 master-1 kubenswrapper[4771]: I1011 10:33:38.497345 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:38.497486 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:38.497486 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:38.497486 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:38.497486 master-1 kubenswrapper[4771]: I1011 10:33:38.497469 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:38.582082 master-2 kubenswrapper[4776]: I1011 10:33:38.581980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:38.683365 master-2 kubenswrapper[4776]: I1011 10:33:38.683183 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:38.710058 master-2 kubenswrapper[4776]: I1011 10:33:38.709962 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:38.802983 master-2 kubenswrapper[4776]: I1011 10:33:38.802872 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: I1011 10:33:38.969463 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: I1011 10:33:38.969515 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:39.265369 master-2 kubenswrapper[4776]: I1011 10:33:39.265287 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"]
Oct 11 10:33:39.497114 master-1 kubenswrapper[4771]: I1011 10:33:39.496993 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:39.497114 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:39.497114 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:39.497114 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:39.497114 master-1 kubenswrapper[4771]: I1011 10:33:39.497091 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:39.628762 master-1 kubenswrapper[4771]: I1011 10:33:39.628652 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 11 10:33:39.629037 master-1 kubenswrapper[4771]: I1011 10:33:39.628760 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: I1011 10:33:39.971349 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: I1011 10:33:39.971458 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:39.999075 master-2 kubenswrapper[4776]: I1011 10:33:39.998961 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" event={"ID":"c76a7758-6688-4e6c-a01a-c3e29db3c134","Type":"ContainerStarted","Data":"20056c73232015d79ef714b5dad538c641ef391a0cd27dd4dc7ed866bc33b1e0"}
Oct 11 10:33:39.999075 master-2 kubenswrapper[4776]: I1011 10:33:39.999043 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" event={"ID":"c76a7758-6688-4e6c-a01a-c3e29db3c134","Type":"ContainerStarted","Data":"5fd0ea97c1803fb86d9cb87015cc2e41104b2914dc612d7d83cad62059528472"}
Oct 11 10:33:39.999386 master-2 kubenswrapper[4776]: I1011 10:33:39.999303 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:40.006570 master-2 kubenswrapper[4776]: I1011 10:33:40.006531 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"
Oct 11 10:33:40.023843 master-2 kubenswrapper[4776]: I1011 10:33:40.023652 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podStartSLOduration=2.023630833 podStartE2EDuration="2.023630833s" podCreationTimestamp="2025-10-11 10:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:40.021081416 +0000 UTC m=+454.805508165" watchObservedRunningTime="2025-10-11 10:33:40.023630833 +0000 UTC m=+454.808057552"
Oct 11 10:33:40.497386 master-1 kubenswrapper[4771]: I1011 10:33:40.497281 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:40.497386 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:40.497386 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:40.497386 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:40.498429 master-1 kubenswrapper[4771]: I1011 10:33:40.497394 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: I1011 10:33:40.498984 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]:
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:40.500203 master-2 kubenswrapper[4776]: I1011 10:33:40.499071 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: I1011 10:33:40.969613 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:40.970252 master-2 kubenswrapper[4776]: I1011 10:33:40.969805 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:41.136594 master-1 kubenswrapper[4771]: I1011 10:33:41.136471 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" Oct 11 10:33:41.136594 
master-1 kubenswrapper[4771]: I1011 10:33:41.136587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" Oct 11 10:33:41.148672 master-1 kubenswrapper[4771]: I1011 10:33:41.148580 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" Oct 11 10:33:41.372065 master-1 kubenswrapper[4771]: I1011 10:33:41.371956 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" Oct 11 10:33:41.498925 master-1 kubenswrapper[4771]: I1011 10:33:41.498757 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:41.498925 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:41.498925 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:41.498925 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:41.498925 master-1 kubenswrapper[4771]: I1011 10:33:41.498853 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: I1011 10:33:41.971875 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 
10:33:41.971991 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:41.972916 master-2 kubenswrapper[4776]: I1011 10:33:41.972006 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:42.497688 master-1 kubenswrapper[4771]: I1011 10:33:42.497608 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:42.497688 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:42.497688 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:42.497688 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:42.498332 master-1 kubenswrapper[4771]: I1011 10:33:42.497706 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: I1011 10:33:42.970626 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:42.971168 master-2 kubenswrapper[4776]: I1011 10:33:42.970771 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:43.475244 master-2 kubenswrapper[4776]: I1011 10:33:43.475159 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:43.497128 master-1 kubenswrapper[4771]: I1011 10:33:43.497034 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:43.497128 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:43.497128 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:43.497128 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:43.499906 master-1 kubenswrapper[4771]: I1011 10:33:43.497133 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: I1011 10:33:43.970629 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:43.971184 master-2 kubenswrapper[4776]: I1011 10:33:43.970788 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:44.334872 master-1 kubenswrapper[4771]: I1011 10:33:44.334767 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log" Oct 11 10:33:44.335855 master-1 kubenswrapper[4771]: I1011 10:33:44.335785 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:33:44.336476 master-1 kubenswrapper[4771]: I1011 10:33:44.336427 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 11 10:33:44.337757 master-1 kubenswrapper[4771]: I1011 10:33:44.337700 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 11 10:33:44.338409 master-1 kubenswrapper[4771]: I1011 10:33:44.338330 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log" Oct 11 10:33:44.340292 master-1 kubenswrapper[4771]: I1011 10:33:44.340231 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:33:44.347002 master-1 kubenswrapper[4771]: I1011 10:33:44.346928 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" Oct 11 10:33:44.383835 master-1 kubenswrapper[4771]: I1011 10:33:44.383759 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log" Oct 11 10:33:44.384661 master-1 kubenswrapper[4771]: I1011 10:33:44.384609 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 11 10:33:44.385454 master-1 kubenswrapper[4771]: I1011 10:33:44.385351 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 11 10:33:44.386849 master-1 kubenswrapper[4771]: I1011 10:33:44.386804 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 11 10:33:44.387478 master-1 kubenswrapper[4771]: I1011 10:33:44.387431 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log" Oct 11 10:33:44.389092 master-1 kubenswrapper[4771]: I1011 10:33:44.389030 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" exitCode=137 Oct 11 10:33:44.389092 master-1 kubenswrapper[4771]: I1011 10:33:44.389082 4771 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" exitCode=137 Oct 11 10:33:44.389231 master-1 
kubenswrapper[4771]: I1011 10:33:44.389153 4771 scope.go:117] "RemoveContainer" containerID="8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" Oct 11 10:33:44.389231 master-1 kubenswrapper[4771]: I1011 10:33:44.389187 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:33:44.396183 master-1 kubenswrapper[4771]: I1011 10:33:44.396118 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" Oct 11 10:33:44.403145 master-1 kubenswrapper[4771]: I1011 10:33:44.402992 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403220 master-1 kubenswrapper[4771]: I1011 10:33:44.403178 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403420 master-1 kubenswrapper[4771]: I1011 10:33:44.403348 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403493 master-1 kubenswrapper[4771]: I1011 10:33:44.403426 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: 
\"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403545 master-1 kubenswrapper[4771]: I1011 10:33:44.403471 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.403545 master-1 kubenswrapper[4771]: I1011 10:33:44.403454 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.403545 master-1 kubenswrapper[4771]: I1011 10:33:44.403507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403641 master-1 kubenswrapper[4771]: I1011 10:33:44.403507 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.403641 master-1 kubenswrapper[4771]: I1011 10:33:44.403546 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir" (OuterVolumeSpecName: "data-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.403641 master-1 kubenswrapper[4771]: I1011 10:33:44.403552 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir" (OuterVolumeSpecName: "log-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.403749 master-1 kubenswrapper[4771]: I1011 10:33:44.403641 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " Oct 11 10:33:44.403749 master-1 kubenswrapper[4771]: I1011 10:33:44.403681 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:44.404175 master-1 kubenswrapper[4771]: I1011 10:33:44.404123 4771 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.404227 master-1 kubenswrapper[4771]: I1011 10:33:44.404176 4771 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.404227 master-1 kubenswrapper[4771]: I1011 10:33:44.404198 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.404227 master-1 kubenswrapper[4771]: I1011 10:33:44.404217 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.404314 master-1 kubenswrapper[4771]: I1011 10:33:44.404235 4771 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.404314 master-1 kubenswrapper[4771]: I1011 10:33:44.404253 4771 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:33:44.410592 master-1 kubenswrapper[4771]: I1011 10:33:44.410539 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:33:44.441746 master-1 kubenswrapper[4771]: I1011 10:33:44.441682 4771 scope.go:117] 
"RemoveContainer" containerID="958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" Oct 11 10:33:44.447668 master-1 kubenswrapper[4771]: I1011 10:33:44.447595 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" path="/var/lib/kubelet/pods/5268b2f2ae2aef0c7f2e7a6e651ed702/volumes" Oct 11 10:33:44.461904 master-1 kubenswrapper[4771]: I1011 10:33:44.461833 4771 scope.go:117] "RemoveContainer" containerID="cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" Oct 11 10:33:44.486422 master-1 kubenswrapper[4771]: I1011 10:33:44.486344 4771 scope.go:117] "RemoveContainer" containerID="4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" Oct 11 10:33:44.496907 master-1 kubenswrapper[4771]: I1011 10:33:44.496849 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:44.496907 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:44.496907 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:44.496907 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:44.497101 master-1 kubenswrapper[4771]: I1011 10:33:44.496936 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:44.507408 master-1 kubenswrapper[4771]: I1011 10:33:44.507333 4771 scope.go:117] "RemoveContainer" containerID="2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" Oct 11 10:33:44.527221 master-1 kubenswrapper[4771]: I1011 10:33:44.527169 4771 scope.go:117] "RemoveContainer" 
containerID="ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05" Oct 11 10:33:44.555438 master-1 kubenswrapper[4771]: I1011 10:33:44.555378 4771 scope.go:117] "RemoveContainer" containerID="73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55" Oct 11 10:33:44.581872 master-1 kubenswrapper[4771]: I1011 10:33:44.581808 4771 scope.go:117] "RemoveContainer" containerID="7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6" Oct 11 10:33:44.610161 master-1 kubenswrapper[4771]: I1011 10:33:44.610093 4771 scope.go:117] "RemoveContainer" containerID="8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" Oct 11 10:33:44.610796 master-1 kubenswrapper[4771]: E1011 10:33:44.610722 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e\": container with ID starting with 8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e not found: ID does not exist" containerID="8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" Oct 11 10:33:44.610930 master-1 kubenswrapper[4771]: I1011 10:33:44.610808 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e"} err="failed to get container status \"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e\": rpc error: code = NotFound desc = could not find container \"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e\": container with ID starting with 8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e not found: ID does not exist" Oct 11 10:33:44.610930 master-1 kubenswrapper[4771]: I1011 10:33:44.610858 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:33:44.611544 master-1 kubenswrapper[4771]: E1011 10:33:44.611486 
4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde\": container with ID starting with af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde not found: ID does not exist" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:33:44.611683 master-1 kubenswrapper[4771]: I1011 10:33:44.611549 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde"} err="failed to get container status \"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde\": rpc error: code = NotFound desc = could not find container \"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde\": container with ID starting with af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde not found: ID does not exist" Oct 11 10:33:44.611683 master-1 kubenswrapper[4771]: I1011 10:33:44.611591 4771 scope.go:117] "RemoveContainer" containerID="958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" Oct 11 10:33:44.612073 master-1 kubenswrapper[4771]: E1011 10:33:44.611994 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0\": container with ID starting with 958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0 not found: ID does not exist" containerID="958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" Oct 11 10:33:44.612410 master-1 kubenswrapper[4771]: I1011 10:33:44.612061 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0"} err="failed to get container status 
\"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0\": rpc error: code = NotFound desc = could not find container \"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0\": container with ID starting with 958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0 not found: ID does not exist" Oct 11 10:33:44.612410 master-1 kubenswrapper[4771]: I1011 10:33:44.612098 4771 scope.go:117] "RemoveContainer" containerID="cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" Oct 11 10:33:44.612772 master-1 kubenswrapper[4771]: E1011 10:33:44.612703 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711\": container with ID starting with cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711 not found: ID does not exist" containerID="cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" Oct 11 10:33:44.612772 master-1 kubenswrapper[4771]: I1011 10:33:44.612753 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711"} err="failed to get container status \"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711\": rpc error: code = NotFound desc = could not find container \"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711\": container with ID starting with cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711 not found: ID does not exist" Oct 11 10:33:44.612982 master-1 kubenswrapper[4771]: I1011 10:33:44.612783 4771 scope.go:117] "RemoveContainer" containerID="4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" Oct 11 10:33:44.613310 master-1 kubenswrapper[4771]: E1011 10:33:44.613248 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0\": container with ID starting with 4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0 not found: ID does not exist" containerID="4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" Oct 11 10:33:44.613429 master-1 kubenswrapper[4771]: I1011 10:33:44.613300 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0"} err="failed to get container status \"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0\": rpc error: code = NotFound desc = could not find container \"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0\": container with ID starting with 4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0 not found: ID does not exist" Oct 11 10:33:44.613429 master-1 kubenswrapper[4771]: I1011 10:33:44.613333 4771 scope.go:117] "RemoveContainer" containerID="2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" Oct 11 10:33:44.613902 master-1 kubenswrapper[4771]: E1011 10:33:44.613799 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8\": container with ID starting with 2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8 not found: ID does not exist" containerID="2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" Oct 11 10:33:44.613998 master-1 kubenswrapper[4771]: I1011 10:33:44.613928 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8"} err="failed to get container status \"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8\": rpc error: code = NotFound desc = could not find container 
\"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8\": container with ID starting with 2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8 not found: ID does not exist" Oct 11 10:33:44.614074 master-1 kubenswrapper[4771]: I1011 10:33:44.614008 4771 scope.go:117] "RemoveContainer" containerID="ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05" Oct 11 10:33:44.614628 master-1 kubenswrapper[4771]: E1011 10:33:44.614564 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05\": container with ID starting with ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05 not found: ID does not exist" containerID="ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05" Oct 11 10:33:44.614734 master-1 kubenswrapper[4771]: I1011 10:33:44.614627 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05"} err="failed to get container status \"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05\": rpc error: code = NotFound desc = could not find container \"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05\": container with ID starting with ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05 not found: ID does not exist" Oct 11 10:33:44.614734 master-1 kubenswrapper[4771]: I1011 10:33:44.614666 4771 scope.go:117] "RemoveContainer" containerID="73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55" Oct 11 10:33:44.615171 master-1 kubenswrapper[4771]: E1011 10:33:44.615117 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55\": container with ID starting with 
73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55 not found: ID does not exist" containerID="73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55" Oct 11 10:33:44.615257 master-1 kubenswrapper[4771]: I1011 10:33:44.615164 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55"} err="failed to get container status \"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55\": rpc error: code = NotFound desc = could not find container \"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55\": container with ID starting with 73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55 not found: ID does not exist" Oct 11 10:33:44.615257 master-1 kubenswrapper[4771]: I1011 10:33:44.615192 4771 scope.go:117] "RemoveContainer" containerID="7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6" Oct 11 10:33:44.615741 master-1 kubenswrapper[4771]: E1011 10:33:44.615670 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6\": container with ID starting with 7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6 not found: ID does not exist" containerID="7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6" Oct 11 10:33:44.615741 master-1 kubenswrapper[4771]: I1011 10:33:44.615721 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6"} err="failed to get container status \"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6\": rpc error: code = NotFound desc = could not find container \"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6\": container with ID starting with 
7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6 not found: ID does not exist" Oct 11 10:33:44.615957 master-1 kubenswrapper[4771]: I1011 10:33:44.615753 4771 scope.go:117] "RemoveContainer" containerID="8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e" Oct 11 10:33:44.616194 master-1 kubenswrapper[4771]: I1011 10:33:44.616137 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e"} err="failed to get container status \"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e\": rpc error: code = NotFound desc = could not find container \"8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e\": container with ID starting with 8a1f303abdd5f0c02fe6a793df5fa8cce44a25f7b097770a28676ae74b39da7e not found: ID does not exist" Oct 11 10:33:44.616194 master-1 kubenswrapper[4771]: I1011 10:33:44.616166 4771 scope.go:117] "RemoveContainer" containerID="af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde" Oct 11 10:33:44.616715 master-1 kubenswrapper[4771]: I1011 10:33:44.616632 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde"} err="failed to get container status \"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde\": rpc error: code = NotFound desc = could not find container \"af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde\": container with ID starting with af5e1ce8eeaa4f31e923faf3733d7f57ee1a57b8083addf279a4ff665cbc3fde not found: ID does not exist" Oct 11 10:33:44.616715 master-1 kubenswrapper[4771]: I1011 10:33:44.616688 4771 scope.go:117] "RemoveContainer" containerID="958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0" Oct 11 10:33:44.617223 master-1 kubenswrapper[4771]: I1011 10:33:44.617160 4771 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0"} err="failed to get container status \"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0\": rpc error: code = NotFound desc = could not find container \"958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0\": container with ID starting with 958716eeae4e87693f249c2e50f77614f7904dfcce99812a8c2f6b2e06fbacf0 not found: ID does not exist" Oct 11 10:33:44.617223 master-1 kubenswrapper[4771]: I1011 10:33:44.617204 4771 scope.go:117] "RemoveContainer" containerID="cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711" Oct 11 10:33:44.617724 master-1 kubenswrapper[4771]: I1011 10:33:44.617656 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711"} err="failed to get container status \"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711\": rpc error: code = NotFound desc = could not find container \"cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711\": container with ID starting with cf1bd37b9035aa3a513b51ba1e14c267a24bb1ac86b542f0095d01337d817711 not found: ID does not exist" Oct 11 10:33:44.617724 master-1 kubenswrapper[4771]: I1011 10:33:44.617702 4771 scope.go:117] "RemoveContainer" containerID="4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0" Oct 11 10:33:44.618166 master-1 kubenswrapper[4771]: I1011 10:33:44.618107 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0"} err="failed to get container status \"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0\": rpc error: code = NotFound desc = could not find container \"4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0\": container with ID starting with 
4e485d5d5712b6cda3c4f5674c0f91abe7502ccb2d49bc78c03ccbef061a43f0 not found: ID does not exist" Oct 11 10:33:44.618166 master-1 kubenswrapper[4771]: I1011 10:33:44.618144 4771 scope.go:117] "RemoveContainer" containerID="2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8" Oct 11 10:33:44.618779 master-1 kubenswrapper[4771]: I1011 10:33:44.618698 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8"} err="failed to get container status \"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8\": rpc error: code = NotFound desc = could not find container \"2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8\": container with ID starting with 2c883b5cc483b4b1102cd2f4e0032f04b4e86dfac92c219c11959045c43545c8 not found: ID does not exist" Oct 11 10:33:44.618877 master-1 kubenswrapper[4771]: I1011 10:33:44.618784 4771 scope.go:117] "RemoveContainer" containerID="ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05" Oct 11 10:33:44.619270 master-1 kubenswrapper[4771]: I1011 10:33:44.619206 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05"} err="failed to get container status \"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05\": rpc error: code = NotFound desc = could not find container \"ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05\": container with ID starting with ae91d16391eadcea61802eeaddf65a6f0304c66c1e1f73804b3842b3041b8d05 not found: ID does not exist" Oct 11 10:33:44.619270 master-1 kubenswrapper[4771]: I1011 10:33:44.619254 4771 scope.go:117] "RemoveContainer" containerID="73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55" Oct 11 10:33:44.619841 master-1 kubenswrapper[4771]: I1011 10:33:44.619771 4771 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55"} err="failed to get container status \"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55\": rpc error: code = NotFound desc = could not find container \"73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55\": container with ID starting with 73a809cbd925d698effb0736fbe0b6f4efb7a431de066e52bb22b55ae4cc3e55 not found: ID does not exist" Oct 11 10:33:44.619841 master-1 kubenswrapper[4771]: I1011 10:33:44.619810 4771 scope.go:117] "RemoveContainer" containerID="7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6" Oct 11 10:33:44.620225 master-1 kubenswrapper[4771]: I1011 10:33:44.620163 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6"} err="failed to get container status \"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6\": rpc error: code = NotFound desc = could not find container \"7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6\": container with ID starting with 7cb1d532ea2d89b9be52c186df240a63c25f0ae1db2bbf1085a6c556a31b0cb6 not found: ID does not exist" Oct 11 10:33:44.628574 master-1 kubenswrapper[4771]: I1011 10:33:44.628513 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:44.628716 master-1 kubenswrapper[4771]: I1011 10:33:44.628579 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" 
Oct 11 10:33:44.704506 master-1 kubenswrapper[4771]: I1011 10:33:44.704039 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" Oct 11 10:33:44.750960 master-2 kubenswrapper[4776]: E1011 10:33:44.750850 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.753222 master-2 kubenswrapper[4776]: E1011 10:33:44.753162 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in 
memory cache]" Oct 11 10:33:44.753373 master-2 kubenswrapper[4776]: E1011 10:33:44.753234 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.756950 master-2 kubenswrapper[4776]: E1011 10:33:44.756890 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podfc3dcbf6_abe1_45ca_992b_4d1c7e419128.slice/crio-1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757294 master-2 kubenswrapper[4776]: E1011 10:33:44.757250 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757403 master-2 kubenswrapper[4776]: E1011 10:33:44.757360 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757403 master-2 kubenswrapper[4776]: E1011 10:33:44.757395 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: I1011 10:33:44.969735 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: I1011 10:33:44.969797 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:45.028664 master-2 kubenswrapper[4776]: I1011 10:33:45.028594 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"] Oct 11 10:33:45.038078 master-2 kubenswrapper[4776]: I1011 10:33:45.038026 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf" exitCode=0 Oct 11 10:33:45.038288 master-2 kubenswrapper[4776]: I1011 10:33:45.038185 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" 
event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf"} Oct 11 10:33:45.038444 master-2 kubenswrapper[4776]: I1011 10:33:45.038415 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868"} Oct 11 10:33:45.038555 master-2 kubenswrapper[4776]: I1011 10:33:45.038536 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868" Oct 11 10:33:45.069081 master-2 kubenswrapper[4776]: I1011 10:33:45.069042 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:33:45.182942 master-2 kubenswrapper[4776]: I1011 10:33:45.182864 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.182978 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.182999 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 
10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183017 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183041 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183065 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183079 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183099 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183075 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183142 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183114 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183280 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183425 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.184113 master-2 kubenswrapper[4776]: I1011 10:33:45.184059 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184213 master-2 kubenswrapper[4776]: I1011 10:33:45.184135 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.184213 master-2 kubenswrapper[4776]: I1011 10:33:45.184151 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.184608 master-2 kubenswrapper[4776]: I1011 10:33:45.184562 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config" (OuterVolumeSpecName: "config") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184608 master-2 kubenswrapper[4776]: I1011 10:33:45.184567 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184747 master-2 kubenswrapper[4776]: I1011 10:33:45.184656 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit" (OuterVolumeSpecName: "audit") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184848 master-2 kubenswrapper[4776]: I1011 10:33:45.184817 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.186125 master-2 kubenswrapper[4776]: I1011 10:33:45.186092 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:45.186419 master-2 kubenswrapper[4776]: I1011 10:33:45.186386 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:33:45.186551 master-2 kubenswrapper[4776]: I1011 10:33:45.186431 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:33:45.186999 master-2 kubenswrapper[4776]: I1011 10:33:45.186929 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs" (OuterVolumeSpecName: "kube-api-access-9qgjs") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "kube-api-access-9qgjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286268 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286358 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286399 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286417 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286429 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286441 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286452 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286465 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286477 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") on node \"master-2\" DevicePath \"\""
Oct 11 10:33:45.497733 master-1 kubenswrapper[4771]: I1011 10:33:45.497675 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:45.497733 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:45.497733 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:45.497733 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:45.498320 master-1 kubenswrapper[4771]: I1011 10:33:45.498260 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: I1011 10:33:45.969654 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:45.971033 master-2 kubenswrapper[4776]: I1011 10:33:45.969812 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:46.043877 master-2 kubenswrapper[4776]: I1011 10:33:46.043786 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm"
Oct 11 10:33:46.093401 master-2 kubenswrapper[4776]: I1011 10:33:46.093319 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"]
Oct 11 10:33:46.098794 master-2 kubenswrapper[4776]: I1011 10:33:46.098728 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"]
Oct 11 10:33:46.497738 master-1 kubenswrapper[4771]: I1011 10:33:46.497634 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:46.497738 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:46.497738 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:46.497738 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:46.497738 master-1 kubenswrapper[4771]: I1011 10:33:46.497728 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: I1011 10:33:46.971480 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: I1011 10:33:46.971560 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:47.497091 master-1 kubenswrapper[4771]: I1011 10:33:47.497010 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:47.497091 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:47.497091 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:47.497091 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:47.497552 master-1 kubenswrapper[4771]: I1011 10:33:47.497094 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:47.790029 master-1 kubenswrapper[4771]: I1011 10:33:47.789855 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log"
Oct 11 10:33:47.802701 master-2 kubenswrapper[4776]: I1011 10:33:47.802541 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log"
Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: I1011 10:33:47.970942 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: I1011 10:33:47.971023 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:48.071095 master-2 kubenswrapper[4776]: I1011 10:33:48.070895 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a156e42-88da-4ce6-9995-6865609e2711" path="/var/lib/kubelet/pods/3a156e42-88da-4ce6-9995-6865609e2711/volumes"
Oct 11 10:33:48.497535 master-1 kubenswrapper[4771]: I1011 10:33:48.497417 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:48.497535 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:48.497535 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:48.497535 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:48.497535 master-1 kubenswrapper[4771]: I1011 10:33:48.497509 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:48.822866 master-2 kubenswrapper[4776]: I1011 10:33:48.822796 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"]
Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: E1011 10:33:48.823107 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver"
Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: I1011 10:33:48.823127 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver"
Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: E1011 10:33:48.823150 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: I1011 10:33:48.823163 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:33:48.823447 master-2 kubenswrapper[4776]: E1011 10:33:48.823178 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="fix-audit-permissions"
Oct 11 10:33:48.823447 master-2 kubenswrapper[4776]: I1011 10:33:48.823268 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="fix-audit-permissions"
Oct 11 10:33:48.823575 master-2 kubenswrapper[4776]: I1011 10:33:48.823467 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:33:48.823575 master-2 kubenswrapper[4776]: I1011 10:33:48.823495 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver"
Oct 11 10:33:48.824924 master-2 kubenswrapper[4776]: I1011 10:33:48.824875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.829538 master-2 kubenswrapper[4776]: I1011 10:33:48.829482 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 11 10:33:48.829538 master-2 kubenswrapper[4776]: I1011 10:33:48.829529 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 11 10:33:48.829893 master-2 kubenswrapper[4776]: I1011 10:33:48.829735 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 11 10:33:48.829970 master-2 kubenswrapper[4776]: I1011 10:33:48.829900 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 11 10:33:48.830189 master-2 kubenswrapper[4776]: I1011 10:33:48.830112 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 11 10:33:48.831184 master-2 kubenswrapper[4776]: I1011 10:33:48.831064 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 11 10:33:48.831184 master-2 kubenswrapper[4776]: I1011 10:33:48.831087 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 11 10:33:48.831847 master-2 kubenswrapper[4776]: I1011 10:33:48.831783 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 11 10:33:48.832019 master-2 kubenswrapper[4776]: I1011 10:33:48.831890 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 11 10:33:48.840600 master-2 kubenswrapper[4776]: I1011 10:33:48.840528 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"]
Oct 11 10:33:48.843487 master-2 kubenswrapper[4776]: I1011 10:33:48.843416 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:33:48.937058 master-2 kubenswrapper[4776]: I1011 10:33:48.936974 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937058 master-2 kubenswrapper[4776]: I1011 10:33:48.937057 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937127 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937185 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937279 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937339 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937453 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937581 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937918 master-2 kubenswrapper[4776]: I1011 10:33:48.937740 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.937918 master-2 kubenswrapper[4776]: I1011 10:33:48.937779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: I1011 10:33:48.970008 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:48.970353 master-2 kubenswrapper[4776]: I1011 10:33:48.970109 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:49.039109 master-2 kubenswrapper[4776]: I1011 10:33:49.039019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039109 master-2 kubenswrapper[4776]: I1011 10:33:49.039104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039143 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039212 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039335 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039375 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039410 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039438 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039480 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039546 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.040177 master-2 kubenswrapper[4776]: I1011 10:33:49.040030 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.040553 master-2 kubenswrapper[4776]: I1011 10:33:49.040471 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.041085 master-2 kubenswrapper[4776]: I1011 10:33:49.041002 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.041988 master-2 kubenswrapper[4776]: I1011 10:33:49.041933 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.042172 master-2 kubenswrapper[4776]: I1011 10:33:49.042111 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.043618 master-2 kubenswrapper[4776]: I1011 10:33:49.043540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.044579 master-2 kubenswrapper[4776]: I1011 10:33:49.044516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.064025 master-2 kubenswrapper[4776]: I1011 10:33:49.063934 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.070884 master-2 kubenswrapper[4776]: I1011 10:33:49.070826 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.191826 master-2 kubenswrapper[4776]: I1011 10:33:49.191723 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:33:49.497409 master-1 kubenswrapper[4771]: I1011 10:33:49.497286 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:49.497409 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:49.497409 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:49.497409 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:49.498351 master-1 kubenswrapper[4771]: I1011 10:33:49.497439 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:49.628571 master-1 kubenswrapper[4771]: I1011 10:33:49.628444 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 11 10:33:49.628936 master-1 kubenswrapper[4771]: I1011 10:33:49.628581 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 11 10:33:49.668366 master-2 kubenswrapper[4776]: I1011 10:33:49.668298 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"]
Oct 11 10:33:49.676237 master-2 kubenswrapper[4776]: W1011 10:33:49.676168 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c500140_fe5c_4fa2_914b_bb1e0c5758ab.slice/crio-096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be WatchSource:0}: Error finding container 096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be: Status 404 returned error can't find the container with id 096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be
Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: I1011 10:33:49.969582 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:49.969815 master-2 kubenswrapper[4776]: I1011 10:33:49.969640 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:50.075378 master-2 kubenswrapper[4776]: I1011 10:33:50.075308 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c" exitCode=0
Oct 11 10:33:50.075378 master-2 kubenswrapper[4776]: I1011 10:33:50.075370 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c"}
Oct 11 10:33:50.075604 master-2 kubenswrapper[4776]: I1011 10:33:50.075406 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be"}
Oct 11 10:33:50.437070 master-1 kubenswrapper[4771]: I1011 10:33:50.436933 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 11 10:33:50.460047 master-1 kubenswrapper[4771]: I1011 10:33:50.459968 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-1" podUID="4bd63676-ce88-4eeb-9b7f-afaa4af19ad3"
Oct 11 10:33:50.460047 master-1 kubenswrapper[4771]: I1011 10:33:50.460037 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-1" podUID="4bd63676-ce88-4eeb-9b7f-afaa4af19ad3"
Oct 11 10:33:50.481534 master-1 kubenswrapper[4771]: I1011 10:33:50.481338 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 11 10:33:50.482678 master-1 kubenswrapper[4771]: I1011 10:33:50.482582 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-1"
Oct 11 10:33:50.491873 master-1 kubenswrapper[4771]: I1011 10:33:50.491758 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 11 10:33:50.497940 master-1 kubenswrapper[4771]: I1011 10:33:50.497883 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:50.497940 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:33:50.497940 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:33:50.497940 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:33:50.498784 master-1 kubenswrapper[4771]: I1011 10:33:50.497954 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:50.505896 master-1 kubenswrapper[4771]: I1011 10:33:50.505837 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 11 10:33:50.514119 master-1 kubenswrapper[4771]: I1011 10:33:50.514031 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 11 10:33:50.534449 master-1 kubenswrapper[4771]: W1011 10:33:50.534333 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1859aa05c2c75eb43d086c9ccd9c86.slice/crio-a4cfb375594ea793c8065e46bfa1f2102f58cd8b6a44fe0473d2e0310433bf19 WatchSource:0}: Error finding container a4cfb375594ea793c8065e46bfa1f2102f58cd8b6a44fe0473d2e0310433bf19: Status 404 returned error can't find the container with id a4cfb375594ea793c8065e46bfa1f2102f58cd8b6a44fe0473d2e0310433bf19
Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: I1011 10:33:50.970207 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: I1011 10:33:50.970267 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:33:51.083105 master-2 kubenswrapper[4776]: I1011 10:33:51.083025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5"}
Oct 11 
10:33:51.083105 master-2 kubenswrapper[4776]: I1011 10:33:51.083082 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f"} Oct 11 10:33:51.116658 master-2 kubenswrapper[4776]: I1011 10:33:51.116569 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podStartSLOduration=60.116549588 podStartE2EDuration="1m0.116549588s" podCreationTimestamp="2025-10-11 10:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:51.1136065 +0000 UTC m=+465.898033209" watchObservedRunningTime="2025-10-11 10:33:51.116549588 +0000 UTC m=+465.900976297" Oct 11 10:33:51.439085 master-1 kubenswrapper[4771]: I1011 10:33:51.438974 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="5df2d69fcce5aa4d0f872e664dab924a82b358ddfdc487a9796493b554db07ec" exitCode=0 Oct 11 10:33:51.439085 master-1 kubenswrapper[4771]: I1011 10:33:51.439050 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"5df2d69fcce5aa4d0f872e664dab924a82b358ddfdc487a9796493b554db07ec"} Oct 11 10:33:51.439564 master-1 kubenswrapper[4771]: I1011 10:33:51.439116 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"a4cfb375594ea793c8065e46bfa1f2102f58cd8b6a44fe0473d2e0310433bf19"} Oct 11 10:33:51.496616 master-1 kubenswrapper[4771]: I1011 10:33:51.496542 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:51.496616 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:51.496616 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:51.496616 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:51.496929 master-1 kubenswrapper[4771]: I1011 10:33:51.496643 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: I1011 10:33:51.969931 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:51.970606 master-2 kubenswrapper[4776]: I1011 10:33:51.970012 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:52.460559 master-1 kubenswrapper[4771]: I1011 10:33:52.460439 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="f36eed4b60a75dfc18926f5f7a62c7fe09c6ef035bfef9182c1502b7c4eeb07b" exitCode=0 Oct 11 10:33:52.460559 master-1 kubenswrapper[4771]: I1011 10:33:52.460525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"f36eed4b60a75dfc18926f5f7a62c7fe09c6ef035bfef9182c1502b7c4eeb07b"} Oct 11 10:33:52.497794 master-1 kubenswrapper[4771]: I1011 10:33:52.497694 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:52.497794 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:52.497794 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:52.497794 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:52.498717 master-1 kubenswrapper[4771]: I1011 10:33:52.498666 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: I1011 10:33:52.969470 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: I1011 10:33:52.969531 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:53.471091 master-1 
kubenswrapper[4771]: I1011 10:33:53.470980 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="0d2abececcc3750380edf401f993d45ec701aaab0b1cc115175ab53e903df0d6" exitCode=0 Oct 11 10:33:53.471091 master-1 kubenswrapper[4771]: I1011 10:33:53.471070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"0d2abececcc3750380edf401f993d45ec701aaab0b1cc115175ab53e903df0d6"} Oct 11 10:33:53.497400 master-1 kubenswrapper[4771]: I1011 10:33:53.497278 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:53.497400 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:53.497400 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:53.497400 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:53.497802 master-1 kubenswrapper[4771]: I1011 10:33:53.497424 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: I1011 10:33:53.970531 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: healthz check failed Oct 
11 10:33:53.971517 master-2 kubenswrapper[4776]: I1011 10:33:53.970668 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:54.193141 master-2 kubenswrapper[4776]: I1011 10:33:54.193060 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.193141 master-2 kubenswrapper[4776]: I1011 10:33:54.193131 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.206223 master-2 kubenswrapper[4776]: I1011 10:33:54.206168 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.485900 master-1 kubenswrapper[4771]: I1011 10:33:54.485789 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"ecbb0613c992785c9403e057fc0c874ad563e770ca35f25a2b4b2f7341f1c10c"} Oct 11 10:33:54.485900 master-1 kubenswrapper[4771]: I1011 10:33:54.485844 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"1b08bbe8a016cc9703a454b83b5ccaac8367e55a0f3e2612f07c89255c5b066b"} Oct 11 10:33:54.485900 master-1 kubenswrapper[4771]: I1011 10:33:54.485854 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"49bf7adabb62db980d637017833ab23f35546844d31309e50b509a3be2303a67"} Oct 11 10:33:54.497207 master-1 kubenswrapper[4771]: I1011 10:33:54.497100 4771 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:54.497207 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:54.497207 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:54.497207 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:54.497669 master-1 kubenswrapper[4771]: I1011 10:33:54.497211 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:54.629188 master-1 kubenswrapper[4771]: I1011 10:33:54.629109 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:33:54.629333 master-1 kubenswrapper[4771]: I1011 10:33:54.629205 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: I1011 10:33:54.969513 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:54.969582 
master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:54.969919 master-2 kubenswrapper[4776]: I1011 10:33:54.969609 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:55.116468 master-2 kubenswrapper[4776]: I1011 10:33:55.116427 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:55.193158 master-1 kubenswrapper[4771]: I1011 10:33:55.191850 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"] Oct 11 10:33:55.193158 master-1 kubenswrapper[4771]: I1011 10:33:55.192768 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" containerID="cri-o://5314d6ef2281ac080baefb268e1b24e3959c52d75eecf8bba9e60d0238801c00" gracePeriod=120 Oct 11 10:33:55.194070 master-1 kubenswrapper[4771]: I1011 10:33:55.193744 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://9b7973318d321c4747b9166204be01b90470f6b7ff6c1031063eb5d24ec05b0e" gracePeriod=120 Oct 11 10:33:55.501283 master-1 kubenswrapper[4771]: I1011 10:33:55.501094 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:55.501283 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:55.501283 master-1 
kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:55.501283 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:55.501283 master-1 kubenswrapper[4771]: I1011 10:33:55.501184 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:55.502639 master-1 kubenswrapper[4771]: I1011 10:33:55.502570 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerID="9b7973318d321c4747b9166204be01b90470f6b7ff6c1031063eb5d24ec05b0e" exitCode=0 Oct 11 10:33:55.502712 master-1 kubenswrapper[4771]: I1011 10:33:55.502661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerDied","Data":"9b7973318d321c4747b9166204be01b90470f6b7ff6c1031063eb5d24ec05b0e"} Oct 11 10:33:55.510273 master-1 kubenswrapper[4771]: I1011 10:33:55.510148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"2f39d1ed6551318e8799ea55ecdfbfe51ea2b9b7b26411631664f953b1d0e296"} Oct 11 10:33:55.510393 master-1 kubenswrapper[4771]: I1011 10:33:55.510324 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"84bbf7ab3fb66f6d01d7500d037317a4cb49a3eae4199b8937858e7e953c7fd3"} Oct 11 10:33:55.552629 master-1 kubenswrapper[4771]: I1011 10:33:55.552295 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-1" podStartSLOduration=5.552265338 podStartE2EDuration="5.552265338s" podCreationTimestamp="2025-10-11 10:33:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:55.550437464 +0000 UTC m=+467.524663935" watchObservedRunningTime="2025-10-11 10:33:55.552265338 +0000 UTC m=+467.526491809" Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: I1011 10:33:55.970503 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:55.971614 master-2 kubenswrapper[4776]: I1011 10:33:55.970646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: I1011 10:33:56.066539 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:56.066643 
master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:33:56.066643 master-1 kubenswrapper[4771]: I1011 10:33:56.066620 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:56.497212 master-1 kubenswrapper[4771]: I1011 10:33:56.497046 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:56.497212 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 
10:33:56.497212 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:56.497212 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:56.497212 master-1 kubenswrapper[4771]: I1011 10:33:56.497146 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: I1011 10:33:56.970197 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:56.970882 master-2 kubenswrapper[4776]: I1011 10:33:56.970276 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:57.498032 master-1 kubenswrapper[4771]: I1011 10:33:57.497964 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:57.498032 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:57.498032 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:57.498032 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:57.499254 master-1 kubenswrapper[4771]: I1011 10:33:57.498056 4771 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: I1011 10:33:57.976207 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:57.977010 master-2 kubenswrapper[4776]: I1011 10:33:57.976295 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:58.498052 master-1 kubenswrapper[4771]: I1011 10:33:58.497906 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:58.498052 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:58.498052 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:58.498052 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:58.499263 master-1 kubenswrapper[4771]: I1011 10:33:58.498057 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: I1011 10:33:58.969933 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:58.970291 master-2 kubenswrapper[4776]: I1011 10:33:58.970046 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:59.496791 master-1 kubenswrapper[4771]: I1011 10:33:59.496671 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:59.496791 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:33:59.496791 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:33:59.496791 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:33:59.496791 master-1 kubenswrapper[4771]: I1011 10:33:59.496782 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:59.661752 master-1 kubenswrapper[4771]: I1011 10:33:59.661696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1" Oct 11 
10:33:59.970342 master-2 kubenswrapper[4776]: I1011 10:33:59.970156 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:59.971303 master-2 kubenswrapper[4776]: I1011 10:33:59.970361 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:00.498698 master-1 kubenswrapper[4771]: I1011 10:34:00.498564 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:00.498698 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:34:00.498698 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:34:00.498698 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:34:00.498698 master-1 kubenswrapper[4771]: I1011 10:34:00.498666 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:00.506974 master-1 kubenswrapper[4771]: I1011 10:34:00.506886 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 11 10:34:00.506974 master-1 
kubenswrapper[4771]: I1011 10:34:00.506977 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: I1011 10:34:00.970369 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:00.971478 master-2 kubenswrapper[4776]: I1011 10:34:00.970477 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: I1011 10:34:01.066649 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:34:01.066699 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:34:01.068253 master-1 kubenswrapper[4771]: I1011 10:34:01.067818 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:01.497292 master-1 kubenswrapper[4771]: I1011 10:34:01.497083 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:01.497292 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:01.497292 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:01.497292 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:01.497292 master-1 kubenswrapper[4771]: I1011 10:34:01.497183 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: I1011 10:34:01.970431 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:01.971665 master-2 kubenswrapper[4776]: I1011 10:34:01.970520 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:02.498161 master-1 kubenswrapper[4771]: I1011 10:34:02.498055 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:02.498161 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:02.498161 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:02.498161 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:02.498161 master-1 kubenswrapper[4771]: I1011 10:34:02.498141 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: I1011 10:34:02.971603 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:02.972268 master-2 kubenswrapper[4776]: I1011 10:34:02.971722 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:03.497085 master-1 kubenswrapper[4771]: I1011 10:34:03.496975 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:03.497085 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:03.497085 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:03.497085 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:03.497085 master-1 kubenswrapper[4771]: I1011 10:34:03.497052 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: I1011 10:34:03.970444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:03.970798 master-2 kubenswrapper[4776]: I1011 10:34:03.970508 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:04.496702 master-1 kubenswrapper[4771]: I1011 10:34:04.496624 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:04.496702 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:04.496702 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:04.496702 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:04.496702 master-1 kubenswrapper[4771]: I1011 10:34:04.496697 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: I1011 10:34:04.969210 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:04.969814 master-2 kubenswrapper[4776]: I1011 10:34:04.969303 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:05.497686 master-1 kubenswrapper[4771]: I1011 10:34:05.497569 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:05.497686 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:05.497686 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:05.497686 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:05.497686 master-1 kubenswrapper[4771]: I1011 10:34:05.497645 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: I1011 10:34:05.969937 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:05.970731 master-2 kubenswrapper[4776]: I1011 10:34:05.970030 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: I1011 10:34:06.067198 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:34:06.067284 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:34:06.068284 master-1 kubenswrapper[4771]: I1011 10:34:06.067308 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:06.068284 master-1 kubenswrapper[4771]: I1011 10:34:06.067569 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-qpmws"
Oct 11 10:34:06.497961 master-1 kubenswrapper[4771]: I1011 10:34:06.497739 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:06.497961 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:06.497961 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:06.497961 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:06.497961 master-1 kubenswrapper[4771]: I1011 10:34:06.497859 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: I1011 10:34:06.970095 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: I1011 10:34:06.970199 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:07.497569 master-1 kubenswrapper[4771]: I1011 10:34:07.497460 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:07.497569 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:07.497569 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:07.497569 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:07.497569 master-1 kubenswrapper[4771]: I1011 10:34:07.497569 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: I1011 10:34:07.970154 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:07.971375 master-2 kubenswrapper[4776]: I1011 10:34:07.970247 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:08.498140 master-1 kubenswrapper[4771]: I1011 10:34:08.498011 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:08.498140 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:08.498140 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:08.498140 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:08.499617 master-1 kubenswrapper[4771]: I1011 10:34:08.498143 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: I1011 10:34:08.969455 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:08.969801 master-2 kubenswrapper[4776]: I1011 10:34:08.969546 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:09.497870 master-1 kubenswrapper[4771]: I1011 10:34:09.497730 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:09.497870 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:09.497870 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:09.497870 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:09.497870 master-1 kubenswrapper[4771]: I1011 10:34:09.497836 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: I1011 10:34:09.969429 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:09.970136 master-2 kubenswrapper[4776]: I1011 10:34:09.969537 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:10.497505 master-1 kubenswrapper[4771]: I1011 10:34:10.497336 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:10.497505 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:10.497505 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:10.497505 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:10.497505 master-1 kubenswrapper[4771]: I1011 10:34:10.497472 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:10.526574 master-1 kubenswrapper[4771]: I1011 10:34:10.526447 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1"
Oct 11 10:34:10.543950 master-1 kubenswrapper[4771]: I1011 10:34:10.543883 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1"
Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: I1011 10:34:10.970509 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:10.971203 master-2 kubenswrapper[4776]: I1011 10:34:10.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: I1011 10:34:11.066294 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:34:11.066396 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:34:11.067920 master-1 kubenswrapper[4771]: I1011 10:34:11.066408 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:11.497672 master-1 kubenswrapper[4771]: I1011 10:34:11.497508 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:11.497672 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:11.497672 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:11.497672 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:11.497672 master-1 kubenswrapper[4771]: I1011 10:34:11.497598 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: I1011 10:34:11.969554 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:11.970251 master-2 kubenswrapper[4776]: I1011 10:34:11.970207 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:12.497574 master-1 kubenswrapper[4771]: I1011 10:34:12.497480 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:12.497574 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:12.497574 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:12.497574 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:12.497574 master-1 kubenswrapper[4771]: I1011 10:34:12.497560 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: I1011 10:34:12.969403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: I1011 10:34:12.969466 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:13.498106 master-1 kubenswrapper[4771]: I1011 10:34:13.497973 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:13.498106 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:13.498106 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:13.498106 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:13.499449 master-1 kubenswrapper[4771]: I1011 10:34:13.498098 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: I1011 10:34:13.970192 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: I1011 10:34:13.970282 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:14.497077 master-1 kubenswrapper[4771]: I1011 10:34:14.496997 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:14.497077 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:14.497077 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:14.497077 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:14.497077 master-1 kubenswrapper[4771]: I1011 10:34:14.497086 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: I1011 10:34:14.969978 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: I1011 10:34:14.970054 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:15.420022 master-1 kubenswrapper[4771]: I1011 10:34:15.419936 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"]
Oct 11 10:34:15.420753 master-1 kubenswrapper[4771]: I1011 10:34:15.420610 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.435564 master-1 kubenswrapper[4771]: I1011 10:34:15.435468 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"]
Oct 11 10:34:15.497195 master-1 kubenswrapper[4771]: I1011 10:34:15.497085 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:15.497195 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:34:15.497195 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:34:15.497195 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:34:15.497195 master-1 kubenswrapper[4771]: I1011 10:34:15.497186 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:15.534569 master-1 kubenswrapper[4771]: I1011 10:34:15.534445 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.534904 master-1 kubenswrapper[4771]: I1011 10:34:15.534765 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.534904 master-1 kubenswrapper[4771]: I1011 10:34:15.534807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.636789 master-1 kubenswrapper[4771]: I1011 10:34:15.636669 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.636789 master-1 kubenswrapper[4771]: I1011 10:34:15.636755 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.636789 master-1 kubenswrapper[4771]: I1011 10:34:15.636809 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.637268 master-1 kubenswrapper[4771]: I1011 10:34:15.636977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.637268 master-1 kubenswrapper[4771]: I1011 10:34:15.636973 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.664663 master-1 kubenswrapper[4771]: I1011 10:34:15.664608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access\") pod \"installer-2-master-1\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.748584 master-1 kubenswrapper[4771]: I1011 10:34:15.748343 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1"
Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: I1011 10:34:15.969741 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:34:15.970419 master-2 kubenswrapper[4776]: I1011 10:34:15.969856 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: I1011 10:34:16.065698 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11
10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:16.065757 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:16.067041 master-1 kubenswrapper[4771]: I1011 10:34:16.065779 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:16.230261 master-1 kubenswrapper[4771]: I1011 10:34:16.229800 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"] Oct 11 10:34:16.497855 master-1 kubenswrapper[4771]: I1011 10:34:16.497743 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:16.497855 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 
10:34:16.497855 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:34:16.497855 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:34:16.499089 master-1 kubenswrapper[4771]: I1011 10:34:16.497863 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:16.499089 master-1 kubenswrapper[4771]: I1011 10:34:16.497950 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:34:16.499282 master-1 kubenswrapper[4771]: I1011 10:34:16.499106 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2fd6e0cb14ecdcadbf2571f6d4dd1d2a4a1e6cf999fc333d09b9fc98b284b780"} pod="openshift-ingress/router-default-5ddb89f76-z5t6x" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:34:16.499282 master-1 kubenswrapper[4771]: I1011 10:34:16.499171 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" containerID="cri-o://2fd6e0cb14ecdcadbf2571f6d4dd1d2a4a1e6cf999fc333d09b9fc98b284b780" gracePeriod=3600 Oct 11 10:34:16.655268 master-1 kubenswrapper[4771]: I1011 10:34:16.655205 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c1c3b2b9-8880-496b-88ed-9706cd8ee23d","Type":"ContainerStarted","Data":"781880873ca29705a429a8abc16c37af29927d033898ec8fedabee8745269269"} Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: I1011 10:34:16.970632 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.970785 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.970868 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.971816 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.971884 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346" gracePeriod=3600 Oct 11 10:34:17.661817 master-1 kubenswrapper[4771]: I1011 10:34:17.661735 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c1c3b2b9-8880-496b-88ed-9706cd8ee23d","Type":"ContainerStarted","Data":"d5b95693856e76475228e56822cf59bc988be47b5715c02d7d3f81ff2fa1bb74"} 
Oct 11 10:34:17.688328 master-1 kubenswrapper[4771]: I1011 10:34:17.688207 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-1" podStartSLOduration=2.68818185 podStartE2EDuration="2.68818185s" podCreationTimestamp="2025-10-11 10:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:34:17.683469381 +0000 UTC m=+489.657695892" watchObservedRunningTime="2025-10-11 10:34:17.68818185 +0000 UTC m=+489.662408331" Oct 11 10:34:17.786987 master-1 kubenswrapper[4771]: I1011 10:34:17.786892 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:34:17.793744 master-2 kubenswrapper[4776]: I1011 10:34:17.793602 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:34:19.003863 master-2 kubenswrapper[4776]: I1011 10:34:19.003735 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.005432 master-2 kubenswrapper[4776]: I1011 10:34:19.005359 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.048481 master-2 kubenswrapper[4776]: I1011 10:34:19.022981 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.165332 master-2 kubenswrapper[4776]: I1011 10:34:19.165263 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.165332 master-2 kubenswrapper[4776]: I1011 10:34:19.165340 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.165568 master-2 kubenswrapper[4776]: I1011 10:34:19.165481 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267121 master-2 kubenswrapper[4776]: I1011 10:34:19.266975 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267135 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267173 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267465 master-2 kubenswrapper[4776]: I1011 10:34:19.267315 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.293558 master-2 kubenswrapper[4776]: I1011 10:34:19.293461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.365240 master-2 kubenswrapper[4776]: I1011 10:34:19.365166 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.829544 master-2 kubenswrapper[4776]: I1011 10:34:19.829487 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.842734 master-2 kubenswrapper[4776]: W1011 10:34:19.842640 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebeec22d_9309_4efd_bbc0_f44c750a258c.slice/crio-6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af WatchSource:0}: Error finding container 6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af: Status 404 returned error can't find the container with id 6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af Oct 11 10:34:20.286259 master-2 kubenswrapper[4776]: I1011 10:34:20.286173 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerStarted","Data":"25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1"} Oct 11 10:34:20.286259 master-2 kubenswrapper[4776]: I1011 10:34:20.286234 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerStarted","Data":"6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af"} Oct 11 10:34:20.309345 master-2 kubenswrapper[4776]: I1011 10:34:20.309239 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-5-master-2" podStartSLOduration=2.309213625 podStartE2EDuration="2.309213625s" podCreationTimestamp="2025-10-11 10:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:34:20.303850683 +0000 UTC m=+495.088277462" watchObservedRunningTime="2025-10-11 10:34:20.309213625 +0000 UTC m=+495.093640334" Oct 11 
10:34:21.064871 master-1 kubenswrapper[4771]: I1011 10:34:21.064775 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: [-]shutdown failed: reason 
withheld Oct 11 10:34:21.064871 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:21.065813 master-1 kubenswrapper[4771]: I1011 10:34:21.064878 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:24.323111 master-2 kubenswrapper[4776]: I1011 10:34:24.323029 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: I1011 10:34:26.064460 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:26.064583 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:26.066589 master-1 kubenswrapper[4771]: I1011 10:34:26.064583 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: I1011 10:34:31.066643 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: 
[+]poststarthook/max-in-flight-filter ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:31.066752 master-1 kubenswrapper[4771]: I1011 10:34:31.066733 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: I1011 10:34:36.064396 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 
10:34:36.064541 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:36.064541 master-1 kubenswrapper[4771]: I1011 10:34:36.064491 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:41.064815 master-1 
kubenswrapper[4771]: I1011 10:34:41.064667 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:41.064815 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:41.064815 
master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:41.067169 master-1 kubenswrapper[4771]: I1011 10:34:41.067119 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:42.481193 master-2 kubenswrapper[4776]: E1011 10:34:42.480922 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:34:43.471706 master-2 kubenswrapper[4776]: I1011 10:34:43.471598 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:34:43.762505 master-2 kubenswrapper[4776]: I1011 10:34:43.762308 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:34:43.763030 master-2 kubenswrapper[4776]: E1011 10:34:43.762626 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:34:43.763030 master-2 kubenswrapper[4776]: E1011 10:34:43.762781 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:36:45.762744266 +0000 UTC m=+640.547171015 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:34:45.498317 master-2 kubenswrapper[4776]: E1011 10:34:45.498184 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:34:46.059561 master-1 kubenswrapper[4771]: I1011 10:34:46.059432 4771 patch_prober.go:28] interesting pod/apiserver-777cc846dc-qpmws container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.48:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.48:8443: connect: connection refused" start-of-body= Oct 11 10:34:46.059561 master-1 kubenswrapper[4771]: I1011 10:34:46.059552 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.48:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.48:8443: connect: connection refused" Oct 11 10:34:46.496580 master-2 kubenswrapper[4776]: I1011 10:34:46.496492 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:34:46.711710 master-2 kubenswrapper[4776]: I1011 10:34:46.711629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:34:46.713184 master-2 kubenswrapper[4776]: E1011 10:34:46.711823 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:34:46.713184 master-2 kubenswrapper[4776]: E1011 10:34:46.712157 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:36:48.712128363 +0000 UTC m=+643.496555102 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:34:46.861094 master-1 kubenswrapper[4771]: I1011 10:34:46.861028 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerID="5314d6ef2281ac080baefb268e1b24e3959c52d75eecf8bba9e60d0238801c00" exitCode=0 Oct 11 10:34:46.861311 master-1 kubenswrapper[4771]: I1011 10:34:46.861096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerDied","Data":"5314d6ef2281ac080baefb268e1b24e3959c52d75eecf8bba9e60d0238801c00"} Oct 11 10:34:46.861311 master-1 kubenswrapper[4771]: I1011 10:34:46.861143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" event={"ID":"e2fb9636-0787-426e-bd5e-cba0ea823b2b","Type":"ContainerDied","Data":"a41a821c8fbcdc8c024fe125a36dfc655949ba099ab1bab4420d6e97047ce118"} Oct 11 10:34:46.861311 master-1 kubenswrapper[4771]: I1011 10:34:46.861168 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a41a821c8fbcdc8c024fe125a36dfc655949ba099ab1bab4420d6e97047ce118" Oct 11 10:34:46.876501 master-1 kubenswrapper[4771]: I1011 10:34:46.876282 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-qpmws" Oct 11 10:34:46.926438 master-1 kubenswrapper[4771]: I1011 10:34:46.926321 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"] Oct 11 10:34:46.926888 master-1 kubenswrapper[4771]: E1011 10:34:46.926822 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" Oct 11 10:34:46.926888 master-1 kubenswrapper[4771]: I1011 10:34:46.926877 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" Oct 11 10:34:46.927059 master-1 kubenswrapper[4771]: E1011 10:34:46.926903 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="fix-audit-permissions" Oct 11 10:34:46.927059 master-1 kubenswrapper[4771]: I1011 10:34:46.926953 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="fix-audit-permissions" Oct 11 10:34:46.927059 master-1 kubenswrapper[4771]: E1011 10:34:46.926979 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver-check-endpoints" Oct 11 10:34:46.927059 master-1 kubenswrapper[4771]: I1011 10:34:46.926998 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver-check-endpoints" Oct 11 10:34:46.927329 master-1 kubenswrapper[4771]: I1011 10:34:46.927230 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" containerName="openshift-apiserver" Oct 11 10:34:46.927329 master-1 kubenswrapper[4771]: I1011 10:34:46.927262 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" 
containerName="openshift-apiserver-check-endpoints" Oct 11 10:34:46.928722 master-1 kubenswrapper[4771]: I1011 10:34:46.928670 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:46.936847 master-1 kubenswrapper[4771]: I1011 10:34:46.936786 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"] Oct 11 10:34:46.961560 master-1 kubenswrapper[4771]: I1011 10:34:46.961435 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.961869 master-1 kubenswrapper[4771]: I1011 10:34:46.961638 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpdjh\" (UniqueName: \"kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.961869 master-1 kubenswrapper[4771]: I1011 10:34:46.961755 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.961869 master-1 kubenswrapper[4771]: I1011 10:34:46.961835 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962214 master-1 kubenswrapper[4771]: I1011 10:34:46.961882 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit" (OuterVolumeSpecName: "audit") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:34:46.962214 master-1 kubenswrapper[4771]: I1011 10:34:46.961904 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962214 master-1 kubenswrapper[4771]: I1011 10:34:46.961981 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962214 master-1 kubenswrapper[4771]: I1011 10:34:46.962035 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962214 master-1 kubenswrapper[4771]: I1011 10:34:46.962113 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962577 master-1 kubenswrapper[4771]: I1011 10:34:46.962216 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962577 master-1 kubenswrapper[4771]: I1011 10:34:46.962378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:34:46.962577 master-1 kubenswrapper[4771]: I1011 10:34:46.962401 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962577 master-1 kubenswrapper[4771]: I1011 10:34:46.962507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config\") pod \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\" (UID: \"e2fb9636-0787-426e-bd5e-cba0ea823b2b\") " Oct 11 10:34:46.962700 master-1 kubenswrapper[4771]: I1011 10:34:46.962618 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:46.962747 master-1 kubenswrapper[4771]: I1011 10:34:46.962730 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:46.963075 master-1 kubenswrapper[4771]: I1011 10:34:46.963031 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:46.963075 master-1 kubenswrapper[4771]: I1011 10:34:46.963070 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-audit\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:46.963142 master-1 kubenswrapper[4771]: I1011 10:34:46.963123 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-image-import-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:46.963182 master-1 kubenswrapper[4771]: I1011 10:34:46.963146 4771 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2fb9636-0787-426e-bd5e-cba0ea823b2b-node-pullsecrets\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:46.963320 master-1 kubenswrapper[4771]: I1011 10:34:46.963295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:34:46.963443 master-1 kubenswrapper[4771]: I1011 10:34:46.963350 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config" (OuterVolumeSpecName: "config") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:34:46.963833 master-1 kubenswrapper[4771]: I1011 10:34:46.963754 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:34:46.965633 master-1 kubenswrapper[4771]: I1011 10:34:46.965541 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh" (OuterVolumeSpecName: "kube-api-access-dpdjh") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "kube-api-access-dpdjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:34:46.966855 master-1 kubenswrapper[4771]: I1011 10:34:46.966814 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:34:46.967994 master-1 kubenswrapper[4771]: I1011 10:34:46.967940 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:34:46.974863 master-1 kubenswrapper[4771]: I1011 10:34:46.974807 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "e2fb9636-0787-426e-bd5e-cba0ea823b2b" (UID: "e2fb9636-0787-426e-bd5e-cba0ea823b2b"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:34:47.065158 master-1 kubenswrapper[4771]: I1011 10:34:47.065073 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065177 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065216 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065390 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065515 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78cw\" (UniqueName: \"kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: 
I1011 10:34:47.065641 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065665 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065686 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065723 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065840 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpdjh\" (UniqueName: \"kubernetes.io/projected/e2fb9636-0787-426e-bd5e-cba0ea823b2b-kube-api-access-dpdjh\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065858 4771 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065872 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.065957 master-1 kubenswrapper[4771]: I1011 10:34:47.065885 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.066486 master-1 kubenswrapper[4771]: I1011 10:34:47.066009 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.066486 master-1 kubenswrapper[4771]: I1011 10:34:47.066087 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fb9636-0787-426e-bd5e-cba0ea823b2b-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.066486 master-1 kubenswrapper[4771]: I1011 10:34:47.066109 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2fb9636-0787-426e-bd5e-cba0ea823b2b-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167500 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " 
pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167626 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78cw\" (UniqueName: \"kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167671 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167691 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: 
\"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.167686 master-1 kubenswrapper[4771]: I1011 10:34:47.167707 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.167734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.167758 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.167794 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.167816 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config\") pod 
\"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.167868 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.168172 master-1 kubenswrapper[4771]: I1011 10:34:47.168111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.169157 master-1 kubenswrapper[4771]: I1011 10:34:47.168770 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.169157 master-1 kubenswrapper[4771]: I1011 10:34:47.169014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.169157 master-1 kubenswrapper[4771]: I1011 10:34:47.169109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: 
\"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.169827 master-1 kubenswrapper[4771]: I1011 10:34:47.169776 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.170806 master-1 kubenswrapper[4771]: I1011 10:34:47.170718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.171577 master-1 kubenswrapper[4771]: I1011 10:34:47.171521 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.173483 master-1 kubenswrapper[4771]: I1011 10:34:47.173426 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:34:47.173827 master-1 kubenswrapper[4771]: I1011 10:34:47.173777 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: 
\"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:47.199986 master-1 kubenswrapper[4771]: I1011 10:34:47.199918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78cw\" (UniqueName: \"kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw\") pod \"apiserver-7845cf54d8-g8x5z\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:47.249823 master-1 kubenswrapper[4771]: I1011 10:34:47.249740 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:47.724746 master-1 kubenswrapper[4771]: I1011 10:34:47.724612 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"]
Oct 11 10:34:47.730860 master-1 kubenswrapper[4771]: W1011 10:34:47.730780 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2bf529d_094c_4406_8ce6_890cf8c0b840.slice/crio-e398827cf779d365dfc4e6c2443dd2f776caa9a8ba75c41d00aafc513ef28957 WatchSource:0}: Error finding container e398827cf779d365dfc4e6c2443dd2f776caa9a8ba75c41d00aafc513ef28957: Status 404 returned error can't find the container with id e398827cf779d365dfc4e6c2443dd2f776caa9a8ba75c41d00aafc513ef28957
Oct 11 10:34:47.796698 master-1 kubenswrapper[4771]: I1011 10:34:47.796621 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log"
Oct 11 10:34:47.805321 master-2 kubenswrapper[4776]: I1011 10:34:47.805260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log"
Oct 11 10:34:47.872946 master-1 kubenswrapper[4771]: I1011 10:34:47.872856 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerStarted","Data":"e398827cf779d365dfc4e6c2443dd2f776caa9a8ba75c41d00aafc513ef28957"}
Oct 11 10:34:47.872946 master-1 kubenswrapper[4771]: I1011 10:34:47.872917 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-qpmws"
Oct 11 10:34:47.927150 master-1 kubenswrapper[4771]: I1011 10:34:47.927076 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"]
Oct 11 10:34:47.930165 master-1 kubenswrapper[4771]: I1011 10:34:47.930072 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-qpmws"]
Oct 11 10:34:48.447462 master-1 kubenswrapper[4771]: I1011 10:34:48.447328 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2fb9636-0787-426e-bd5e-cba0ea823b2b" path="/var/lib/kubelet/pods/e2fb9636-0787-426e-bd5e-cba0ea823b2b/volumes"
Oct 11 10:34:48.881563 master-1 kubenswrapper[4771]: I1011 10:34:48.881469 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerID="5a44ec551f4491e724d147c13cc98b993a3968bac1f8f715ba1d91a8129c8004" exitCode=0
Oct 11 10:34:48.881563 master-1 kubenswrapper[4771]: I1011 10:34:48.881554 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerDied","Data":"5a44ec551f4491e724d147c13cc98b993a3968bac1f8f715ba1d91a8129c8004"}
Oct 11 10:34:49.891467 master-1 kubenswrapper[4771]: I1011 10:34:49.891392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerStarted","Data":"a0772db7a40ce6f228f65f235a6668a5f2f1781a4f227000cf9ad01206d856f2"}
Oct 11 10:34:49.892318 master-1 kubenswrapper[4771]: I1011 10:34:49.891478 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerStarted","Data":"2ccd5ea4ca8c2b32e04ef7419d2c1c1ac0971dd1b18e1a37cd16058b70e5a98c"}
Oct 11 10:34:49.923786 master-1 kubenswrapper[4771]: I1011 10:34:49.923716 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podStartSLOduration=54.923697914 podStartE2EDuration="54.923697914s" podCreationTimestamp="2025-10-11 10:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:34:49.92219166 +0000 UTC m=+521.896418141" watchObservedRunningTime="2025-10-11 10:34:49.923697914 +0000 UTC m=+521.897924365"
Oct 11 10:34:51.554632 master-2 kubenswrapper[4776]: I1011 10:34:51.554562 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-2"]
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: E1011 10:34:51.554851 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554907 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl" containerID="cri-o://1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" gracePeriod=30
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554955 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz" containerID="cri-o://352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" gracePeriod=30
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554975 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev" containerID="cri-o://2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" gracePeriod=30
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554982 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics" containerID="cri-o://e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" gracePeriod=30
Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.555176 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" containerID="cri-o://4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" gracePeriod=30
Oct 11 10:34:51.565988 master-2 kubenswrapper[4776]: I1011 10:34:51.565630 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"]
Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: E1011 10:34:51.566130 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-resources-copy"
Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: I1011 10:34:51.566158 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-resources-copy"
Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: E1011 10:34:51.566175 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev"
Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566187 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev"
Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: E1011 10:34:51.566207 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-ensure-env-vars"
Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566219 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-ensure-env-vars"
Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: E1011 10:34:51.566236 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="setup"
Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566245 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="setup"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566259 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566306 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566327 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566339 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566352 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566363 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566378 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566388 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566404 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566415 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics"
Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566428 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566440 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566627 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566644 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566656 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566673 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566690 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566727 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd"
Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566746 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl"
Oct 11 10:34:51.596590 master-2 kubenswrapper[4776]: I1011 10:34:51.596531 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.596668 master-2 kubenswrapper[4776]: I1011 10:34:51.596608 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.596668 master-2 kubenswrapper[4776]: I1011 10:34:51.596638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.596776 master-2 kubenswrapper[4776]: I1011 10:34:51.596661 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.596810 master-2 kubenswrapper[4776]: I1011 10:34:51.596772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.596844 master-2 kubenswrapper[4776]: I1011 10:34:51.596810 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698473 master-2 kubenswrapper[4776]: I1011 10:34:51.698385 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698619 master-2 kubenswrapper[4776]: I1011 10:34:51.698562 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698619 master-2 kubenswrapper[4776]: I1011 10:34:51.698604 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698694 master-2 kubenswrapper[4776]: I1011 10:34:51.698665 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698758 master-2 kubenswrapper[4776]: I1011 10:34:51.698738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698799 master-2 kubenswrapper[4776]: I1011 10:34:51.698782 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698849 master-2 kubenswrapper[4776]: I1011 10:34:51.698818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698905 master-2 kubenswrapper[4776]: I1011 10:34:51.698873 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698954 master-2 kubenswrapper[4776]: I1011 10:34:51.698910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.698987 master-2 kubenswrapper[4776]: I1011 10:34:51.698928 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.699019 master-2 kubenswrapper[4776]: I1011 10:34:51.698937 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:51.699050 master-2 kubenswrapper[4776]: I1011 10:34:51.698888 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2"
Oct 11 10:34:52.250682 master-1 kubenswrapper[4771]: I1011 10:34:52.250592 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:52.250682 master-1 kubenswrapper[4771]: I1011 10:34:52.250700 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:52.262800 master-1 kubenswrapper[4771]: I1011 10:34:52.262740 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:52.547299 master-2 kubenswrapper[4776]: I1011 10:34:52.546053 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log"
Oct 11 10:34:52.547299 master-2 kubenswrapper[4776]: I1011 10:34:52.546590 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log"
Oct 11 10:34:52.548884 master-2 kubenswrapper[4776]: I1011 10:34:52.548098 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log"
Oct 11 10:34:52.550390 master-2 kubenswrapper[4776]: I1011 10:34:52.550331 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" exitCode=2
Oct 11 10:34:52.550390 master-2 kubenswrapper[4776]: I1011 10:34:52.550363 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" exitCode=0
Oct 11 10:34:52.550587 master-2 kubenswrapper[4776]: I1011 10:34:52.550453 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" exitCode=2
Oct 11 10:34:52.555549 master-2 kubenswrapper[4776]: I1011 10:34:52.555458 4776 generic.go:334] "Generic (PLEG): container finished" podID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerID="25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1" exitCode=0
Oct 11 10:34:52.556286 master-2 kubenswrapper[4776]: I1011 10:34:52.555544 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerDied","Data":"25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1"}
Oct 11 10:34:52.922096 master-1 kubenswrapper[4771]: I1011 10:34:52.922032 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:34:53.032000 master-2 kubenswrapper[4776]: I1011 10:34:53.031945 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:34:53.032401 master-2 kubenswrapper[4776]: I1011 10:34:53.032359 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:34:53.951169 master-2 kubenswrapper[4776]: I1011 10:34:53.951084 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-2"
Oct 11 10:34:54.034636 master-2 kubenswrapper[4776]: I1011 10:34:54.034534 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") "
Oct 11 10:34:54.034636 master-2 kubenswrapper[4776]: I1011 10:34:54.034618 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:34:54.035167 master-2 kubenswrapper[4776]: I1011 10:34:54.034836 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") "
Oct 11 10:34:54.035167 master-2 kubenswrapper[4776]: I1011 10:34:54.034909 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") "
Oct 11 10:34:54.035309 master-2 kubenswrapper[4776]: I1011 10:34:54.035167 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:34:54.035611 master-2 kubenswrapper[4776]: I1011 10:34:54.035544 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:34:54.035611 master-2 kubenswrapper[4776]: I1011 10:34:54.035595 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:34:54.037357 master-2 kubenswrapper[4776]: I1011 10:34:54.037315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:34:54.137503 master-2 kubenswrapper[4776]: I1011 10:34:54.137346 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerDied","Data":"6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af"}
Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573871 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af"
Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573888 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-2"
Oct 11 10:34:54.821447 master-1 kubenswrapper[4771]: I1011 10:34:54.821344 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 11 10:34:54.822291 master-1 kubenswrapper[4771]: I1011 10:34:54.821963 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver" containerID="cri-o://7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea" gracePeriod=135
Oct 11 10:34:54.822291 master-1 kubenswrapper[4771]: I1011 10:34:54.822017 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints" containerID="cri-o://49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd" gracePeriod=135
Oct 11 10:34:54.822291 master-1 kubenswrapper[4771]: I1011 10:34:54.822041 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10" gracePeriod=135
Oct 11 10:34:54.822291 master-1 kubenswrapper[4771]: I1011 10:34:54.822131 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7" gracePeriod=135
Oct 11 10:34:54.822716 master-1 kubenswrapper[4771]: I1011 10:34:54.822328 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1" gracePeriod=135
Oct 11 10:34:54.825037 master-1 kubenswrapper[4771]: I1011 10:34:54.824970 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 11 10:34:54.825324 master-1 kubenswrapper[4771]: E1011 10:34:54.825272 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="setup"
Oct 11 10:34:54.825324 master-1 kubenswrapper[4771]: I1011 10:34:54.825304 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="setup"
Oct 11 10:34:54.825324 master-1 kubenswrapper[4771]: E1011 10:34:54.825320 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver"
Oct 11 10:34:54.825324 master-1 kubenswrapper[4771]: I1011 10:34:54.825333 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: E1011 10:34:54.825351 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825371 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: E1011 10:34:54.825438 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825451 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: E1011 10:34:54.825468 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825482 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: E1011 10:34:54.825498 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825511 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825642 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825663 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825682 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825704 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer"
Oct 11 10:34:54.825736 master-1 kubenswrapper[4771]: I1011 10:34:54.825716 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz"
Oct 11 10:34:54.873118 master-1 kubenswrapper[4771]: I1011 10:34:54.873031 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.873466 master-1 kubenswrapper[4771]: I1011 10:34:54.873201 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.873466 master-1 kubenswrapper[4771]: I1011 10:34:54.873335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.990778 master-1 kubenswrapper[4771]: I1011 10:34:54.990641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.990778 master-1 kubenswrapper[4771]: I1011 10:34:54.990758 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.990962 master-1 kubenswrapper[4771]: I1011 10:34:54.990833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.990962 master-1 kubenswrapper[4771]: I1011 10:34:54.990837 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.991075 master-1 kubenswrapper[4771]: I1011 10:34:54.991015 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:54.991166 master-1 kubenswrapper[4771]: I1011 10:34:54.991100 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:34:55.933727 master-1 kubenswrapper[4771]: I1011 10:34:55.933635 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1c3b2b9-8880-496b-88ed-9706cd8ee23d" containerID="d5b95693856e76475228e56822cf59bc988be47b5715c02d7d3f81ff2fa1bb74" exitCode=0
Oct 11 10:34:55.934389 master-1 kubenswrapper[4771]: I1011 10:34:55.933725 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c1c3b2b9-8880-496b-88ed-9706cd8ee23d","Type":"ContainerDied","Data":"d5b95693856e76475228e56822cf59bc988be47b5715c02d7d3f81ff2fa1bb74"}
Oct 11 10:34:55.940641 master-1 kubenswrapper[4771]: I1011 10:34:55.940574 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log"
Oct 11 10:34:55.941981 master-1 kubenswrapper[4771]: I1011 10:34:55.941940 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd" exitCode=0
Oct 11 10:34:55.941981 master-1 kubenswrapper[4771]: I1011 10:34:55.941977 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7" exitCode=0
Oct 11 10:34:55.942197 master-1 kubenswrapper[4771]: I1011 10:34:55.941994 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10" exitCode=0
Oct 11 10:34:55.942197 master-1 kubenswrapper[4771]: I1011 10:34:55.942012 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1" exitCode=2
Oct 11 10:34:55.963847 master-1 kubenswrapper[4771]: I1011 10:34:55.963643 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="34b1362996d1e0c2cea0bee73eb18468" podUID="e39186c2ebd02622803bdbec6984de2a"
Oct 11 10:34:56.028805 master-1 kubenswrapper[4771]: E1011 10:34:56.028693 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podUID="d7647696-42d9-4dd9-bc3b-a4d52a42cf9a"
Oct 11 10:34:56.028805 master-1 kubenswrapper[4771]: E1011 10:34:56.028718 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podUID="6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b"
Oct 11 10:34:56.948525 master-1 kubenswrapper[4771]: I1011 10:34:56.948435 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:34:56.949342 master-1 kubenswrapper[4771]: I1011 10:34:56.948547 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:34:57.329398 master-1 kubenswrapper[4771]: I1011 10:34:57.329313 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 11 10:34:57.425011 master-1 kubenswrapper[4771]: I1011 10:34:57.424899 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir\") pod \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " Oct 11 10:34:57.425357 master-1 kubenswrapper[4771]: I1011 10:34:57.425125 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock\") pod \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " Oct 11 10:34:57.425357 master-1 kubenswrapper[4771]: I1011 10:34:57.425116 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1c3b2b9-8880-496b-88ed-9706cd8ee23d" (UID: "c1c3b2b9-8880-496b-88ed-9706cd8ee23d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:57.425357 master-1 kubenswrapper[4771]: I1011 10:34:57.425232 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access\") pod \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\" (UID: \"c1c3b2b9-8880-496b-88ed-9706cd8ee23d\") " Oct 11 10:34:57.425357 master-1 kubenswrapper[4771]: I1011 10:34:57.425265 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock" (OuterVolumeSpecName: "var-lock") pod "c1c3b2b9-8880-496b-88ed-9706cd8ee23d" (UID: "c1c3b2b9-8880-496b-88ed-9706cd8ee23d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:57.425656 master-1 kubenswrapper[4771]: I1011 10:34:57.425582 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:34:57.425823 master-1 kubenswrapper[4771]: E1011 10:34:57.425781 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:36:59.42575903 +0000 UTC m=+651.399985481 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:34:57.425919 master-1 kubenswrapper[4771]: I1011 10:34:57.425829 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:57.425919 master-1 kubenswrapper[4771]: I1011 10:34:57.425850 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:57.429513 master-1 kubenswrapper[4771]: I1011 10:34:57.429415 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1c3b2b9-8880-496b-88ed-9706cd8ee23d" (UID: "c1c3b2b9-8880-496b-88ed-9706cd8ee23d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:34:57.529275 master-1 kubenswrapper[4771]: I1011 10:34:57.529084 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:34:57.529581 master-1 kubenswrapper[4771]: I1011 10:34:57.529315 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1c3b2b9-8880-496b-88ed-9706cd8ee23d-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:34:57.529581 master-1 kubenswrapper[4771]: E1011 10:34:57.529339 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:36:59.52929247 +0000 UTC m=+651.503518941 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:34:57.731883 master-1 kubenswrapper[4771]: I1011 10:34:57.731820 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"] Oct 11 10:34:57.733008 master-1 kubenswrapper[4771]: I1011 10:34:57.732948 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" containerID="cri-o://d71774e5747fba198d1f1c685867c43372766be8110c50262b34cb5aee247b7d" gracePeriod=170 Oct 11 10:34:57.955757 master-1 kubenswrapper[4771]: I1011 10:34:57.955676 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c1c3b2b9-8880-496b-88ed-9706cd8ee23d","Type":"ContainerDied","Data":"781880873ca29705a429a8abc16c37af29927d033898ec8fedabee8745269269"} Oct 11 10:34:57.955757 master-1 kubenswrapper[4771]: I1011 10:34:57.955750 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781880873ca29705a429a8abc16c37af29927d033898ec8fedabee8745269269" Oct 11 10:34:57.956676 master-1 kubenswrapper[4771]: I1011 10:34:57.956523 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 11 10:34:58.031231 master-2 kubenswrapper[4776]: I1011 10:34:58.031143 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:34:58.031231 master-2 kubenswrapper[4776]: I1011 10:34:58.031232 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: I1011 10:34:58.245601 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:34:58.245752 master-1 
kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:34:58.245752 master-1 kubenswrapper[4771]: I1011 10:34:58.245707 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:02.990859 master-1 kubenswrapper[4771]: I1011 10:35:02.990702 4771 generic.go:334] "Generic (PLEG): container finished" podID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerID="2fd6e0cb14ecdcadbf2571f6d4dd1d2a4a1e6cf999fc333d09b9fc98b284b780" exitCode=0 Oct 11 10:35:02.990859 master-1 kubenswrapper[4771]: I1011 10:35:02.990768 4771 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerDied","Data":"2fd6e0cb14ecdcadbf2571f6d4dd1d2a4a1e6cf999fc333d09b9fc98b284b780"} Oct 11 10:35:02.990859 master-1 kubenswrapper[4771]: I1011 10:35:02.990853 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" event={"ID":"04cd4a19-2532-43d1-9144-1f59d9e52d19","Type":"ContainerStarted","Data":"143e04eafcd2b03e93df12dc4cef70c9cbf812c2f07ee907f7529b8a34ff8d77"} Oct 11 10:35:02.990859 master-1 kubenswrapper[4771]: I1011 10:35:02.990885 4771 scope.go:117] "RemoveContainer" containerID="d9d09acfb9b74efc71914e418c9f7ad84873a3a13515d6cfcddf159cfd555604" Oct 11 10:35:03.031173 master-2 kubenswrapper[4776]: I1011 10:35:03.031123 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:03.031637 master-2 kubenswrapper[4776]: I1011 10:35:03.031185 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:03.031637 master-2 kubenswrapper[4776]: I1011 10:35:03.031256 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:35:03.031845 master-2 kubenswrapper[4776]: I1011 10:35:03.031779 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:03.031994 master-2 
kubenswrapper[4776]: I1011 10:35:03.031871 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:03.069712 master-1 kubenswrapper[4771]: E1011 10:35:03.069576 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" podUID="537a2b50-0394-47bd-941a-def350316943" Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: I1011 10:35:03.244974 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:03.245143 master-1 kubenswrapper[4771]: I1011 10:35:03.245075 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:03.494799 master-1 kubenswrapper[4771]: I1011 10:35:03.494690 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" Oct 11 10:35:03.499570 master-1 kubenswrapper[4771]: I1011 10:35:03.499408 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:03.499570 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:03.499570 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:03.499570 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:03.499869 master-1 kubenswrapper[4771]: I1011 10:35:03.499560 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646410 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346" exitCode=0 Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646463 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646497 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"} Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646517 4776 scope.go:117] "RemoveContainer" containerID="d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" Oct 11 10:35:03.967135 master-2 kubenswrapper[4776]: I1011 10:35:03.967018 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: I1011 10:35:03.971211 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:03.971418 master-2 kubenswrapper[4776]: I1011 10:35:03.971279 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:04.000665 master-1 kubenswrapper[4771]: I1011 10:35:04.000575 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:35:04.324250 master-1 kubenswrapper[4771]: I1011 10:35:04.324127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") pod \"route-controller-manager-5bcc5987f5-f92xw\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:35:04.324596 master-1 kubenswrapper[4771]: E1011 10:35:04.324348 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:35:04.324712 master-1 kubenswrapper[4771]: E1011 10:35:04.324632 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca podName:537a2b50-0394-47bd-941a-def350316943 nodeName:}" failed. No retries permitted until 2025-10-11 10:37:06.324585174 +0000 UTC m=+658.298811645 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca") pod "route-controller-manager-5bcc5987f5-f92xw" (UID: "537a2b50-0394-47bd-941a-def350316943") : configmap "client-ca" not found
Oct 11 10:35:04.496881 master-1 kubenswrapper[4771]: I1011 10:35:04.496742 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:04.496881 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:04.496881 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:04.496881 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:04.497503 master-1 kubenswrapper[4771]: I1011 10:35:04.496911 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: I1011 10:35:04.970660 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: I1011 10:35:04.970739 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:05.083595 master-1 kubenswrapper[4771]: E1011 10:35:05.083492 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" podUID="c9e9455e-0b47-4623-9b4c-ef79cf62a254"
Oct 11 10:35:05.494649 master-1 kubenswrapper[4771]: I1011 10:35:05.494471 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-z5t6x"
Oct 11 10:35:05.497596 master-1 kubenswrapper[4771]: I1011 10:35:05.497531 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:05.497596 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:05.497596 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:05.497596 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:05.497739 master-1 kubenswrapper[4771]: I1011 10:35:05.497623 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:05.665279 master-2 kubenswrapper[4776]: I1011 10:35:05.665171 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log"
Oct 11 10:35:05.665924 master-2 kubenswrapper[4776]: I1011 10:35:05.665888 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log"
Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: I1011 10:35:05.666368 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" exitCode=1
Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: I1011 10:35:05.666441 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"}
Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: I1011 10:35:05.666506 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"
Oct 11 10:35:05.667493 master-2 kubenswrapper[4776]: I1011 10:35:05.667437 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"
Oct 11 10:35:05.668051 master-2 kubenswrapper[4776]: E1011 10:35:05.667997 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c"
Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: I1011 10:35:05.969962 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: I1011 10:35:05.970047 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:06.011059 master-1 kubenswrapper[4771]: I1011 10:35:06.010964 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:35:06.497587 master-1 kubenswrapper[4771]: I1011 10:35:06.497521 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:06.497587 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:06.497587 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:06.497587 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:06.498591 master-1 kubenswrapper[4771]: I1011 10:35:06.498509 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:06.676108 master-2 kubenswrapper[4776]: I1011 10:35:06.676051 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log"
Oct 11 10:35:06.761620 master-1 kubenswrapper[4771]: I1011 10:35:06.760735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") pod \"controller-manager-565f857764-nhm4g\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " pod="openshift-controller-manager/controller-manager-565f857764-nhm4g"
Oct 11 10:35:06.761620 master-1 kubenswrapper[4771]: E1011 10:35:06.760964 4771 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 11 10:35:06.761620 master-1 kubenswrapper[4771]: E1011 10:35:06.761118 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca podName:c9e9455e-0b47-4623-9b4c-ef79cf62a254 nodeName:}" failed. No retries permitted until 2025-10-11 10:37:08.761087995 +0000 UTC m=+660.735314436 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca") pod "controller-manager-565f857764-nhm4g" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254") : configmap "client-ca" not found
Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: I1011 10:35:06.970831 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: I1011 10:35:06.970938 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:07.498311 master-1 kubenswrapper[4771]: I1011 10:35:07.498205 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:07.498311 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:07.498311 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:07.498311 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:07.499242 master-1 kubenswrapper[4771]: I1011 10:35:07.498319 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:07.967256 master-2 kubenswrapper[4776]: I1011 10:35:07.967189 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: I1011 10:35:07.969899 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:07.970119 master-2 kubenswrapper[4776]: I1011 10:35:07.969983 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:08.031602 master-2 kubenswrapper[4776]: I1011 10:35:08.031501 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:35:08.031887 master-2 kubenswrapper[4776]: I1011 10:35:08.031647 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: I1011 10:35:08.242542 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:08.242620 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:08.245553 master-1 kubenswrapper[4771]: I1011 10:35:08.242647 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:08.245553 master-1 kubenswrapper[4771]: I1011 10:35:08.242779 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: I1011 10:35:08.248836 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:08.249016 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:08.251430 master-1 kubenswrapper[4771]: I1011 10:35:08.249032 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:08.498303 master-1 kubenswrapper[4771]: I1011 10:35:08.498107 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:08.498303 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:08.498303 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:08.498303 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:08.498303 master-1 kubenswrapper[4771]: I1011 10:35:08.498189 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: I1011 10:35:08.970576 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:08.971177 master-2 kubenswrapper[4776]: I1011 10:35:08.970734 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:09.497126 master-1 kubenswrapper[4771]: I1011 10:35:09.497023 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:09.497126 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:09.497126 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:09.497126 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:09.497672 master-1 kubenswrapper[4771]: I1011 10:35:09.497142 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: I1011 10:35:09.971009 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:09.971838 master-2 kubenswrapper[4776]: I1011 10:35:09.971079 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: I1011 10:35:10.248306 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:10.248461 master-1 kubenswrapper[4771]: I1011 10:35:10.248456 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:10.497480 master-1 kubenswrapper[4771]: I1011 10:35:10.497327 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:10.497480 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:10.497480 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:10.497480 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:10.498114 master-1 kubenswrapper[4771]: I1011 10:35:10.497519 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: I1011 10:35:10.970190 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: I1011 10:35:10.970277 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:11.497726 master-1 kubenswrapper[4771]: I1011 10:35:11.497607 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:11.497726 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:11.497726 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:11.497726 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:11.497726 master-1 kubenswrapper[4771]: I1011 10:35:11.497724 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: I1011 10:35:11.970282 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:11.971149 master-2 kubenswrapper[4776]: I1011 10:35:11.970358 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:12.498624 master-1 kubenswrapper[4771]: I1011 10:35:12.498530 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:12.498624 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:12.498624 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:12.498624 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:12.499826 master-1 kubenswrapper[4771]: I1011 10:35:12.499642 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: I1011 10:35:12.970379 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: I1011 10:35:12.970465 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:13.032160 master-2 kubenswrapper[4776]: I1011 10:35:13.032071 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:35:13.032160 master-2 kubenswrapper[4776]: I1011 10:35:13.032143 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: I1011 10:35:13.244567 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:13.244663 master-1 kubenswrapper[4771]: I1011 10:35:13.244656 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:13.497840 master-1 kubenswrapper[4771]: I1011 10:35:13.497655 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:13.497840 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:13.497840 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:13.497840 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:13.497840 master-1 kubenswrapper[4771]: I1011 10:35:13.497746 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: I1011 10:35:13.969477 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: I1011 10:35:13.969550 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:14.498075 master-1 kubenswrapper[4771]: I1011 10:35:14.497968 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:14.498075 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:14.498075 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:14.498075 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:14.498075 master-1 kubenswrapper[4771]: I1011 10:35:14.498070 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: I1011 10:35:14.970897 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:14.971825 master-2 kubenswrapper[4776]: I1011 10:35:14.971024 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:15.498297
master-1 kubenswrapper[4771]: I1011 10:35:15.498197 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:15.498297 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:15.498297 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:15.498297 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:15.498297 master-1 kubenswrapper[4771]: I1011 10:35:15.498286 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: I1011 10:35:15.969493 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:15.970124 master-2 kubenswrapper[4776]: I1011 10:35:15.969626 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:16.497269 master-1 kubenswrapper[4771]: I1011 10:35:16.497118 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:16.497269 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:16.497269 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:16.497269 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:16.497811 master-1 kubenswrapper[4771]: I1011 10:35:16.497267 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: I1011 10:35:16.971476 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:16.972387 master-2 kubenswrapper[4776]: I1011 10:35:16.971602 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:17.497709 master-1 kubenswrapper[4771]: I1011 10:35:17.497621 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:17.497709 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:17.497709 master-1 
kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:17.497709 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:17.497709 master-1 kubenswrapper[4771]: I1011 10:35:17.497713 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:17.791638 master-1 kubenswrapper[4771]: I1011 10:35:17.791449 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:35:17.800533 master-2 kubenswrapper[4776]: I1011 10:35:17.800435 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: I1011 10:35:17.969624 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:17.970078 master-2 kubenswrapper[4776]: I1011 10:35:17.969812 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:18.031593 master-2 kubenswrapper[4776]: I1011 10:35:18.031510 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 
container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:18.032518 master-2 kubenswrapper[4776]: I1011 10:35:18.031597 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:18.059525 master-2 kubenswrapper[4776]: I1011 10:35:18.059346 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" Oct 11 10:35:18.059914 master-2 kubenswrapper[4776]: E1011 10:35:18.059859 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: I1011 10:35:18.245748 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 
11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:18.245842 
master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:18.245842 master-1 kubenswrapper[4771]: I1011 10:35:18.245842 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:18.497836 master-1 kubenswrapper[4771]: I1011 10:35:18.497579 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:18.497836 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:18.497836 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:18.497836 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:18.497836 master-1 kubenswrapper[4771]: I1011 10:35:18.497697 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: I1011 10:35:18.969283 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:18.969604 master-2 kubenswrapper[4776]: I1011 10:35:18.969381 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:19.497404 master-1 kubenswrapper[4771]: I1011 10:35:19.497285 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:19.497404 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:19.497404 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:19.497404 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:19.498629 master-1 kubenswrapper[4771]: I1011 10:35:19.497437 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: I1011 10:35:19.969745 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:19.970722 master-2 kubenswrapper[4776]: I1011 10:35:19.969849 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:20.497239 master-1 kubenswrapper[4771]: I1011 10:35:20.497154 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:20.497239 master-1 
kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:20.497239 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:20.497239 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:20.498009 master-1 kubenswrapper[4771]: I1011 10:35:20.497245 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: I1011 10:35:20.970427 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: I1011 10:35:20.970491 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:21.497279 master-1 kubenswrapper[4771]: I1011 10:35:21.497174 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:21.497279 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:21.497279 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:21.497279 master-1 kubenswrapper[4771]: healthz check failed Oct 11 
10:35:21.497694 master-1 kubenswrapper[4771]: I1011 10:35:21.497277 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:21.780333 master-2 kubenswrapper[4776]: I1011 10:35:21.780260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/2.log" Oct 11 10:35:21.780967 master-2 kubenswrapper[4776]: I1011 10:35:21.780928 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:35:21.781411 master-2 kubenswrapper[4776]: I1011 10:35:21.781371 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log" Oct 11 10:35:21.783045 master-2 kubenswrapper[4776]: I1011 10:35:21.782984 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log" Oct 11 10:35:21.783636 master-2 kubenswrapper[4776]: I1011 10:35:21.783580 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcdctl/0.log" Oct 11 10:35:21.784378 master-2 kubenswrapper[4776]: I1011 10:35:21.784331 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/2.log" Oct 11 10:35:21.784890 master-2 kubenswrapper[4776]: I1011 10:35:21.784837 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:35:21.785331 master-2 kubenswrapper[4776]: I1011 10:35:21.785272 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:21.785636 master-2 kubenswrapper[4776]: I1011 10:35:21.785569 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log" Oct 11 10:35:21.787100 master-2 kubenswrapper[4776]: I1011 10:35:21.787055 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log" Oct 11 10:35:21.787704 master-2 kubenswrapper[4776]: I1011 10:35:21.787654 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcdctl/0.log" Oct 11 10:35:21.789307 master-2 kubenswrapper[4776]: I1011 10:35:21.789262 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" exitCode=137 Oct 11 10:35:21.789373 master-2 kubenswrapper[4776]: I1011 10:35:21.789304 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" exitCode=137 Oct 11 10:35:21.789373 master-2 kubenswrapper[4776]: I1011 10:35:21.789364 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.792655 master-2 kubenswrapper[4776]: I1011 10:35:21.792519 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="c492168afa20f49cb6e3534e1871011b" podUID="2c4a583adfee975da84510940117e71a" Oct 11 10:35:21.808785 master-2 kubenswrapper[4776]: I1011 10:35:21.808750 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.836955 master-2 kubenswrapper[4776]: I1011 10:35:21.836904 4776 
scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.852400 master-2 kubenswrapper[4776]: I1011 10:35:21.852354 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852400 master-2 kubenswrapper[4776]: I1011 10:35:21.852400 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852633 master-2 kubenswrapper[4776]: I1011 10:35:21.852445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852633 master-2 kubenswrapper[4776]: I1011 10:35:21.852487 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852745 master-2 kubenswrapper[4776]: I1011 10:35:21.852657 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852745 master-2 kubenswrapper[4776]: I1011 10:35:21.852697 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852834 master-2 kubenswrapper[4776]: I1011 10:35:21.852791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852834 master-2 kubenswrapper[4776]: I1011 10:35:21.852809 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir" (OuterVolumeSpecName: "data-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852852 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir" (OuterVolumeSpecName: "log-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852903 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.852998 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853013 4776 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853021 4776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853030 4776 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") on node \"master-2\" 
DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853038 4776 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853045 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.856304 master-2 kubenswrapper[4776]: I1011 10:35:21.856265 4776 scope.go:117] "RemoveContainer" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.874630 master-2 kubenswrapper[4776]: I1011 10:35:21.874585 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.907389 master-2 kubenswrapper[4776]: I1011 10:35:21.907333 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.923989 master-2 kubenswrapper[4776]: I1011 10:35:21.920628 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.941609 master-2 kubenswrapper[4776]: I1011 10:35:21.941582 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.961913 master-2 kubenswrapper[4776]: I1011 10:35:21.961871 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: I1011 10:35:21.970297 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:21.971024 master-2 kubenswrapper[4776]: I1011 10:35:21.970362 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:21.981838 master-2 kubenswrapper[4776]: I1011 10:35:21.981803 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.982285 master-2 kubenswrapper[4776]: E1011 10:35:21.982242 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.982285 master-2 kubenswrapper[4776]: I1011 10:35:21.982279 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} err="failed to get container status \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" Oct 11 10:35:21.982417 master-2 kubenswrapper[4776]: I1011 10:35:21.982296 4776 scope.go:117] 
"RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.982726 master-2 kubenswrapper[4776]: E1011 10:35:21.982682 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.982793 master-2 kubenswrapper[4776]: I1011 10:35:21.982728 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} err="failed to get container status \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" Oct 11 10:35:21.982793 master-2 kubenswrapper[4776]: I1011 10:35:21.982757 4776 scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.983164 master-2 kubenswrapper[4776]: E1011 10:35:21.983132 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.983164 master-2 kubenswrapper[4776]: I1011 10:35:21.983156 4776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} err="failed to get container status \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" Oct 11 10:35:21.983258 master-2 kubenswrapper[4776]: I1011 10:35:21.983172 4776 scope.go:117] "RemoveContainer" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: E1011 10:35:21.983506 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: I1011 10:35:21.983523 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} err="failed to get container status \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: I1011 10:35:21.983534 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.983808 master-2 kubenswrapper[4776]: E1011 
10:35:21.983776 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.983872 master-2 kubenswrapper[4776]: I1011 10:35:21.983805 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} err="failed to get container status \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": rpc error: code = NotFound desc = could not find container \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" Oct 11 10:35:21.983872 master-2 kubenswrapper[4776]: I1011 10:35:21.983825 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.984096 master-2 kubenswrapper[4776]: E1011 10:35:21.984067 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.984096 master-2 kubenswrapper[4776]: I1011 10:35:21.984088 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} err="failed to get container status 
\"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" Oct 11 10:35:21.984185 master-2 kubenswrapper[4776]: I1011 10:35:21.984101 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.984406 master-2 kubenswrapper[4776]: E1011 10:35:21.984375 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.984464 master-2 kubenswrapper[4776]: I1011 10:35:21.984401 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} err="failed to get container status \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" Oct 11 10:35:21.984464 master-2 kubenswrapper[4776]: I1011 10:35:21.984416 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.984725 master-2 kubenswrapper[4776]: E1011 10:35:21.984699 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.984785 master-2 kubenswrapper[4776]: I1011 10:35:21.984725 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} err="failed to get container status \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": rpc error: code = NotFound desc = could not find container \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" Oct 11 10:35:21.984785 master-2 kubenswrapper[4776]: I1011 10:35:21.984741 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.985018 master-2 kubenswrapper[4776]: E1011 10:35:21.984987 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.985018 master-2 kubenswrapper[4776]: I1011 10:35:21.985008 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} err="failed to get container status \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": rpc error: code = NotFound desc = could not find container 
\"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" Oct 11 10:35:21.985018 master-2 kubenswrapper[4776]: I1011 10:35:21.985019 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.985264 master-2 kubenswrapper[4776]: I1011 10:35:21.985242 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} err="failed to get container status \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" Oct 11 10:35:21.985407 master-2 kubenswrapper[4776]: I1011 10:35:21.985262 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.985524 master-2 kubenswrapper[4776]: I1011 10:35:21.985497 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} err="failed to get container status \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" Oct 11 10:35:21.985524 master-2 kubenswrapper[4776]: I1011 10:35:21.985520 4776 scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 
10:35:21.985780 master-2 kubenswrapper[4776]: I1011 10:35:21.985754 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} err="failed to get container status \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" Oct 11 10:35:21.985780 master-2 kubenswrapper[4776]: I1011 10:35:21.985777 4776 scope.go:117] "RemoveContainer" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.986027 master-2 kubenswrapper[4776]: I1011 10:35:21.986004 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} err="failed to get container status \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" Oct 11 10:35:21.986083 master-2 kubenswrapper[4776]: I1011 10:35:21.986026 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.986251 master-2 kubenswrapper[4776]: I1011 10:35:21.986228 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} err="failed to get container status \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": rpc error: code = NotFound desc = could not find container 
\"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" Oct 11 10:35:21.986251 master-2 kubenswrapper[4776]: I1011 10:35:21.986247 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.986464 master-2 kubenswrapper[4776]: I1011 10:35:21.986443 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} err="failed to get container status \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" Oct 11 10:35:21.986464 master-2 kubenswrapper[4776]: I1011 10:35:21.986461 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.986717 master-2 kubenswrapper[4776]: I1011 10:35:21.986650 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} err="failed to get container status \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" Oct 11 10:35:21.986717 master-2 kubenswrapper[4776]: I1011 10:35:21.986692 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 
10:35:21.987090 master-2 kubenswrapper[4776]: I1011 10:35:21.987055 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} err="failed to get container status \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": rpc error: code = NotFound desc = could not find container \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" Oct 11 10:35:21.987090 master-2 kubenswrapper[4776]: I1011 10:35:21.987075 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.987345 master-2 kubenswrapper[4776]: I1011 10:35:21.987299 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} err="failed to get container status \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": rpc error: code = NotFound desc = could not find container \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" Oct 11 10:35:22.065317 master-2 kubenswrapper[4776]: I1011 10:35:22.065244 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c492168afa20f49cb6e3534e1871011b" path="/var/lib/kubelet/pods/c492168afa20f49cb6e3534e1871011b/volumes" Oct 11 10:35:22.497468 master-1 kubenswrapper[4771]: I1011 10:35:22.497399 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:22.497468 
master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:22.497468 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:22.497468 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:22.498676 master-1 kubenswrapper[4771]: I1011 10:35:22.497480 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:22.795909 master-2 kubenswrapper[4776]: I1011 10:35:22.795863 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:22.802416 master-2 kubenswrapper[4776]: I1011 10:35:22.802333 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="c492168afa20f49cb6e3534e1871011b" podUID="2c4a583adfee975da84510940117e71a" Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: I1011 10:35:22.969201 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:22.969564 master-2 kubenswrapper[4776]: I1011 10:35:22.969282 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:23.032153 master-2 kubenswrapper[4776]: I1011 10:35:23.032057 4776 
patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:23.032153 master-2 kubenswrapper[4776]: I1011 10:35:23.032126 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: I1011 10:35:23.245998 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok 
Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:23.246096 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:23.248590 master-1 kubenswrapper[4771]: I1011 10:35:23.246113 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:23.498346 master-1 kubenswrapper[4771]: I1011 10:35:23.498210 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:23.498346 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:23.498346 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:23.498346 master-1 kubenswrapper[4771]: healthz check 
failed Oct 11 10:35:23.499397 master-1 kubenswrapper[4771]: I1011 10:35:23.499324 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: I1011 10:35:23.970021 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:23.970599 master-2 kubenswrapper[4776]: I1011 10:35:23.970121 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:24.497091 master-1 kubenswrapper[4771]: I1011 10:35:24.497012 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:24.497091 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:24.497091 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:24.497091 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:24.497613 master-1 kubenswrapper[4771]: I1011 10:35:24.497134 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" 
podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: I1011 10:35:24.970378 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:24.971302 master-2 kubenswrapper[4776]: I1011 10:35:24.970483 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:25.498890 master-1 kubenswrapper[4771]: I1011 10:35:25.498747 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:25.498890 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:25.498890 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:25.498890 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:25.499914 master-1 kubenswrapper[4771]: I1011 10:35:25.498962 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: I1011 10:35:25.969319 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: I1011 10:35:25.969372 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:26.496974 master-1 kubenswrapper[4771]: I1011 10:35:26.496875 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:26.496974 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:26.496974 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:26.496974 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:26.496974 master-1 kubenswrapper[4771]: I1011 10:35:26.496969 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: I1011 10:35:26.969055 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:26.969888 master-2 kubenswrapper[4776]: I1011 10:35:26.969113 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:27.497412 master-1 kubenswrapper[4771]: I1011 10:35:27.497290 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:27.497412 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:27.497412 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:27.497412 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:27.497412 master-1 kubenswrapper[4771]: I1011 10:35:27.497379 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: I1011 10:35:27.970426 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: I1011 10:35:27.970518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:28.031621 master-2 kubenswrapper[4776]: I1011 10:35:28.031521 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:35:28.031621 master-2 kubenswrapper[4776]: I1011 10:35:28.031619 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:35:28.041998 master-1 kubenswrapper[4771]: I1011 10:35:28.041900 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 11 10:35:28.047724 master-2 kubenswrapper[4776]: I1011 10:35:28.047577 4776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: I1011 10:35:28.245953 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:28.246067 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:28.248417 master-1 kubenswrapper[4771]: I1011 10:35:28.246073 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:28.497230 master-1 kubenswrapper[4771]: I1011 10:35:28.497131 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:28.497230 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:28.497230 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:28.497230 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:28.498296 master-1 kubenswrapper[4771]: I1011 10:35:28.497234 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: I1011 10:35:28.969185 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:28.969628 master-2 kubenswrapper[4776]: I1011 10:35:28.969294 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:29.058529 master-2 kubenswrapper[4776]: I1011 10:35:29.058457 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2"
Oct 11 10:35:29.078197 master-2 kubenswrapper[4776]: I1011 10:35:29.078163 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-2" podUID="bcf1681b-75de-4981-b8a1-447e616b2f7b"
Oct 11 10:35:29.078346 master-2 kubenswrapper[4776]: I1011 10:35:29.078329 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-2" podUID="bcf1681b-75de-4981-b8a1-447e616b2f7b"
Oct 11 10:35:29.099856 master-2 kubenswrapper[4776]: I1011 10:35:29.099800 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-2"]
Oct 11 10:35:29.102429 master-2 kubenswrapper[4776]: I1011 10:35:29.102358 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-2"
Oct 11 10:35:29.111331 master-2 kubenswrapper[4776]: I1011 10:35:29.111245 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-2"]
Oct 11 10:35:29.126123 master-2 kubenswrapper[4776]: I1011 10:35:29.126078 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2"
Oct 11 10:35:29.129821 master-2 kubenswrapper[4776]: I1011 10:35:29.129789 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"]
Oct 11 10:35:29.146343 master-2 kubenswrapper[4776]: W1011 10:35:29.146297 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4a583adfee975da84510940117e71a.slice/crio-1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538 WatchSource:0}: Error finding container 1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538: Status 404 returned error can't find the container with id 1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538
Oct 11 10:35:29.497433 master-1 kubenswrapper[4771]: I1011 10:35:29.497312 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:29.497433 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:29.497433 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:29.497433 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:29.497433 master-1 kubenswrapper[4771]: I1011 10:35:29.497435 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.838915 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e" exitCode=0
Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.838971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e"}
Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.839003 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538"}
Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: I1011 10:35:29.972004 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:29.972317 master-2 kubenswrapper[4776]: I1011 10:35:29.972064 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: I1011 10:35:30.245903 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:30.245992 master-1 kubenswrapper[4771]: I1011 10:35:30.245988 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:30.497413 master-1 kubenswrapper[4771]: I1011 10:35:30.497208 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:30.497413 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:30.497413 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:30.497413 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:30.497413 master-1 kubenswrapper[4771]: I1011 10:35:30.497279 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:30.846919 master-2 kubenswrapper[4776]: I1011 10:35:30.846879 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e" exitCode=0
Oct 11 10:35:30.847425 master-2 kubenswrapper[4776]: I1011 10:35:30.846920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e"}
Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: I1011 10:35:30.969270 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: I1011 10:35:30.969344 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:31.073987 master-2 kubenswrapper[4776]: E1011 10:35:31.073950 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:31.074114 master-2 kubenswrapper[4776]: E1011 10:35:31.074002 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:31.573986525 +0000 UTC m=+566.358413234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:31.358417 master-2 kubenswrapper[4776]: I1011 10:35:31.358378 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log"
Oct 11 10:35:31.497944 master-1 kubenswrapper[4771]: I1011 10:35:31.497852 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:31.497944 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:31.497944 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:31.497944 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:31.498972 master-1 kubenswrapper[4771]: I1011 10:35:31.497955 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:31.580298 master-2 kubenswrapper[4776]: E1011 10:35:31.580227 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:31.580515 master-2 kubenswrapper[4776]: E1011 10:35:31.580315 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:32.580296854 +0000 UTC m=+567.364723563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:31.855419 master-2 kubenswrapper[4776]: I1011 10:35:31.855356 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb" exitCode=0
Oct 11 10:35:31.855419 master-2 kubenswrapper[4776]: I1011 10:35:31.855398 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb"}
Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: I1011 10:35:31.968495 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: I1011 10:35:31.968585 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:32.498135 master-1 kubenswrapper[4771]: I1011 10:35:32.498057 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:32.498135 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:32.498135 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:32.498135 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:32.499427 master-1 kubenswrapper[4771]: I1011 10:35:32.498147 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:32.593201 master-2 kubenswrapper[4776]: E1011 10:35:32.593162 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:32.593315 master-2 kubenswrapper[4776]: E1011 10:35:32.593231 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:34.593217189 +0000 UTC m=+569.377643898 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:35:32.867664 master-2 kubenswrapper[4776]: I1011 10:35:32.867568 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd"}
Oct 11 10:35:32.867664 master-2 kubenswrapper[4776]: I1011 10:35:32.867655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886"}
Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d"}
Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867739 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1"}
Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867760 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687"}
Oct 11 10:35:32.919187 master-2 kubenswrapper[4776]: I1011 10:35:32.919100 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=3.919043093 podStartE2EDuration="3.919043093s" podCreationTimestamp="2025-10-11 10:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:35:32.915723175 +0000 UTC m=+567.700149914" watchObservedRunningTime="2025-10-11 10:35:32.919043093 +0000 UTC m=+567.703469802"
Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: I1011 10:35:32.969005 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:32.969472 master-2 kubenswrapper[4776]: I1011 10:35:32.969169 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:33.058161 master-2 kubenswrapper[4776]: I1011 10:35:33.058027 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: I1011 10:35:33.245782 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:33.245865 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:33.248158 master-1 kubenswrapper[4771]: I1011 10:35:33.245892 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:33.498055 master-1 kubenswrapper[4771]: I1011 10:35:33.497893 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:33.498055 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:33.498055 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:33.498055 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:33.498055 master-1 kubenswrapper[4771]: I1011 10:35:33.497988 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:33.877607 master-2 kubenswrapper[4776]: I1011 10:35:33.877544 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log"
Oct 11 10:35:33.878583 master-2 kubenswrapper[4776]: I1011 10:35:33.878539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"8086a83d6fa23171a3f4677b881eaab20b411c82d7709f0eaf8a476e4028ed0e"}
Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: I1011 10:35:33.969665 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: I1011 10:35:33.969764 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:34.132772 master-2 kubenswrapper[4776]: I1011 10:35:34.128083 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2"
Oct 11 10:35:34.497256 master-1 kubenswrapper[4771]: I1011 10:35:34.497160 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:34.497256 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:34.497256 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:34.497256 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:34.497256 master-1 kubenswrapper[4771]: I1011 10:35:34.497252 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:34.640434 master-2 kubenswrapper[4776]: E1011 10:35:34.640318 4776 secret.go:189] Couldn't get secret
openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:34.640434 master-2 kubenswrapper[4776]: E1011 10:35:34.640448 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:38.640423065 +0000 UTC m=+573.424849984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: I1011 10:35:34.969762 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:34.971237 master-2 kubenswrapper[4776]: I1011 10:35:34.969947 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:35.497382 master-1 kubenswrapper[4771]: I1011 10:35:35.497297 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:35:35.497382 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:35.497382 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:35.497382 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:35.498341 master-1 kubenswrapper[4771]: I1011 10:35:35.497483 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: I1011 10:35:35.970171 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:35.971276 master-2 kubenswrapper[4776]: I1011 10:35:35.970293 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:36.497972 master-1 kubenswrapper[4771]: I1011 10:35:36.497869 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:36.497972 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:36.497972 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:36.497972 master-1 kubenswrapper[4771]: healthz 
check failed Oct 11 10:35:36.498932 master-1 kubenswrapper[4771]: I1011 10:35:36.498005 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: I1011 10:35:36.969882 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: I1011 10:35:36.969965 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:37.498605 master-1 kubenswrapper[4771]: I1011 10:35:37.498546 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:37.498605 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:37.498605 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:37.498605 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:37.499443 master-1 kubenswrapper[4771]: I1011 10:35:37.499411 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" 
podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: I1011 10:35:37.969531 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: I1011 10:35:37.969587 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:38.032037 master-2 kubenswrapper[4776]: I1011 10:35:38.031927 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:35:38.032860 master-2 kubenswrapper[4776]: I1011 10:35:38.032031 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: I1011 10:35:38.244848 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:38.244921 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:38.247757 master-1 kubenswrapper[4771]: I1011 10:35:38.245591 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:38.497512 master-1 kubenswrapper[4771]: I1011 10:35:38.497240 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:38.497512 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:38.497512 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:38.497512 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:38.497512 master-1 kubenswrapper[4771]: I1011 10:35:38.497381 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:38.697810 master-2 kubenswrapper[4776]: E1011 10:35:38.697735 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:38.698203 master-2 kubenswrapper[4776]: E1011 10:35:38.697868 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle 
podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:46.697834088 +0000 UTC m=+581.482260847 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: I1011 10:35:38.970150 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: I1011 10:35:38.970303 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:39.128277 master-2 kubenswrapper[4776]: I1011 10:35:39.128188 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:39.498111 master-1 kubenswrapper[4771]: I1011 10:35:39.498042 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:39.498111 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:39.498111 master-1 
kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:39.498111 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:39.498711 master-1 kubenswrapper[4771]: I1011 10:35:39.498128 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:39.546447 master-2 kubenswrapper[4776]: I1011 10:35:39.546365 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:39.546971 master-2 kubenswrapper[4776]: E1011 10:35:39.546938 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:35:39.585750 master-2 kubenswrapper[4776]: I1011 10:35:39.585645 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:39.586283 master-2 kubenswrapper[4776]: E1011 10:35:39.586192 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:35:39.731465 master-1 kubenswrapper[4771]: I1011 10:35:39.731410 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-565f857764-nhm4g"] Oct 11 10:35:39.731852 master-1 kubenswrapper[4771]: E1011 10:35:39.731818 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process 
volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" podUID="c9e9455e-0b47-4623-9b4c-ef79cf62a254" Oct 11 10:35:39.746910 master-1 kubenswrapper[4771]: I1011 10:35:39.746848 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"] Oct 11 10:35:39.747825 master-1 kubenswrapper[4771]: E1011 10:35:39.747792 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" podUID="537a2b50-0394-47bd-941a-def350316943" Oct 11 10:35:39.918102 master-2 kubenswrapper[4776]: I1011 10:35:39.918043 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:39.918314 master-2 kubenswrapper[4776]: I1011 10:35:39.918043 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:39.924716 master-2 kubenswrapper[4776]: I1011 10:35:39.924688 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:39.930304 master-2 kubenswrapper[4776]: I1011 10:35:39.930274 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: I1011 10:35:39.970501 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:39.970884 master-2 kubenswrapper[4776]: I1011 10:35:39.970563 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115867 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115936 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod 
\"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116006 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116024 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116048 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116099 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116536 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116608 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config" (OuterVolumeSpecName: "config") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116616 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config" (OuterVolumeSpecName: "config") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.118978 master-2 kubenswrapper[4776]: I1011 10:35:40.118942 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz" (OuterVolumeSpecName: "kube-api-access-9sqbz") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "kube-api-access-9sqbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.119054 master-2 kubenswrapper[4776]: I1011 10:35:40.119031 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.119563 master-2 kubenswrapper[4776]: I1011 10:35:40.119518 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl" (OuterVolumeSpecName: "kube-api-access-tjjzl") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "kube-api-access-tjjzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.119771 master-2 kubenswrapper[4776]: I1011 10:35:40.119733 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217892 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217931 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217941 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217950 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217960 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217968 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217977 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.237218 master-1 kubenswrapper[4771]: I1011 10:35:40.237144 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:35:40.237774 master-1 kubenswrapper[4771]: I1011 10:35:40.237173 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:35:40.249968 master-1 kubenswrapper[4771]: I1011 10:35:40.249902 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:35:40.257953 master-1 kubenswrapper[4771]: I1011 10:35:40.257900 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:35:40.350538 master-1 kubenswrapper[4771]: I1011 10:35:40.350456 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert\") pod \"537a2b50-0394-47bd-941a-def350316943\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " Oct 11 10:35:40.350827 master-1 kubenswrapper[4771]: I1011 10:35:40.350543 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config\") pod \"537a2b50-0394-47bd-941a-def350316943\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " Oct 11 10:35:40.350827 master-1 kubenswrapper[4771]: I1011 10:35:40.350617 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wp47\" (UniqueName: \"kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47\") pod \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " Oct 11 10:35:40.350827 master-1 kubenswrapper[4771]: I1011 10:35:40.350666 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert\") pod \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " Oct 11 10:35:40.350827 master-1 kubenswrapper[4771]: I1011 10:35:40.350713 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config\") pod \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " Oct 11 10:35:40.350827 master-1 kubenswrapper[4771]: I1011 10:35:40.350753 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles\") pod \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\" (UID: \"c9e9455e-0b47-4623-9b4c-ef79cf62a254\") " Oct 11 10:35:40.351383 master-1 kubenswrapper[4771]: I1011 10:35:40.351302 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwxw7\" (UniqueName: \"kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7\") pod \"537a2b50-0394-47bd-941a-def350316943\" (UID: \"537a2b50-0394-47bd-941a-def350316943\") " Oct 11 10:35:40.351654 master-1 kubenswrapper[4771]: I1011 10:35:40.351596 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c9e9455e-0b47-4623-9b4c-ef79cf62a254" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.351787 master-1 kubenswrapper[4771]: I1011 10:35:40.351751 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.352045 master-1 kubenswrapper[4771]: I1011 10:35:40.351975 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config" (OuterVolumeSpecName: "config") pod "537a2b50-0394-47bd-941a-def350316943" (UID: "537a2b50-0394-47bd-941a-def350316943"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.352580 master-1 kubenswrapper[4771]: I1011 10:35:40.352478 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config" (OuterVolumeSpecName: "config") pod "c9e9455e-0b47-4623-9b4c-ef79cf62a254" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.358349 master-1 kubenswrapper[4771]: I1011 10:35:40.358252 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7" (OuterVolumeSpecName: "kube-api-access-zwxw7") pod "537a2b50-0394-47bd-941a-def350316943" (UID: "537a2b50-0394-47bd-941a-def350316943"). InnerVolumeSpecName "kube-api-access-zwxw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.359852 master-1 kubenswrapper[4771]: I1011 10:35:40.359758 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47" (OuterVolumeSpecName: "kube-api-access-9wp47") pod "c9e9455e-0b47-4623-9b4c-ef79cf62a254" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254"). InnerVolumeSpecName "kube-api-access-9wp47". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.359852 master-1 kubenswrapper[4771]: I1011 10:35:40.359811 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "537a2b50-0394-47bd-941a-def350316943" (UID: "537a2b50-0394-47bd-941a-def350316943"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.360185 master-1 kubenswrapper[4771]: I1011 10:35:40.360095 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9e9455e-0b47-4623-9b4c-ef79cf62a254" (UID: "c9e9455e-0b47-4623-9b4c-ef79cf62a254"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.452755 master-1 kubenswrapper[4771]: I1011 10:35:40.452723 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a2b50-0394-47bd-941a-def350316943-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.452894 master-1 kubenswrapper[4771]: I1011 10:35:40.452880 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.452990 master-1 kubenswrapper[4771]: I1011 10:35:40.452975 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wp47\" (UniqueName: \"kubernetes.io/projected/c9e9455e-0b47-4623-9b4c-ef79cf62a254-kube-api-access-9wp47\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.453066 master-1 kubenswrapper[4771]: I1011 10:35:40.453054 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9e9455e-0b47-4623-9b4c-ef79cf62a254-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.453145 master-1 kubenswrapper[4771]: I1011 10:35:40.453133 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.453223 master-1 kubenswrapper[4771]: I1011 10:35:40.453210 4771 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zwxw7\" (UniqueName: \"kubernetes.io/projected/537a2b50-0394-47bd-941a-def350316943-kube-api-access-zwxw7\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:40.496094 master-1 kubenswrapper[4771]: I1011 10:35:40.495935 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:40.496094 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:40.496094 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:40.496094 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:40.496702 master-1 kubenswrapper[4771]: I1011 10:35:40.496659 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:40.924440 master-2 kubenswrapper[4776]: I1011 10:35:40.924350 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:40.924440 master-2 kubenswrapper[4776]: I1011 10:35:40.924387 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: I1011 10:35:40.970006 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: I1011 10:35:40.970069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:40.984506 master-2 kubenswrapper[4776]: I1011 10:35:40.984424 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:40.990346 master-2 kubenswrapper[4776]: I1011 10:35:40.990273 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:40.990770 master-2 kubenswrapper[4776]: E1011 10:35:40.990731 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerName="installer" Oct 11 10:35:40.990830 master-2 kubenswrapper[4776]: I1011 10:35:40.990775 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerName="installer" Oct 11 10:35:40.991052 master-2 kubenswrapper[4776]: I1011 10:35:40.990999 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" 
containerName="installer" Oct 11 10:35:40.991788 master-2 kubenswrapper[4776]: I1011 10:35:40.991746 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:40.995006 master-2 kubenswrapper[4776]: I1011 10:35:40.994966 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:35:40.995006 master-2 kubenswrapper[4776]: I1011 10:35:40.994975 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:40.995328 master-2 kubenswrapper[4776]: I1011 10:35:40.995261 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:40.995328 master-2 kubenswrapper[4776]: I1011 10:35:40.995301 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:35:40.995484 master-2 kubenswrapper[4776]: I1011 10:35:40.995346 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:35:41.003873 master-2 kubenswrapper[4776]: I1011 10:35:40.997242 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:41.007228 master-2 kubenswrapper[4776]: I1011 10:35:41.007199 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:41.022528 master-2 kubenswrapper[4776]: I1011 10:35:41.022481 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:41.026327 master-2 kubenswrapper[4776]: I1011 10:35:41.026287 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026559 master-2 kubenswrapper[4776]: I1011 10:35:41.026522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026693 master-2 kubenswrapper[4776]: I1011 10:35:41.026650 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026811 master-2 kubenswrapper[4776]: I1011 10:35:41.026781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026921 master-2 kubenswrapper[4776]: I1011 10:35:41.026897 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") on node 
\"master-2\" DevicePath \"\"" Oct 11 10:35:41.027127 master-2 kubenswrapper[4776]: I1011 10:35:41.027101 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:41.128393 master-2 kubenswrapper[4776]: I1011 10:35:41.128337 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128393 master-2 kubenswrapper[4776]: I1011 10:35:41.128399 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " 
pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128515 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:41.130010 master-2 kubenswrapper[4776]: I1011 10:35:41.129961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.130364 master-2 kubenswrapper[4776]: I1011 10:35:41.130308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.136543 master-2 kubenswrapper[4776]: I1011 10:35:41.136501 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.150560 master-2 kubenswrapper[4776]: I1011 10:35:41.150520 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: 
\"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.242989 master-1 kubenswrapper[4771]: I1011 10:35:41.242920 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565f857764-nhm4g" Oct 11 10:35:41.243991 master-1 kubenswrapper[4771]: I1011 10:35:41.243584 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw" Oct 11 10:35:41.293135 master-1 kubenswrapper[4771]: I1011 10:35:41.293041 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-565f857764-nhm4g"] Oct 11 10:35:41.298551 master-1 kubenswrapper[4771]: I1011 10:35:41.298478 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-565f857764-nhm4g"] Oct 11 10:35:41.320122 master-2 kubenswrapper[4776]: I1011 10:35:41.319949 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.333160 master-1 kubenswrapper[4771]: I1011 10:35:41.333022 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"] Oct 11 10:35:41.339585 master-1 kubenswrapper[4771]: I1011 10:35:41.339520 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw"] Oct 11 10:35:41.365321 master-1 kubenswrapper[4771]: I1011 10:35:41.365229 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/537a2b50-0394-47bd-941a-def350316943-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:41.365321 master-1 kubenswrapper[4771]: I1011 10:35:41.365291 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9e9455e-0b47-4623-9b4c-ef79cf62a254-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:41.498349 master-1 kubenswrapper[4771]: I1011 10:35:41.498167 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:41.498349 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:41.498349 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:41.498349 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:41.498349 master-1 kubenswrapper[4771]: I1011 10:35:41.498256 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:41.688232 
master-2 kubenswrapper[4776]: I1011 10:35:41.688164 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:41.692006 master-2 kubenswrapper[4776]: W1011 10:35:41.691944 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa039e2d_e3c6_47a6_ad16_9f189e5a70e7.slice/crio-af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09 WatchSource:0}: Error finding container af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09: Status 404 returned error can't find the container with id af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09 Oct 11 10:35:41.934978 master-2 kubenswrapper[4776]: I1011 10:35:41.934912 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerStarted","Data":"af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09"} Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: I1011 10:35:41.969574 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: I1011 10:35:41.969655 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 
10:35:42.064559 master-2 kubenswrapper[4776]: I1011 10:35:42.064486 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" path="/var/lib/kubelet/pods/17bef070-1a9d-4090-b97a-7ce2c1c93b19/volumes" Oct 11 10:35:42.064949 master-2 kubenswrapper[4776]: I1011 10:35:42.064909 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" path="/var/lib/kubelet/pods/cacd2d60-e8a5-450f-a4ad-dfc0194e3325/volumes" Oct 11 10:35:42.446053 master-1 kubenswrapper[4771]: I1011 10:35:42.445958 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537a2b50-0394-47bd-941a-def350316943" path="/var/lib/kubelet/pods/537a2b50-0394-47bd-941a-def350316943/volumes" Oct 11 10:35:42.446908 master-1 kubenswrapper[4771]: I1011 10:35:42.446730 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e9455e-0b47-4623-9b4c-ef79cf62a254" path="/var/lib/kubelet/pods/c9e9455e-0b47-4623-9b4c-ef79cf62a254/volumes" Oct 11 10:35:42.497527 master-1 kubenswrapper[4771]: I1011 10:35:42.497466 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:42.497527 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:42.497527 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:42.497527 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:42.498153 master-1 kubenswrapper[4771]: I1011 10:35:42.498111 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: I1011 
10:35:42.969114 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: I1011 10:35:42.969172 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.032361 master-2 kubenswrapper[4776]: I1011 10:35:43.032316 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:35:43.032594 master-2 kubenswrapper[4776]: I1011 10:35:43.032380 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:35:43.198935 master-2 kubenswrapper[4776]: I1011 10:35:43.198881 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: I1011 10:35:43.247638 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:43.247731 master-1 kubenswrapper[4771]: I1011 10:35:43.247727 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.496997 master-1 kubenswrapper[4771]: I1011 10:35:43.496917 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:43.496997 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:43.496997 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:43.496997 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:43.497997 master-1 kubenswrapper[4771]: I1011 10:35:43.497020 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.940896 master-1 kubenswrapper[4771]: I1011 10:35:43.940821 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"] Oct 11 10:35:43.941541 master-1 kubenswrapper[4771]: E1011 10:35:43.941058 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c3b2b9-8880-496b-88ed-9706cd8ee23d" containerName="installer" Oct 11 10:35:43.941541 master-1 kubenswrapper[4771]: 
I1011 10:35:43.941074 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c3b2b9-8880-496b-88ed-9706cd8ee23d" containerName="installer" Oct 11 10:35:43.941541 master-1 kubenswrapper[4771]: I1011 10:35:43.941179 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c3b2b9-8880-496b-88ed-9706cd8ee23d" containerName="installer" Oct 11 10:35:43.941944 master-1 kubenswrapper[4771]: I1011 10:35:43.941619 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:43.945528 master-1 kubenswrapper[4771]: I1011 10:35:43.945479 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:35:43.945741 master-1 kubenswrapper[4771]: I1011 10:35:43.945556 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:35:43.945837 master-1 kubenswrapper[4771]: I1011 10:35:43.945761 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:43.946186 master-1 kubenswrapper[4771]: I1011 10:35:43.945907 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:43.948393 master-1 kubenswrapper[4771]: I1011 10:35:43.948328 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:35:43.950005 master-1 kubenswrapper[4771]: I1011 10:35:43.949954 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:35:43.950238 master-1 kubenswrapper[4771]: I1011 10:35:43.950116 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:35:43.951922 master-1 kubenswrapper[4771]: 
I1011 10:35:43.951879 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:43.954351 master-2 kubenswrapper[4776]: I1011 10:35:43.954138 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:43.955184 master-2 kubenswrapper[4776]: I1011 10:35:43.954967 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.956448 master-1 kubenswrapper[4771]: I1011 10:35:43.955187 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:35:43.958703 master-2 kubenswrapper[4776]: I1011 10:35:43.958549 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:43.958842 master-2 kubenswrapper[4776]: I1011 10:35:43.958636 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:35:43.958925 master-2 kubenswrapper[4776]: I1011 10:35:43.958654 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:43.959554 master-1 kubenswrapper[4771]: I1011 10:35:43.958760 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"] Oct 11 10:35:43.959554 master-1 kubenswrapper[4771]: I1011 10:35:43.959515 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:35:43.960001 master-1 kubenswrapper[4771]: I1011 10:35:43.959587 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:35:43.960001 
master-1 kubenswrapper[4771]: I1011 10:35:43.959594 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:43.960001 master-1 kubenswrapper[4771]: I1011 10:35:43.959702 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:43.960330 master-2 kubenswrapper[4776]: I1011 10:35:43.960297 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:35:43.960550 master-2 kubenswrapper[4776]: I1011 10:35:43.960406 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:35:43.962957 master-2 kubenswrapper[4776]: I1011 10:35:43.962913 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:43.964726 master-1 kubenswrapper[4771]: I1011 10:35:43.964669 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:35:43.964979 master-2 kubenswrapper[4776]: I1011 10:35:43.964920 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965125 master-2 kubenswrapper[4776]: I1011 10:35:43.965091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: 
\"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965247 master-2 kubenswrapper[4776]: I1011 10:35:43.965217 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965287 master-2 kubenswrapper[4776]: I1011 10:35:43.965260 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965405 master-2 kubenswrapper[4776]: I1011 10:35:43.965363 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: I1011 10:35:43.969310 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: 
healthz check failed Oct 11 10:35:43.975096 master-2 kubenswrapper[4776]: I1011 10:35:43.969359 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.975096 master-2 kubenswrapper[4776]: I1011 10:35:43.970903 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:35:44.003018 master-1 kubenswrapper[4771]: I1011 10:35:44.002961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.003018 master-1 kubenswrapper[4771]: I1011 10:35:44.003014 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.003251 master-1 kubenswrapper[4771]: I1011 10:35:44.003051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.003251 master-1 kubenswrapper[4771]: I1011 10:35:44.003138 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.003251 master-1 kubenswrapper[4771]: I1011 10:35:44.003181 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.003251 master-1 kubenswrapper[4771]: I1011 10:35:44.003223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqtc\" (UniqueName: \"kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.003651 master-1 kubenswrapper[4771]: I1011 10:35:44.003603 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmdm\" (UniqueName: \"kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.003779 master-1 kubenswrapper[4771]: I1011 10:35:44.003697 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.003848 master-1 kubenswrapper[4771]: I1011 10:35:44.003811 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.066772 master-2 kubenswrapper[4776]: I1011 10:35:44.066721 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066853 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " 
pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066906 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066929 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.068394 master-2 kubenswrapper[4776]: I1011 10:35:44.068369 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.068707 master-2 kubenswrapper[4776]: I1011 10:35:44.068656 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.069312 master-2 kubenswrapper[4776]: I1011 10:35:44.069282 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.072523 master-2 kubenswrapper[4776]: I1011 10:35:44.072410 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.097228 master-2 kubenswrapper[4776]: I1011 10:35:44.097180 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.104628 master-1 kubenswrapper[4771]: I1011 10:35:44.104507 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.104628 master-1 kubenswrapper[4771]: I1011 10:35:44.104577 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.104628 master-1 
kubenswrapper[4771]: I1011 10:35:44.104605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.104628 master-1 kubenswrapper[4771]: I1011 10:35:44.104634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.105182 master-1 kubenswrapper[4771]: I1011 10:35:44.104674 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.105182 master-1 kubenswrapper[4771]: I1011 10:35:44.104711 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.105182 master-1 kubenswrapper[4771]: I1011 10:35:44.104751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqtc\" (UniqueName: \"kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc\") pod 
\"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.105182 master-1 kubenswrapper[4771]: I1011 10:35:44.104791 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmdm\" (UniqueName: \"kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.105182 master-1 kubenswrapper[4771]: I1011 10:35:44.104813 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.106798 master-1 kubenswrapper[4771]: I1011 10:35:44.106703 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.106798 master-1 kubenswrapper[4771]: I1011 10:35:44.106771 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.107777 master-1 kubenswrapper[4771]: I1011 10:35:44.107518 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.108335 master-1 kubenswrapper[4771]: I1011 10:35:44.107877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.108335 master-1 kubenswrapper[4771]: I1011 10:35:44.108161 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.110067 master-1 kubenswrapper[4771]: I1011 10:35:44.110000 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.110067 master-1 kubenswrapper[4771]: I1011 10:35:44.110060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " 
pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.124962 master-1 kubenswrapper[4771]: I1011 10:35:44.124914 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmdm\" (UniqueName: \"kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm\") pod \"route-controller-manager-68b68f45cd-mqn2m\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.125929 master-1 kubenswrapper[4771]: I1011 10:35:44.125889 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqtc\" (UniqueName: \"kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc\") pod \"controller-manager-5cf7cfc4c5-6jg5z\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.266770 master-1 kubenswrapper[4771]: I1011 10:35:44.266595 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:44.291344 master-1 kubenswrapper[4771]: I1011 10:35:44.291219 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:44.294354 master-2 kubenswrapper[4776]: I1011 10:35:44.294301 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.497821 master-1 kubenswrapper[4771]: I1011 10:35:44.496793 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:44.497821 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:44.497821 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:44.497821 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:44.497821 master-1 kubenswrapper[4771]: I1011 10:35:44.496860 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:44.583009 master-1 kubenswrapper[4771]: I1011 10:35:44.582736 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:35:44.586999 master-1 kubenswrapper[4771]: W1011 10:35:44.586934 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89d5fa2_4b2d_47b8_9f43_fbf5942eaff3.slice/crio-fb8e606f605b5a7a5119eb59ac6c30ff451c4fbab3f45cf0454534a92053916c WatchSource:0}: Error finding container fb8e606f605b5a7a5119eb59ac6c30ff451c4fbab3f45cf0454534a92053916c: Status 404 returned error can't find the container with id fb8e606f605b5a7a5119eb59ac6c30ff451c4fbab3f45cf0454534a92053916c Oct 11 10:35:44.716344 master-2 kubenswrapper[4776]: I1011 10:35:44.716287 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:44.721761 master-1 
kubenswrapper[4771]: I1011 10:35:44.721651 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"] Oct 11 10:35:44.724602 master-2 kubenswrapper[4776]: W1011 10:35:44.724552 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d9e8ba_5d12_4d58_8db6_dbbea31c4df1.slice/crio-bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1 WatchSource:0}: Error finding container bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1: Status 404 returned error can't find the container with id bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1 Oct 11 10:35:44.730878 master-1 kubenswrapper[4771]: W1011 10:35:44.730817 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f55273_2711_4de4_a399_5dae9b578f0c.slice/crio-b1d9019c57f2da10f531e4404f9aaba506a3aa469ba7140f3ac23a4a053a6c4e WatchSource:0}: Error finding container b1d9019c57f2da10f531e4404f9aaba506a3aa469ba7140f3ac23a4a053a6c4e: Status 404 returned error can't find the container with id b1d9019c57f2da10f531e4404f9aaba506a3aa469ba7140f3ac23a4a053a6c4e Oct 11 10:35:44.963312 master-2 kubenswrapper[4776]: I1011 10:35:44.963248 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerStarted","Data":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"} Oct 11 10:35:44.963587 master-2 kubenswrapper[4776]: I1011 10:35:44.963532 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:44.964331 master-2 kubenswrapper[4776]: I1011 10:35:44.964303 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerStarted","Data":"bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1"} Oct 11 10:35:44.968745 master-2 kubenswrapper[4776]: I1011 10:35:44.968647 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: I1011 10:35:44.969268 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:44.969715 master-2 kubenswrapper[4776]: I1011 10:35:44.969316 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:44.980793 master-2 kubenswrapper[4776]: I1011 10:35:44.980720 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" podStartSLOduration=3.807172043 podStartE2EDuration="5.980703505s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:41.696022064 +0000 UTC m=+576.480448783" lastFinishedPulling="2025-10-11 10:35:43.869553536 +0000 UTC m=+578.653980245" observedRunningTime="2025-10-11 10:35:44.97827789 +0000 UTC m=+579.762704639" watchObservedRunningTime="2025-10-11 10:35:44.980703505 +0000 UTC 
m=+579.765130214" Oct 11 10:35:45.271861 master-1 kubenswrapper[4771]: I1011 10:35:45.271786 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" event={"ID":"62f55273-2711-4de4-a399-5dae9b578f0c","Type":"ContainerStarted","Data":"b1d9019c57f2da10f531e4404f9aaba506a3aa469ba7140f3ac23a4a053a6c4e"} Oct 11 10:35:45.274198 master-1 kubenswrapper[4771]: I1011 10:35:45.274147 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" event={"ID":"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3","Type":"ContainerStarted","Data":"fb8e606f605b5a7a5119eb59ac6c30ff451c4fbab3f45cf0454534a92053916c"} Oct 11 10:35:45.497066 master-1 kubenswrapper[4771]: I1011 10:35:45.497002 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:45.497066 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:45.497066 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:45.497066 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:45.497066 master-1 kubenswrapper[4771]: I1011 10:35:45.497084 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: I1011 10:35:45.970206 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:45.970285 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:45.970860 master-2 kubenswrapper[4776]: I1011 10:35:45.970305 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:46.497294 master-1 kubenswrapper[4771]: I1011 10:35:46.497210 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:46.497294 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:46.497294 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:46.497294 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:46.498061 master-1 kubenswrapper[4771]: I1011 10:35:46.497299 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:46.700856 master-2 kubenswrapper[4776]: E1011 10:35:46.700814 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:46.701174 master-2 kubenswrapper[4776]: E1011 10:35:46.700906 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. 
No retries permitted until 2025-10-11 10:36:02.700883215 +0000 UTC m=+597.485309974 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: I1011 10:35:46.969309 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: I1011 10:35:46.969396 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:47.496730 master-1 kubenswrapper[4771]: I1011 10:35:47.496665 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:47.496730 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:47.496730 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:47.496730 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:47.497056 master-1 kubenswrapper[4771]: I1011 10:35:47.496758 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:47.794692 master-1 kubenswrapper[4771]: I1011 10:35:47.790171 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-h7gnk_6d20faa4-e5eb-4766-b4f5-30e491d1820c/machine-config-server/0.log" Oct 11 10:35:47.796899 master-2 kubenswrapper[4776]: I1011 10:35:47.796838 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: I1011 10:35:47.969007 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:47.969321 master-2 kubenswrapper[4776]: I1011 10:35:47.969069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:47.984242 master-2 kubenswrapper[4776]: I1011 10:35:47.984197 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerStarted","Data":"15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac"} Oct 11 10:35:47.984695 
master-2 kubenswrapper[4776]: I1011 10:35:47.984636 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:47.990808 master-2 kubenswrapper[4776]: I1011 10:35:47.990785 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:48.007379 master-2 kubenswrapper[4776]: I1011 10:35:48.007258 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" podStartSLOduration=6.394584964 podStartE2EDuration="9.007224289s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:44.727096339 +0000 UTC m=+579.511523048" lastFinishedPulling="2025-10-11 10:35:47.339735664 +0000 UTC m=+582.124162373" observedRunningTime="2025-10-11 10:35:48.002827663 +0000 UTC m=+582.787254372" watchObservedRunningTime="2025-10-11 10:35:48.007224289 +0000 UTC m=+582.791650998" Oct 11 10:35:48.113761 master-1 kubenswrapper[4771]: I1011 10:35:48.112279 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"] Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: I1011 10:35:48.242306 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:48.242384 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:48.243845 master-1 kubenswrapper[4771]: I1011 10:35:48.242400 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" 
podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:48.296207 master-1 kubenswrapper[4771]: I1011 10:35:48.296143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" event={"ID":"62f55273-2711-4de4-a399-5dae9b578f0c","Type":"ContainerStarted","Data":"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3"} Oct 11 10:35:48.296468 master-1 kubenswrapper[4771]: I1011 10:35:48.296227 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" containerName="controller-manager" containerID="cri-o://3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3" gracePeriod=30 Oct 11 10:35:48.296468 master-1 kubenswrapper[4771]: I1011 10:35:48.296386 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:48.297958 master-1 kubenswrapper[4771]: I1011 10:35:48.297807 4771 patch_prober.go:28] interesting pod/controller-manager-5cf7cfc4c5-6jg5z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.129.0.54:8443/healthz\": dial tcp 10.129.0.54:8443: connect: connection refused" start-of-body= Oct 11 10:35:48.297958 master-1 kubenswrapper[4771]: I1011 10:35:48.297874 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.129.0.54:8443/healthz\": dial tcp 10.129.0.54:8443: connect: connection refused" Oct 11 10:35:48.298213 master-1 kubenswrapper[4771]: I1011 10:35:48.298151 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" event={"ID":"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3","Type":"ContainerStarted","Data":"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547"} Oct 11 10:35:48.298603 master-1 kubenswrapper[4771]: I1011 10:35:48.298553 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:48.300092 master-1 kubenswrapper[4771]: I1011 10:35:48.300044 4771 patch_prober.go:28] interesting pod/route-controller-manager-68b68f45cd-mqn2m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.129.0.55:8443/healthz\": dial tcp 10.129.0.55:8443: connect: connection refused" start-of-body= Oct 11 10:35:48.300165 master-1 kubenswrapper[4771]: I1011 10:35:48.300098 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.129.0.55:8443/healthz\": dial tcp 10.129.0.55:8443: connect: connection refused" Oct 11 10:35:48.317939 master-1 kubenswrapper[4771]: I1011 10:35:48.317841 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" podStartSLOduration=5.923964217 podStartE2EDuration="9.317813347s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:44.733467834 +0000 UTC m=+576.707694285" lastFinishedPulling="2025-10-11 10:35:48.127316974 +0000 UTC m=+580.101543415" observedRunningTime="2025-10-11 10:35:48.31655916 +0000 UTC m=+580.290785591" watchObservedRunningTime="2025-10-11 10:35:48.317813347 +0000 UTC m=+580.292039828" Oct 11 10:35:48.343137 
master-1 kubenswrapper[4771]: I1011 10:35:48.343043 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" podStartSLOduration=5.805813347 podStartE2EDuration="9.343024417s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:44.590488035 +0000 UTC m=+576.564714486" lastFinishedPulling="2025-10-11 10:35:48.127699105 +0000 UTC m=+580.101925556" observedRunningTime="2025-10-11 10:35:48.341260205 +0000 UTC m=+580.315486696" watchObservedRunningTime="2025-10-11 10:35:48.343024417 +0000 UTC m=+580.317250848" Oct 11 10:35:48.497947 master-1 kubenswrapper[4771]: I1011 10:35:48.497891 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:48.497947 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:48.497947 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:48.497947 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:48.498241 master-1 kubenswrapper[4771]: I1011 10:35:48.497982 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:48.681048 master-1 kubenswrapper[4771]: I1011 10:35:48.680688 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-5cf7cfc4c5-6jg5z_62f55273-2711-4de4-a399-5dae9b578f0c/controller-manager/0.log" Oct 11 10:35:48.681048 master-1 kubenswrapper[4771]: I1011 10:35:48.680790 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:48.691240 master-1 kubenswrapper[4771]: I1011 10:35:48.691190 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config\") pod \"62f55273-2711-4de4-a399-5dae9b578f0c\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " Oct 11 10:35:48.691240 master-1 kubenswrapper[4771]: I1011 10:35:48.691239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert\") pod \"62f55273-2711-4de4-a399-5dae9b578f0c\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " Oct 11 10:35:48.691409 master-1 kubenswrapper[4771]: I1011 10:35:48.691288 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thqtc\" (UniqueName: \"kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc\") pod \"62f55273-2711-4de4-a399-5dae9b578f0c\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " Oct 11 10:35:48.691409 master-1 kubenswrapper[4771]: I1011 10:35:48.691316 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles\") pod \"62f55273-2711-4de4-a399-5dae9b578f0c\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " Oct 11 10:35:48.691409 master-1 kubenswrapper[4771]: I1011 10:35:48.691339 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca\") pod \"62f55273-2711-4de4-a399-5dae9b578f0c\" (UID: \"62f55273-2711-4de4-a399-5dae9b578f0c\") " Oct 11 10:35:48.692300 master-1 kubenswrapper[4771]: I1011 10:35:48.692273 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca" (OuterVolumeSpecName: "client-ca") pod "62f55273-2711-4de4-a399-5dae9b578f0c" (UID: "62f55273-2711-4de4-a399-5dae9b578f0c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:48.692802 master-1 kubenswrapper[4771]: I1011 10:35:48.692448 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config" (OuterVolumeSpecName: "config") pod "62f55273-2711-4de4-a399-5dae9b578f0c" (UID: "62f55273-2711-4de4-a399-5dae9b578f0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:48.692802 master-1 kubenswrapper[4771]: I1011 10:35:48.692625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "62f55273-2711-4de4-a399-5dae9b578f0c" (UID: "62f55273-2711-4de4-a399-5dae9b578f0c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:48.696417 master-1 kubenswrapper[4771]: I1011 10:35:48.696339 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc" (OuterVolumeSpecName: "kube-api-access-thqtc") pod "62f55273-2711-4de4-a399-5dae9b578f0c" (UID: "62f55273-2711-4de4-a399-5dae9b578f0c"). InnerVolumeSpecName "kube-api-access-thqtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:48.699878 master-1 kubenswrapper[4771]: I1011 10:35:48.699806 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62f55273-2711-4de4-a399-5dae9b578f0c" (UID: "62f55273-2711-4de4-a399-5dae9b578f0c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:48.792758 master-1 kubenswrapper[4771]: I1011 10:35:48.792657 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thqtc\" (UniqueName: \"kubernetes.io/projected/62f55273-2711-4de4-a399-5dae9b578f0c-kube-api-access-thqtc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:48.792758 master-1 kubenswrapper[4771]: I1011 10:35:48.792702 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:48.792758 master-1 kubenswrapper[4771]: I1011 10:35:48.792713 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:48.792758 master-1 kubenswrapper[4771]: I1011 10:35:48.792724 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f55273-2711-4de4-a399-5dae9b578f0c-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:48.792758 master-1 kubenswrapper[4771]: I1011 10:35:48.792733 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62f55273-2711-4de4-a399-5dae9b578f0c-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: I1011 10:35:48.969563 4776 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:48.970331 master-2 kubenswrapper[4776]: I1011 10:35:48.969639 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:49.030175 master-2 kubenswrapper[4776]: I1011 10:35:49.030114 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:35:49.030409 master-2 kubenswrapper[4776]: I1011 10:35:49.030350 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwqr6" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" containerID="cri-o://79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" gracePeriod=2 Oct 11 10:35:49.138978 master-2 kubenswrapper[4776]: I1011 10:35:49.138934 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:49.153829 master-2 kubenswrapper[4776]: I1011 10:35:49.153792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:49.255553 master-1 kubenswrapper[4771]: I1011 10:35:49.254562 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwwz9"] Oct 11 10:35:49.255553 master-1 kubenswrapper[4771]: I1011 
10:35:49.255071 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gwwz9" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="registry-server" containerID="cri-o://f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20" gracePeriod=2 Oct 11 10:35:49.308919 master-1 kubenswrapper[4771]: I1011 10:35:49.308699 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-5cf7cfc4c5-6jg5z_62f55273-2711-4de4-a399-5dae9b578f0c/controller-manager/0.log" Oct 11 10:35:49.308919 master-1 kubenswrapper[4771]: I1011 10:35:49.308820 4771 generic.go:334] "Generic (PLEG): container finished" podID="62f55273-2711-4de4-a399-5dae9b578f0c" containerID="3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3" exitCode=2 Oct 11 10:35:49.309301 master-1 kubenswrapper[4771]: I1011 10:35:49.308985 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" event={"ID":"62f55273-2711-4de4-a399-5dae9b578f0c","Type":"ContainerDied","Data":"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3"} Oct 11 10:35:49.309301 master-1 kubenswrapper[4771]: I1011 10:35:49.309051 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" Oct 11 10:35:49.309301 master-1 kubenswrapper[4771]: I1011 10:35:49.309092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z" event={"ID":"62f55273-2711-4de4-a399-5dae9b578f0c","Type":"ContainerDied","Data":"b1d9019c57f2da10f531e4404f9aaba506a3aa469ba7140f3ac23a4a053a6c4e"} Oct 11 10:35:49.309301 master-1 kubenswrapper[4771]: I1011 10:35:49.309139 4771 scope.go:117] "RemoveContainer" containerID="3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3" Oct 11 10:35:49.315991 master-1 kubenswrapper[4771]: I1011 10:35:49.315907 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:35:49.386915 master-2 kubenswrapper[4776]: I1011 10:35:49.386815 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:49.387174 master-2 kubenswrapper[4776]: I1011 10:35:49.387126 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" containerID="cri-o://6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" gracePeriod=30 Oct 11 10:35:49.400481 master-1 kubenswrapper[4771]: I1011 10:35:49.400440 4771 scope.go:117] "RemoveContainer" containerID="3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3" Oct 11 10:35:49.401089 master-1 kubenswrapper[4771]: E1011 10:35:49.401036 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3\": container with ID starting with 
3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3 not found: ID does not exist" containerID="3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3" Oct 11 10:35:49.401136 master-1 kubenswrapper[4771]: I1011 10:35:49.401098 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3"} err="failed to get container status \"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3\": rpc error: code = NotFound desc = could not find container \"3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3\": container with ID starting with 3bc57acf5e1c9c9a4233c54fddb269b4320393069c0094b92821c7de170f91e3 not found: ID does not exist" Oct 11 10:35:49.454910 master-2 kubenswrapper[4776]: I1011 10:35:49.454441 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"] Oct 11 10:35:49.455534 master-2 kubenswrapper[4776]: I1011 10:35:49.455514 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.461451 master-1 kubenswrapper[4771]: I1011 10:35:49.459496 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"]
Oct 11 10:35:49.461928 master-2 kubenswrapper[4776]: I1011 10:35:49.461873 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5v5km"
Oct 11 10:35:49.466628 master-1 kubenswrapper[4771]: I1011 10:35:49.465912 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z"]
Oct 11 10:35:49.500520 master-1 kubenswrapper[4771]: I1011 10:35:49.500454 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:49.500520 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:49.500520 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:49.500520 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:49.500520 master-1 kubenswrapper[4771]: I1011 10:35:49.500524 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:49.509418 master-2 kubenswrapper[4776]: I1011 10:35:49.509282 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"]
Oct 11 10:35:49.541886 master-2 kubenswrapper[4776]: I1011 10:35:49.541772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.542588 master-2 kubenswrapper[4776]: I1011 10:35:49.542270 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.542588 master-2 kubenswrapper[4776]: I1011 10:35:49.542359 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.579731 master-2 kubenswrapper[4776]: I1011 10:35:49.579209 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6"
Oct 11 10:35:49.644412 master-2 kubenswrapper[4776]: I1011 10:35:49.644320 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") "
Oct 11 10:35:49.644412 master-2 kubenswrapper[4776]: I1011 10:35:49.644392 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") "
Oct 11 10:35:49.644704 master-2 kubenswrapper[4776]: I1011 10:35:49.644448 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") "
Oct 11 10:35:49.644704 master-2 kubenswrapper[4776]: I1011 10:35:49.644625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.644842 master-2 kubenswrapper[4776]: I1011 10:35:49.644811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.644842 master-2 kubenswrapper[4776]: I1011 10:35:49.644838 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.645316 master-2 kubenswrapper[4776]: I1011 10:35:49.645291 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.646397 master-2 kubenswrapper[4776]: I1011 10:35:49.646023 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.647361 master-2 kubenswrapper[4776]: I1011 10:35:49.647308 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities" (OuterVolumeSpecName: "utilities") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:35:49.647800 master-1 kubenswrapper[4771]: I1011 10:35:49.647742 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t6wtm"]
Oct 11 10:35:49.648549 master-1 kubenswrapper[4771]: E1011 10:35:49.648533 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" containerName="controller-manager"
Oct 11 10:35:49.648652 master-1 kubenswrapper[4771]: I1011 10:35:49.648641 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" containerName="controller-manager"
Oct 11 10:35:49.648807 master-1 kubenswrapper[4771]: I1011 10:35:49.648796 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" containerName="controller-manager"
Oct 11 10:35:49.650250 master-1 kubenswrapper[4771]: I1011 10:35:49.650232 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.654167 master-1 kubenswrapper[4771]: I1011 10:35:49.654113 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sp4kx"
Oct 11 10:35:49.658793 master-2 kubenswrapper[4776]: I1011 10:35:49.658450 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8" (OuterVolumeSpecName: "kube-api-access-7tlh8") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). InnerVolumeSpecName "kube-api-access-7tlh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:35:49.660079 master-1 kubenswrapper[4771]: I1011 10:35:49.660029 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6wtm"]
Oct 11 10:35:49.670698 master-2 kubenswrapper[4776]: I1011 10:35:49.670628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.703543 master-1 kubenswrapper[4771]: I1011 10:35:49.703468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-catalog-content\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.703869 master-1 kubenswrapper[4771]: I1011 10:35:49.703568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-utilities\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.703869 master-1 kubenswrapper[4771]: I1011 10:35:49.703699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbwc\" (UniqueName: \"kubernetes.io/projected/aebd88b2-f116-4ade-be8e-c293ccac533f-kube-api-access-7zbwc\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.722547 master-1 kubenswrapper[4771]: I1011 10:35:49.716261 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwz9"
Oct 11 10:35:49.724548 master-2 kubenswrapper[4776]: I1011 10:35:49.724476 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747087 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747133 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747150 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.778671 master-2 kubenswrapper[4776]: I1011 10:35:49.778628 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"
Oct 11 10:35:49.804813 master-1 kubenswrapper[4771]: I1011 10:35:49.804741 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content\") pod \"0b7d1d62-0062-47cd-a963-63893777198e\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") "
Oct 11 10:35:49.804813 master-1 kubenswrapper[4771]: I1011 10:35:49.804803 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6ptz\" (UniqueName: \"kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz\") pod \"0b7d1d62-0062-47cd-a963-63893777198e\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") "
Oct 11 10:35:49.805148 master-1 kubenswrapper[4771]: I1011 10:35:49.804903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-utilities\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.805148 master-1 kubenswrapper[4771]: I1011 10:35:49.804992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbwc\" (UniqueName: \"kubernetes.io/projected/aebd88b2-f116-4ade-be8e-c293ccac533f-kube-api-access-7zbwc\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.805148 master-1 kubenswrapper[4771]: I1011 10:35:49.805033 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-catalog-content\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.805678 master-1 kubenswrapper[4771]: I1011 10:35:49.805629 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-catalog-content\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.805862 master-1 kubenswrapper[4771]: I1011 10:35:49.805793 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aebd88b2-f116-4ade-be8e-c293ccac533f-utilities\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.807760 master-1 kubenswrapper[4771]: I1011 10:35:49.807700 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz" (OuterVolumeSpecName: "kube-api-access-r6ptz") pod "0b7d1d62-0062-47cd-a963-63893777198e" (UID: "0b7d1d62-0062-47cd-a963-63893777198e"). InnerVolumeSpecName "kube-api-access-r6ptz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:35:49.832825 master-1 kubenswrapper[4771]: I1011 10:35:49.832732 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbwc\" (UniqueName: \"kubernetes.io/projected/aebd88b2-f116-4ade-be8e-c293ccac533f-kube-api-access-7zbwc\") pod \"community-operators-t6wtm\" (UID: \"aebd88b2-f116-4ade-be8e-c293ccac533f\") " pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:49.847796 master-2 kubenswrapper[4776]: I1011 10:35:49.847735 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") "
Oct 11 10:35:49.847796 master-2 kubenswrapper[4776]: I1011 10:35:49.847781 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") "
Oct 11 10:35:49.848048 master-2 kubenswrapper[4776]: I1011 10:35:49.847834 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") "
Oct 11 10:35:49.848048 master-2 kubenswrapper[4776]: I1011 10:35:49.847887 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") "
Oct 11 10:35:49.848542 master-2 kubenswrapper[4776]: I1011 10:35:49.848512 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:35:49.848664 master-2 kubenswrapper[4776]: I1011 10:35:49.848599 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config" (OuterVolumeSpecName: "config") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:35:49.850603 master-2 kubenswrapper[4776]: I1011 10:35:49.850516 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:35:49.851747 master-2 kubenswrapper[4776]: I1011 10:35:49.851700 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn" (OuterVolumeSpecName: "kube-api-access-ccpmn") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "kube-api-access-ccpmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:35:49.874718 master-1 kubenswrapper[4771]: I1011 10:35:49.874614 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b7d1d62-0062-47cd-a963-63893777198e" (UID: "0b7d1d62-0062-47cd-a963-63893777198e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:35:49.901082 master-2 kubenswrapper[4776]: I1011 10:35:49.901007 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:49.905827 master-1 kubenswrapper[4771]: I1011 10:35:49.905780 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities\") pod \"0b7d1d62-0062-47cd-a963-63893777198e\" (UID: \"0b7d1d62-0062-47cd-a963-63893777198e\") "
Oct 11 10:35:49.906057 master-1 kubenswrapper[4771]: I1011 10:35:49.906014 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-catalog-content\") on node \"master-1\" DevicePath \"\""
Oct 11 10:35:49.906057 master-1 kubenswrapper[4771]: I1011 10:35:49.906041 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6ptz\" (UniqueName: \"kubernetes.io/projected/0b7d1d62-0062-47cd-a963-63893777198e-kube-api-access-r6ptz\") on node \"master-1\" DevicePath \"\""
Oct 11 10:35:49.907074 master-1 kubenswrapper[4771]: I1011 10:35:49.907013 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities" (OuterVolumeSpecName: "utilities") pod "0b7d1d62-0062-47cd-a963-63893777198e" (UID: "0b7d1d62-0062-47cd-a963-63893777198e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:35:49.944542 master-1 kubenswrapper[4771]: I1011 10:35:49.944462 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"]
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: E1011 10:35:49.944674 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="registry-server"
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: I1011 10:35:49.944689 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="registry-server"
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: E1011 10:35:49.944701 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="extract-utilities"
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: I1011 10:35:49.944709 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="extract-utilities"
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: E1011 10:35:49.944723 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="extract-content"
Oct 11 10:35:49.944784 master-1 kubenswrapper[4771]: I1011 10:35:49.944733 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="extract-content"
Oct 11 10:35:49.945165 master-1 kubenswrapper[4771]: I1011 10:35:49.944832 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7d1d62-0062-47cd-a963-63893777198e" containerName="registry-server"
Oct 11 10:35:49.945339 master-1 kubenswrapper[4771]: I1011 10:35:49.945285 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:49.948388 master-1 kubenswrapper[4771]: I1011 10:35:49.948302 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 11 10:35:49.948592 master-1 kubenswrapper[4771]: I1011 10:35:49.948568 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 11 10:35:49.948700 master-1 kubenswrapper[4771]: I1011 10:35:49.948633 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 11 10:35:49.948799 master-1 kubenswrapper[4771]: I1011 10:35:49.948701 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949532 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949572 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949587 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949599 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:35:49.949671 master-1 kubenswrapper[4771]: I1011 10:35:49.949620 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 11 10:35:49.955965 master-1 kubenswrapper[4771]: I1011 10:35:49.955903 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 11 10:35:49.957861 master-1 kubenswrapper[4771]: I1011 10:35:49.957288 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"]
Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: I1011 10:35:49.969946 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:49.970563 master-2 kubenswrapper[4776]: I1011 10:35:49.970016 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:50.002172 master-2 kubenswrapper[4776]: I1011 10:35:50.002114 4776 generic.go:334] "Generic (PLEG): container finished" podID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" exitCode=0
Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002189 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerDied","Data":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"}
Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerDied","Data":"af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09"}
Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002256 4776 scope.go:117] "RemoveContainer" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"
Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002354 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"
Oct 11 10:35:50.007769 master-1 kubenswrapper[4771]: I1011 10:35:50.007528 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlwv\" (UniqueName: \"kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.007769 master-1 kubenswrapper[4771]: I1011 10:35:50.007647 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.007769 master-1 kubenswrapper[4771]: I1011 10:35:50.007738 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.008256 master-1 kubenswrapper[4771]: I1011 10:35:50.007785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.008256 master-1 kubenswrapper[4771]: I1011 10:35:50.007896 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.008256 master-1 kubenswrapper[4771]: I1011 10:35:50.007960 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7d1d62-0062-47cd-a963-63893777198e-utilities\") on node \"master-1\" DevicePath \"\""
Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010522 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" exitCode=0
Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010590 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6"
Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010587 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"}
Oct 11 10:35:50.010875 master-2 kubenswrapper[4776]: I1011 10:35:50.010753 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5"}
Oct 11 10:35:50.012943 master-1 kubenswrapper[4771]: I1011 10:35:50.012881 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.035902 4776 scope.go:117] "RemoveContainer" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"
Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: E1011 10:35:50.036888 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": container with ID starting with 6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49 not found: ID does not exist" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"
Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.036927 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"} err="failed to get container status \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": rpc error: code = NotFound desc = could not find container \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": container with ID starting with 6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49 not found: ID does not exist"
Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.036953 4776 scope.go:117] "RemoveContainer" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"
Oct 11 10:35:50.040638 master-2 kubenswrapper[4776]: I1011 10:35:50.040604 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"]
Oct 11 10:35:50.048474 master-2 kubenswrapper[4776]: I1011 10:35:50.048386 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"]
Oct 11 10:35:50.067711 master-2 kubenswrapper[4776]: I1011 10:35:50.063893 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"]
Oct 11 10:35:50.068016 master-2 kubenswrapper[4776]: I1011 10:35:50.067886 4776 scope.go:117] "RemoveContainer" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"
Oct 11 10:35:50.073061 master-2 kubenswrapper[4776]: I1011 10:35:50.073021 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" path="/var/lib/kubelet/pods/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7/volumes"
Oct 11 10:35:50.073851 master-2 kubenswrapper[4776]: I1011 10:35:50.073816 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"]
Oct 11 10:35:50.089313 master-2 kubenswrapper[4776]: I1011 10:35:50.086861 4776 scope.go:117] "RemoveContainer" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.108933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.108991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.109053 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.109082 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlwv\" (UniqueName: \"kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.109106 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.110821 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.111014 master-1 kubenswrapper[4771]: I1011 10:35:50.110946 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.112782 master-1 kubenswrapper[4771]: I1011 10:35:50.112736 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.114119 master-2 kubenswrapper[4776]: I1011 10:35:50.114079 4776 scope.go:117] "RemoveContainer" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"
Oct 11 10:35:50.114438 master-1 kubenswrapper[4771]: I1011 10:35:50.114381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68"
Oct 11 10:35:50.114587 master-2 kubenswrapper[4776]: E1011 10:35:50.114562 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": container with ID starting with 79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a not found: ID does not exist" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"
Oct 11 10:35:50.114692 master-2 kubenswrapper[4776]: I1011 10:35:50.114636 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"} err="failed to get container status \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": rpc error: code = NotFound desc = could not find container \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": container with ID starting with 79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a not found: ID does not exist"
Oct 11 10:35:50.114692 master-2 kubenswrapper[4776]: I1011 10:35:50.114656 4776 scope.go:117] "RemoveContainer" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"
Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: E1011 10:35:50.117006 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": container with ID starting with 2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab not found: ID does not exist" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"
Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: I1011 10:35:50.117066 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} err="failed to get container status
\"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": rpc error: code = NotFound desc = could not find container \"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": container with ID starting with 2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab not found: ID does not exist" Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: I1011 10:35:50.117081 4776 scope.go:117] "RemoveContainer" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" Oct 11 10:35:50.122827 master-2 kubenswrapper[4776]: E1011 10:35:50.122362 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": container with ID starting with 2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82 not found: ID does not exist" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" Oct 11 10:35:50.122827 master-2 kubenswrapper[4776]: I1011 10:35:50.122436 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82"} err="failed to get container status \"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": rpc error: code = NotFound desc = could not find container \"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": container with ID starting with 2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82 not found: ID does not exist" Oct 11 10:35:50.134472 master-1 kubenswrapper[4771]: I1011 10:35:50.134170 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlwv\" (UniqueName: \"kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv\") pod \"controller-manager-77c7855cb4-qkp68\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " 
pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:35:50.154933 master-1 kubenswrapper[4771]: I1011 10:35:50.154861 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 11 10:35:50.164912 master-1 kubenswrapper[4771]: I1011 10:35:50.163990 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.165643 master-1 kubenswrapper[4771]: I1011 10:35:50.165590 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 11 10:35:50.166863 master-1 kubenswrapper[4771]: I1011 10:35:50.166840 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: I1011 10:35:50.246472 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:50.246550 master-1 kubenswrapper[4771]: I1011 10:35:50.246537 4771 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:50.272564 master-1 kubenswrapper[4771]: I1011 10:35:50.272437 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:35:50.308732 master-2 kubenswrapper[4776]: I1011 10:35:50.308687 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"] Oct 11 10:35:50.311881 master-1 kubenswrapper[4771]: I1011 10:35:50.311788 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.312110 master-1 kubenswrapper[4771]: I1011 10:35:50.312048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.312208 master-1 kubenswrapper[4771]: I1011 10:35:50.312163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.312913 master-2 kubenswrapper[4776]: W1011 10:35:50.312869 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afe0068_3c97_4916_ba53_53f2841a95b0.slice/crio-6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069 WatchSource:0}: Error finding container 6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069: Status 404 returned error can't find the container with id 6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069 Oct 11 10:35:50.318574 master-1 kubenswrapper[4771]: I1011 10:35:50.318515 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b7d1d62-0062-47cd-a963-63893777198e" containerID="f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20" exitCode=0 Oct 11 10:35:50.318792 master-1 kubenswrapper[4771]: I1011 10:35:50.318632 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerDied","Data":"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20"} Oct 11 10:35:50.318897 master-1 kubenswrapper[4771]: I1011 10:35:50.318884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwz9" event={"ID":"0b7d1d62-0062-47cd-a963-63893777198e","Type":"ContainerDied","Data":"94df55f9d42e35f3eb12d9d840811113835d067c33b17a8f7670d61e212cd7f3"} Oct 11 10:35:50.319215 master-1 kubenswrapper[4771]: I1011 10:35:50.319016 4771 scope.go:117] "RemoveContainer" containerID="f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20" Oct 11 10:35:50.319365 master-1 kubenswrapper[4771]: I1011 10:35:50.319230 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwwz9" Oct 11 10:35:50.340413 master-1 kubenswrapper[4771]: I1011 10:35:50.340332 4771 scope.go:117] "RemoveContainer" containerID="d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80" Oct 11 10:35:50.362347 master-1 kubenswrapper[4771]: I1011 10:35:50.362273 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwwz9"] Oct 11 10:35:50.366682 master-1 kubenswrapper[4771]: I1011 10:35:50.366631 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gwwz9"] Oct 11 10:35:50.367378 master-1 kubenswrapper[4771]: I1011 10:35:50.367335 4771 scope.go:117] "RemoveContainer" containerID="87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865" Oct 11 10:35:50.438007 master-1 kubenswrapper[4771]: I1011 10:35:50.437947 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.438208 master-1 kubenswrapper[4771]: I1011 10:35:50.438022 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.438208 master-1 kubenswrapper[4771]: I1011 10:35:50.438105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " 
pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.438713 master-1 kubenswrapper[4771]: I1011 10:35:50.438673 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.438884 master-1 kubenswrapper[4771]: I1011 10:35:50.438859 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.445263 master-1 kubenswrapper[4771]: I1011 10:35:50.445170 4771 scope.go:117] "RemoveContainer" containerID="f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20" Oct 11 10:35:50.447742 master-1 kubenswrapper[4771]: E1011 10:35:50.447674 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20\": container with ID starting with f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20 not found: ID does not exist" containerID="f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20" Oct 11 10:35:50.447834 master-1 kubenswrapper[4771]: I1011 10:35:50.447748 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20"} err="failed to get container status \"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20\": rpc error: code = NotFound desc = could not find container \"f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20\": container with ID starting with 
f22b32b9efc5550c030d0dab72ccfb608faf225ce28a60e3581e98388d7a1f20 not found: ID does not exist" Oct 11 10:35:50.447834 master-1 kubenswrapper[4771]: I1011 10:35:50.447789 4771 scope.go:117] "RemoveContainer" containerID="d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80" Oct 11 10:35:50.448678 master-1 kubenswrapper[4771]: E1011 10:35:50.448636 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80\": container with ID starting with d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80 not found: ID does not exist" containerID="d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80" Oct 11 10:35:50.448678 master-1 kubenswrapper[4771]: I1011 10:35:50.448666 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80"} err="failed to get container status \"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80\": rpc error: code = NotFound desc = could not find container \"d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80\": container with ID starting with d9ec3c198a5bb93b32bff398f90b94c113b4f2ba904501149fb967a49c67ec80 not found: ID does not exist" Oct 11 10:35:50.448788 master-1 kubenswrapper[4771]: I1011 10:35:50.448684 4771 scope.go:117] "RemoveContainer" containerID="87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865" Oct 11 10:35:50.449099 master-1 kubenswrapper[4771]: E1011 10:35:50.449063 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865\": container with ID starting with 87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865 not found: ID does not exist" 
containerID="87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865" Oct 11 10:35:50.449099 master-1 kubenswrapper[4771]: I1011 10:35:50.449089 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865"} err="failed to get container status \"87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865\": rpc error: code = NotFound desc = could not find container \"87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865\": container with ID starting with 87c2f3c9c19accca6371d7e77d4bcd0cd514c07776fd9f98f32125516b8cf865 not found: ID does not exist" Oct 11 10:35:50.452675 master-1 kubenswrapper[4771]: I1011 10:35:50.452628 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7d1d62-0062-47cd-a963-63893777198e" path="/var/lib/kubelet/pods/0b7d1d62-0062-47cd-a963-63893777198e/volumes" Oct 11 10:35:50.454479 master-1 kubenswrapper[4771]: I1011 10:35:50.454342 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f55273-2711-4de4-a399-5dae9b578f0c" path="/var/lib/kubelet/pods/62f55273-2711-4de4-a399-5dae9b578f0c/volumes" Oct 11 10:35:50.462990 master-1 kubenswrapper[4771]: I1011 10:35:50.462938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access\") pod \"installer-6-master-1\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.474200 master-1 kubenswrapper[4771]: I1011 10:35:50.474137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6wtm"] Oct 11 10:35:50.484912 master-1 kubenswrapper[4771]: W1011 10:35:50.484849 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaebd88b2_f116_4ade_be8e_c293ccac533f.slice/crio-a0fc78bc0f59c467ca2c65c3c36d34a778b7360404a2f67daeae8fcd41742859 WatchSource:0}: Error finding container a0fc78bc0f59c467ca2c65c3c36d34a778b7360404a2f67daeae8fcd41742859: Status 404 returned error can't find the container with id a0fc78bc0f59c467ca2c65c3c36d34a778b7360404a2f67daeae8fcd41742859 Oct 11 10:35:50.495984 master-1 kubenswrapper[4771]: I1011 10:35:50.495944 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:35:50.497091 master-1 kubenswrapper[4771]: I1011 10:35:50.496896 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:50.497091 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:50.497091 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:50.497091 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:50.497091 master-1 kubenswrapper[4771]: I1011 10:35:50.496928 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:50.774102 master-1 kubenswrapper[4771]: I1011 10:35:50.774055 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"] Oct 11 10:35:50.776320 master-1 kubenswrapper[4771]: W1011 10:35:50.776258 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode23d9d43_9980_4c16_91c4_9fc0bca161e6.slice/crio-5577977e3fcec143fb9fe4819b109c252e41520252e8f2be4cdb67371fc4b2fd WatchSource:0}: Error finding container 5577977e3fcec143fb9fe4819b109c252e41520252e8f2be4cdb67371fc4b2fd: Status 404 returned error can't find the container with id 5577977e3fcec143fb9fe4819b109c252e41520252e8f2be4cdb67371fc4b2fd Oct 11 10:35:50.910708 master-1 kubenswrapper[4771]: I1011 10:35:50.910618 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 11 10:35:50.915516 master-1 kubenswrapper[4771]: W1011 10:35:50.915466 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod67e39e90_67d5_40f4_ad76_1b32adf359ed.slice/crio-c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca WatchSource:0}: Error finding container c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca: Status 404 returned error can't find the container with id c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca Oct 11 10:35:50.944560 master-2 kubenswrapper[4776]: I1011 10:35:50.944503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944726 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-content" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944742 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-content" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944754 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.944913 master-2 
kubenswrapper[4776]: I1011 10:35:50.944760 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944772 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944779 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944791 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-utilities" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944797 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-utilities" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944893 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944904 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.945361 master-2 kubenswrapper[4776]: I1011 10:35:50.945339 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.947666 master-2 kubenswrapper[4776]: I1011 10:35:50.947639 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:35:50.948104 master-2 kubenswrapper[4776]: I1011 10:35:50.948073 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:35:50.949028 master-2 kubenswrapper[4776]: I1011 10:35:50.949003 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:35:50.949113 master-2 kubenswrapper[4776]: I1011 10:35:50.949033 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:50.949113 master-2 kubenswrapper[4776]: I1011 10:35:50.949060 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:35:50.949382 master-2 kubenswrapper[4776]: I1011 10:35:50.949357 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:50.955409 master-2 kubenswrapper[4776]: I1011 10:35:50.955320 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:50.965489 master-2 kubenswrapper[4776]: I1011 10:35:50.965437 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 10:35:50.965505 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 10:35:50.965567 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 10:35:50.965610 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: I1011 10:35:50.973270 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:50.973343 master-2 
kubenswrapper[4776]: healthz check failed Oct 11 10:35:50.973813 master-2 kubenswrapper[4776]: I1011 10:35:50.973344 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:51.017265 master-2 kubenswrapper[4776]: I1011 10:35:51.017218 4776 generic.go:334] "Generic (PLEG): container finished" podID="1afe0068-3c97-4916-ba53-53f2841a95b0" containerID="9df0a820a473c70d90d0917b469efe19d0dee775dca56c297c1256c405278716" exitCode=0 Oct 11 10:35:51.017600 master-2 kubenswrapper[4776]: I1011 10:35:51.017539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerDied","Data":"9df0a820a473c70d90d0917b469efe19d0dee775dca56c297c1256c405278716"} Oct 11 10:35:51.017741 master-2 kubenswrapper[4776]: I1011 10:35:51.017722 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069"} Oct 11 10:35:51.067015 master-2 kubenswrapper[4776]: I1011 10:35:51.066972 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067258 master-2 kubenswrapper[4776]: I1011 10:35:51.067243 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067401 master-2 kubenswrapper[4776]: I1011 10:35:51.067384 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067520 master-2 kubenswrapper[4776]: I1011 10:35:51.067504 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.068607 master-2 kubenswrapper[4776]: I1011 10:35:51.068576 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.068799 master-2 kubenswrapper[4776]: I1011 10:35:51.068760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.070791 master-2 kubenswrapper[4776]: I1011 10:35:51.070759 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.084181 master-2 kubenswrapper[4776]: I1011 10:35:51.084130 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.267152 master-2 kubenswrapper[4776]: I1011 10:35:51.267031 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.332264 master-1 kubenswrapper[4771]: I1011 10:35:51.332089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" event={"ID":"e23d9d43-9980-4c16-91c4-9fc0bca161e6","Type":"ContainerStarted","Data":"c570237a7e93abdb8d6cb4489a86eb34cb5e25db0de47a00c9bf05de3a2ba3c4"} Oct 11 10:35:51.332264 master-1 kubenswrapper[4771]: I1011 10:35:51.332158 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" event={"ID":"e23d9d43-9980-4c16-91c4-9fc0bca161e6","Type":"ContainerStarted","Data":"5577977e3fcec143fb9fe4819b109c252e41520252e8f2be4cdb67371fc4b2fd"} Oct 11 10:35:51.332869 master-1 kubenswrapper[4771]: I1011 10:35:51.332511 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:35:51.334996 master-1 kubenswrapper[4771]: I1011 10:35:51.334919 4771 generic.go:334] "Generic (PLEG): container finished" podID="aebd88b2-f116-4ade-be8e-c293ccac533f" containerID="ab992109a756b222e67d346b2be7bdaea651f492c702f39749fdc53e167dd28f" exitCode=0 Oct 11 10:35:51.335064 master-1 kubenswrapper[4771]: I1011 10:35:51.334991 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6wtm" event={"ID":"aebd88b2-f116-4ade-be8e-c293ccac533f","Type":"ContainerDied","Data":"ab992109a756b222e67d346b2be7bdaea651f492c702f39749fdc53e167dd28f"} Oct 11 10:35:51.335064 master-1 kubenswrapper[4771]: I1011 10:35:51.335058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6wtm" event={"ID":"aebd88b2-f116-4ade-be8e-c293ccac533f","Type":"ContainerStarted","Data":"a0fc78bc0f59c467ca2c65c3c36d34a778b7360404a2f67daeae8fcd41742859"} Oct 11 10:35:51.338058 master-1 
kubenswrapper[4771]: I1011 10:35:51.338022 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:35:51.339567 master-1 kubenswrapper[4771]: I1011 10:35:51.339530 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"67e39e90-67d5-40f4-ad76-1b32adf359ed","Type":"ContainerStarted","Data":"4ac39222fba40ff7cbe78740b5c6cfd319b2ad66eef840556f4373378718527a"} Oct 11 10:35:51.339617 master-1 kubenswrapper[4771]: I1011 10:35:51.339568 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"67e39e90-67d5-40f4-ad76-1b32adf359ed","Type":"ContainerStarted","Data":"c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca"} Oct 11 10:35:51.352044 master-1 kubenswrapper[4771]: I1011 10:35:51.351808 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" podStartSLOduration=3.35177216 podStartE2EDuration="3.35177216s" podCreationTimestamp="2025-10-11 10:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:35:51.350974587 +0000 UTC m=+583.325201038" watchObservedRunningTime="2025-10-11 10:35:51.35177216 +0000 UTC m=+583.325998641" Oct 11 10:35:51.372288 master-1 kubenswrapper[4771]: I1011 10:35:51.372185 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-1" podStartSLOduration=1.372149998 podStartE2EDuration="1.372149998s" podCreationTimestamp="2025-10-11 10:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:35:51.367805851 +0000 UTC m=+583.342032312" watchObservedRunningTime="2025-10-11 
10:35:51.372149998 +0000 UTC m=+583.346376469" Oct 11 10:35:51.497720 master-1 kubenswrapper[4771]: I1011 10:35:51.497618 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:51.497720 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:51.497720 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:51.497720 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:51.497720 master-1 kubenswrapper[4771]: I1011 10:35:51.497723 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:51.649314 master-2 kubenswrapper[4776]: I1011 10:35:51.649272 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:51.652896 master-2 kubenswrapper[4776]: W1011 10:35:51.652853 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97bde30f_16ad_44f5_ac26_9f0ba5ba74f5.slice/crio-417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b WatchSource:0}: Error finding container 417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b: Status 404 returned error can't find the container with id 417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b Oct 11 10:35:51.848459 master-1 kubenswrapper[4771]: I1011 10:35:51.848168 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:35:51.848727 master-1 kubenswrapper[4771]: I1011 10:35:51.848517 4771 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xkrc6" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="registry-server" containerID="cri-o://9f359af209588aa409904f71581bb63e20e019ac6f684b2bb1874bdc33d16458" gracePeriod=2 Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: I1011 10:35:51.969701 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: I1011 10:35:51.969762 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:52.026862 master-2 kubenswrapper[4776]: I1011 10:35:52.026813 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8"} Oct 11 10:35:52.030910 master-2 kubenswrapper[4776]: I1011 10:35:52.030867 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerStarted","Data":"9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2"} Oct 11 10:35:52.030910 master-2 kubenswrapper[4776]: I1011 10:35:52.030917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerStarted","Data":"417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b"} Oct 11 10:35:52.031186 master-2 kubenswrapper[4776]: I1011 10:35:52.031142 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:52.032549 master-1 kubenswrapper[4771]: I1011 10:35:52.032471 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:35:52.032979 master-1 kubenswrapper[4771]: I1011 10:35:52.032918 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8tm6" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="registry-server" containerID="cri-o://c5ddefdc367347ae7e3aa6121d147be1b4ebca7be06e0180a8a6603ea9ef59cd" gracePeriod=2 Oct 11 10:35:52.059170 master-2 kubenswrapper[4776]: I1011 10:35:52.059111 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" podStartSLOduration=3.059093097 podStartE2EDuration="3.059093097s" podCreationTimestamp="2025-10-11 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:35:52.057992578 +0000 UTC m=+586.842419287" watchObservedRunningTime="2025-10-11 10:35:52.059093097 +0000 UTC m=+586.843519806" Oct 11 10:35:52.065992 master-2 kubenswrapper[4776]: I1011 10:35:52.065951 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" path="/var/lib/kubelet/pods/444ea5b2-c9dc-4685-9f66-2273b30d9045/volumes" Oct 11 10:35:52.243993 master-1 kubenswrapper[4771]: I1011 10:35:52.243910 4771 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ncpc"] Oct 11 10:35:52.244999 master-1 kubenswrapper[4771]: I1011 10:35:52.244944 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.248256 master-1 kubenswrapper[4771]: I1011 10:35:52.247928 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-mbhtz" Oct 11 10:35:52.261252 master-1 kubenswrapper[4771]: I1011 10:35:52.261186 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ncpc"] Oct 11 10:35:52.267453 master-1 kubenswrapper[4771]: I1011 10:35:52.265344 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-catalog-content\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.267453 master-1 kubenswrapper[4771]: I1011 10:35:52.265533 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-utilities\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.267453 master-1 kubenswrapper[4771]: I1011 10:35:52.265577 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gs9\" (UniqueName: \"kubernetes.io/projected/91e987bb-eae2-4f14-809d-1b1141882c7d-kube-api-access-z8gs9\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.349078 
master-1 kubenswrapper[4771]: I1011 10:35:52.349013 4771 generic.go:334] "Generic (PLEG): container finished" podID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerID="9f359af209588aa409904f71581bb63e20e019ac6f684b2bb1874bdc33d16458" exitCode=0 Oct 11 10:35:52.349892 master-1 kubenswrapper[4771]: I1011 10:35:52.349096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerDied","Data":"9f359af209588aa409904f71581bb63e20e019ac6f684b2bb1874bdc33d16458"} Oct 11 10:35:52.352539 master-1 kubenswrapper[4771]: I1011 10:35:52.352419 4771 generic.go:334] "Generic (PLEG): container finished" podID="38131fcf-d407-4ba3-b7bf-471586bab887" containerID="c5ddefdc367347ae7e3aa6121d147be1b4ebca7be06e0180a8a6603ea9ef59cd" exitCode=0 Oct 11 10:35:52.352670 master-1 kubenswrapper[4771]: I1011 10:35:52.352519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerDied","Data":"c5ddefdc367347ae7e3aa6121d147be1b4ebca7be06e0180a8a6603ea9ef59cd"} Oct 11 10:35:52.354852 master-1 kubenswrapper[4771]: I1011 10:35:52.354770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6wtm" event={"ID":"aebd88b2-f116-4ade-be8e-c293ccac533f","Type":"ContainerStarted","Data":"909e21621c80ae096c10bbcb92430889443aedcf5f0b3f51e4c28a3ef5eaaddc"} Oct 11 10:35:52.366092 master-1 kubenswrapper[4771]: I1011 10:35:52.366045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-catalog-content\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.367489 master-1 kubenswrapper[4771]: I1011 10:35:52.366238 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-utilities\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.367489 master-1 kubenswrapper[4771]: I1011 10:35:52.366271 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8gs9\" (UniqueName: \"kubernetes.io/projected/91e987bb-eae2-4f14-809d-1b1141882c7d-kube-api-access-z8gs9\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.367489 master-1 kubenswrapper[4771]: I1011 10:35:52.367156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-catalog-content\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.367489 master-1 kubenswrapper[4771]: I1011 10:35:52.367380 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e987bb-eae2-4f14-809d-1b1141882c7d-utilities\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.387263 master-1 kubenswrapper[4771]: I1011 10:35:52.387193 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8gs9\" (UniqueName: \"kubernetes.io/projected/91e987bb-eae2-4f14-809d-1b1141882c7d-kube-api-access-z8gs9\") pod \"redhat-marketplace-9ncpc\" (UID: \"91e987bb-eae2-4f14-809d-1b1141882c7d\") " pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.406915 master-1 kubenswrapper[4771]: I1011 
10:35:52.406788 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:35:52.408419 master-2 kubenswrapper[4776]: I1011 10:35:52.408360 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:52.426154 master-1 kubenswrapper[4771]: I1011 10:35:52.426083 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:35:52.461014 master-1 kubenswrapper[4771]: I1011 10:35:52.460949 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-plxkp"] Oct 11 10:35:52.461310 master-1 kubenswrapper[4771]: E1011 10:35:52.461278 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="extract-utilities" Oct 11 10:35:52.461310 master-1 kubenswrapper[4771]: I1011 10:35:52.461303 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="extract-utilities" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: E1011 10:35:52.461316 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="extract-utilities" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: I1011 10:35:52.461327 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="extract-utilities" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: E1011 10:35:52.461341 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="extract-content" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: I1011 10:35:52.461378 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="extract-content" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: E1011 10:35:52.461395 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="registry-server" Oct 11 10:35:52.461412 master-1 kubenswrapper[4771]: I1011 10:35:52.461404 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="registry-server" Oct 11 10:35:52.461576 master-1 kubenswrapper[4771]: E1011 10:35:52.461420 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="extract-content" Oct 11 10:35:52.461576 master-1 kubenswrapper[4771]: I1011 10:35:52.461431 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="extract-content" Oct 11 10:35:52.461576 master-1 kubenswrapper[4771]: E1011 10:35:52.461450 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="registry-server" Oct 11 10:35:52.461576 master-1 kubenswrapper[4771]: I1011 10:35:52.461459 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="registry-server" Oct 11 10:35:52.461679 master-1 kubenswrapper[4771]: I1011 10:35:52.461614 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" containerName="registry-server" Oct 11 10:35:52.461679 master-1 kubenswrapper[4771]: I1011 10:35:52.461638 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" containerName="registry-server" Oct 11 10:35:52.462625 master-1 kubenswrapper[4771]: I1011 10:35:52.462590 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plxkp"] Oct 11 10:35:52.462789 master-1 
kubenswrapper[4771]: I1011 10:35:52.462735 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.465655 master-1 kubenswrapper[4771]: I1011 10:35:52.465472 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-bpf7n" Oct 11 10:35:52.467651 master-1 kubenswrapper[4771]: I1011 10:35:52.467603 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-catalog-content\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.467739 master-1 kubenswrapper[4771]: I1011 10:35:52.467681 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-utilities\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.495931 master-1 kubenswrapper[4771]: I1011 10:35:52.495850 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:52.495931 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:52.495931 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:52.495931 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:52.495931 master-1 kubenswrapper[4771]: I1011 10:35:52.495922 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:52.568990 master-1 kubenswrapper[4771]: I1011 10:35:52.568907 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities\") pod \"38131fcf-d407-4ba3-b7bf-471586bab887\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " Oct 11 10:35:52.569398 master-1 kubenswrapper[4771]: I1011 10:35:52.569253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content\") pod \"26005893-ecd8-4acb-8417-71a97ed97cbe\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " Oct 11 10:35:52.569398 master-1 kubenswrapper[4771]: I1011 10:35:52.569290 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmjh\" (UniqueName: \"kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh\") pod \"38131fcf-d407-4ba3-b7bf-471586bab887\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " Oct 11 10:35:52.569398 master-1 kubenswrapper[4771]: I1011 10:35:52.569313 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqnhk\" (UniqueName: \"kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk\") pod \"26005893-ecd8-4acb-8417-71a97ed97cbe\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " Oct 11 10:35:52.569398 master-1 kubenswrapper[4771]: I1011 10:35:52.569333 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content\") pod \"38131fcf-d407-4ba3-b7bf-471586bab887\" (UID: \"38131fcf-d407-4ba3-b7bf-471586bab887\") " Oct 11 10:35:52.569398 master-1 kubenswrapper[4771]: 
I1011 10:35:52.569376 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities\") pod \"26005893-ecd8-4acb-8417-71a97ed97cbe\" (UID: \"26005893-ecd8-4acb-8417-71a97ed97cbe\") " Oct 11 10:35:52.569562 master-1 kubenswrapper[4771]: I1011 10:35:52.569461 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-utilities\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.569562 master-1 kubenswrapper[4771]: I1011 10:35:52.569497 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjxq\" (UniqueName: \"kubernetes.io/projected/46805d49-0205-4427-9403-2fd481f36555-kube-api-access-4zjxq\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.569562 master-1 kubenswrapper[4771]: I1011 10:35:52.569551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-catalog-content\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.570065 master-1 kubenswrapper[4771]: I1011 10:35:52.570024 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-catalog-content\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.571077 master-1 kubenswrapper[4771]: I1011 
10:35:52.571023 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46805d49-0205-4427-9403-2fd481f36555-utilities\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.571139 master-1 kubenswrapper[4771]: I1011 10:35:52.571114 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities" (OuterVolumeSpecName: "utilities") pod "38131fcf-d407-4ba3-b7bf-471586bab887" (UID: "38131fcf-d407-4ba3-b7bf-471586bab887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:52.572289 master-1 kubenswrapper[4771]: I1011 10:35:52.572199 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities" (OuterVolumeSpecName: "utilities") pod "26005893-ecd8-4acb-8417-71a97ed97cbe" (UID: "26005893-ecd8-4acb-8417-71a97ed97cbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:52.573810 master-1 kubenswrapper[4771]: I1011 10:35:52.573760 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk" (OuterVolumeSpecName: "kube-api-access-hqnhk") pod "26005893-ecd8-4acb-8417-71a97ed97cbe" (UID: "26005893-ecd8-4acb-8417-71a97ed97cbe"). InnerVolumeSpecName "kube-api-access-hqnhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:52.574023 master-1 kubenswrapper[4771]: I1011 10:35:52.573980 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh" (OuterVolumeSpecName: "kube-api-access-gpmjh") pod "38131fcf-d407-4ba3-b7bf-471586bab887" (UID: "38131fcf-d407-4ba3-b7bf-471586bab887"). InnerVolumeSpecName "kube-api-access-gpmjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:52.592363 master-1 kubenswrapper[4771]: I1011 10:35:52.592285 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ncpc" Oct 11 10:35:52.594045 master-1 kubenswrapper[4771]: I1011 10:35:52.593976 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26005893-ecd8-4acb-8417-71a97ed97cbe" (UID: "26005893-ecd8-4acb-8417-71a97ed97cbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zjxq\" (UniqueName: \"kubernetes.io/projected/46805d49-0205-4427-9403-2fd481f36555-kube-api-access-4zjxq\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670622 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670641 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpmjh\" (UniqueName: \"kubernetes.io/projected/38131fcf-d407-4ba3-b7bf-471586bab887-kube-api-access-gpmjh\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670655 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqnhk\" (UniqueName: \"kubernetes.io/projected/26005893-ecd8-4acb-8417-71a97ed97cbe-kube-api-access-hqnhk\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670669 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26005893-ecd8-4acb-8417-71a97ed97cbe-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.670751 master-1 kubenswrapper[4771]: I1011 10:35:52.670680 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.693696 master-1 kubenswrapper[4771]: I1011 10:35:52.693599 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38131fcf-d407-4ba3-b7bf-471586bab887" (UID: "38131fcf-d407-4ba3-b7bf-471586bab887"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:52.694512 master-1 kubenswrapper[4771]: I1011 10:35:52.694471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zjxq\" (UniqueName: \"kubernetes.io/projected/46805d49-0205-4427-9403-2fd481f36555-kube-api-access-4zjxq\") pod \"redhat-operators-plxkp\" (UID: \"46805d49-0205-4427-9403-2fd481f36555\") " pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.771609 master-1 kubenswrapper[4771]: I1011 10:35:52.771525 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38131fcf-d407-4ba3-b7bf-471586bab887-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:35:52.783611 master-1 kubenswrapper[4771]: I1011 10:35:52.783504 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: I1011 10:35:52.969517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:52.969877 master-2 kubenswrapper[4776]: I1011 10:35:52.969594 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:53.033677 master-1 kubenswrapper[4771]: I1011 10:35:53.033578 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ncpc"] Oct 11 10:35:53.038231 master-2 kubenswrapper[4776]: I1011 10:35:53.038169 4776 generic.go:334] "Generic (PLEG): container finished" podID="1afe0068-3c97-4916-ba53-53f2841a95b0" containerID="9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8" exitCode=0 Oct 11 10:35:53.038715 master-2 kubenswrapper[4776]: I1011 10:35:53.038224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerDied","Data":"9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8"} Oct 11 10:35:53.040432 master-1 kubenswrapper[4771]: W1011 10:35:53.040317 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e987bb_eae2_4f14_809d_1b1141882c7d.slice/crio-b754598af606a210c7b5b92cfa803febec63a5f9e91702fa35cc1b301b1a3cb0 WatchSource:0}: Error finding container b754598af606a210c7b5b92cfa803febec63a5f9e91702fa35cc1b301b1a3cb0: Status 404 returned error can't find the container with id b754598af606a210c7b5b92cfa803febec63a5f9e91702fa35cc1b301b1a3cb0 Oct 11 10:35:53.241021 master-1 kubenswrapper[4771]: I1011 10:35:53.240842 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plxkp"] Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: I1011 10:35:53.244003 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok 
Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:35:53.244048 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:35:53.246421 master-1 kubenswrapper[4771]: I1011 10:35:53.244054 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:53.253185 master-1 kubenswrapper[4771]: W1011 10:35:53.253105 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46805d49_0205_4427_9403_2fd481f36555.slice/crio-5c9680c9e6353071262fafb6e5336eeb9cd257540fc8b17b829f5b62da03bd5b WatchSource:0}: Error finding container 5c9680c9e6353071262fafb6e5336eeb9cd257540fc8b17b829f5b62da03bd5b: Status 404 returned error can't find the container with id 5c9680c9e6353071262fafb6e5336eeb9cd257540fc8b17b829f5b62da03bd5b Oct 11 10:35:53.366203 master-1 kubenswrapper[4771]: I1011 
10:35:53.366166 4771 generic.go:334] "Generic (PLEG): container finished" podID="91e987bb-eae2-4f14-809d-1b1141882c7d" containerID="e95237df12bc6ebde5d45bedbd51537f6d7df95a3d30b122ee215229afe6e48c" exitCode=0 Oct 11 10:35:53.366975 master-1 kubenswrapper[4771]: I1011 10:35:53.366236 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ncpc" event={"ID":"91e987bb-eae2-4f14-809d-1b1141882c7d","Type":"ContainerDied","Data":"e95237df12bc6ebde5d45bedbd51537f6d7df95a3d30b122ee215229afe6e48c"} Oct 11 10:35:53.368170 master-1 kubenswrapper[4771]: I1011 10:35:53.367476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ncpc" event={"ID":"91e987bb-eae2-4f14-809d-1b1141882c7d","Type":"ContainerStarted","Data":"b754598af606a210c7b5b92cfa803febec63a5f9e91702fa35cc1b301b1a3cb0"} Oct 11 10:35:53.369421 master-1 kubenswrapper[4771]: I1011 10:35:53.369333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plxkp" event={"ID":"46805d49-0205-4427-9403-2fd481f36555","Type":"ContainerStarted","Data":"5c9680c9e6353071262fafb6e5336eeb9cd257540fc8b17b829f5b62da03bd5b"} Oct 11 10:35:53.373184 master-1 kubenswrapper[4771]: I1011 10:35:53.373127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkrc6" event={"ID":"26005893-ecd8-4acb-8417-71a97ed97cbe","Type":"ContainerDied","Data":"42678277150d23882615afd583505d1ee80fbc936870ab20c76affe3a676bd4c"} Oct 11 10:35:53.373184 master-1 kubenswrapper[4771]: I1011 10:35:53.373182 4771 scope.go:117] "RemoveContainer" containerID="9f359af209588aa409904f71581bb63e20e019ac6f684b2bb1874bdc33d16458" Oct 11 10:35:53.373403 master-1 kubenswrapper[4771]: I1011 10:35:53.373210 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkrc6" Oct 11 10:35:53.377283 master-1 kubenswrapper[4771]: I1011 10:35:53.377227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8tm6" event={"ID":"38131fcf-d407-4ba3-b7bf-471586bab887","Type":"ContainerDied","Data":"fd83c4d331d341ca058f07884e0c753dec2509d54999da528657ce66ee47354c"} Oct 11 10:35:53.377438 master-1 kubenswrapper[4771]: I1011 10:35:53.377247 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8tm6" Oct 11 10:35:53.379976 master-1 kubenswrapper[4771]: I1011 10:35:53.379941 4771 generic.go:334] "Generic (PLEG): container finished" podID="aebd88b2-f116-4ade-be8e-c293ccac533f" containerID="909e21621c80ae096c10bbcb92430889443aedcf5f0b3f51e4c28a3ef5eaaddc" exitCode=0 Oct 11 10:35:53.380088 master-1 kubenswrapper[4771]: I1011 10:35:53.380021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6wtm" event={"ID":"aebd88b2-f116-4ade-be8e-c293ccac533f","Type":"ContainerDied","Data":"909e21621c80ae096c10bbcb92430889443aedcf5f0b3f51e4c28a3ef5eaaddc"} Oct 11 10:35:53.409132 master-1 kubenswrapper[4771]: I1011 10:35:53.409084 4771 scope.go:117] "RemoveContainer" containerID="4bde2f0bff6002ac88c69a20de25c24e27ed2402f74ddf6b6f429bda18e25de4" Oct 11 10:35:53.458404 master-1 kubenswrapper[4771]: I1011 10:35:53.458179 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:35:53.461587 master-1 kubenswrapper[4771]: I1011 10:35:53.461547 4771 scope.go:117] "RemoveContainer" containerID="142add763393fde94b8ed6a34c3ef572a32e34909b409ad71cf3570c801fa30d" Oct 11 10:35:53.463882 master-1 kubenswrapper[4771]: I1011 10:35:53.463834 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkrc6"] Oct 11 10:35:53.477578 master-1 
kubenswrapper[4771]: I1011 10:35:53.477511 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:35:53.482751 master-1 kubenswrapper[4771]: I1011 10:35:53.482696 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8tm6"] Oct 11 10:35:53.497949 master-1 kubenswrapper[4771]: I1011 10:35:53.497832 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:53.497949 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:53.497949 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:53.497949 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:53.498130 master-1 kubenswrapper[4771]: I1011 10:35:53.497935 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:53.517068 master-1 kubenswrapper[4771]: I1011 10:35:53.517029 4771 scope.go:117] "RemoveContainer" containerID="c5ddefdc367347ae7e3aa6121d147be1b4ebca7be06e0180a8a6603ea9ef59cd" Oct 11 10:35:53.536863 master-1 kubenswrapper[4771]: I1011 10:35:53.536793 4771 scope.go:117] "RemoveContainer" containerID="10eecae7180584a993b9109e41de9729732ec8af959166bad8fe7ba33a08f83b" Oct 11 10:35:53.555125 master-1 kubenswrapper[4771]: I1011 10:35:53.555019 4771 scope.go:117] "RemoveContainer" containerID="46478dfa370c61d5e583543ca4a34b66afd1e95ecf434515eb16283cfe8a52de" Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: I1011 10:35:53.970353 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: I1011 10:35:53.970416 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:54.045729 master-2 kubenswrapper[4776]: I1011 10:35:54.045642 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"6e709bd2044b6a322588e9fb5ed29a0cb190a96dddf3e5653cdd857e40bc453e"} Oct 11 10:35:54.069770 master-2 kubenswrapper[4776]: I1011 10:35:54.069697 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtrbk" podStartSLOduration=2.651084953 podStartE2EDuration="5.069661545s" podCreationTimestamp="2025-10-11 10:35:49 +0000 UTC" firstStartedPulling="2025-10-11 10:35:51.01850293 +0000 UTC m=+585.802929639" lastFinishedPulling="2025-10-11 10:35:53.437079522 +0000 UTC m=+588.221506231" observedRunningTime="2025-10-11 10:35:54.06756879 +0000 UTC m=+588.851995499" watchObservedRunningTime="2025-10-11 10:35:54.069661545 +0000 UTC m=+588.854088244" Oct 11 10:35:54.396799 master-1 kubenswrapper[4771]: I1011 10:35:54.395882 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6wtm" event={"ID":"aebd88b2-f116-4ade-be8e-c293ccac533f","Type":"ContainerStarted","Data":"99d4f42a37c07831bbeb02c6d60d0ab3cac5eecfde12522617d4ee8db3495770"} Oct 
11 10:35:54.399884 master-1 kubenswrapper[4771]: I1011 10:35:54.399032 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ncpc" event={"ID":"91e987bb-eae2-4f14-809d-1b1141882c7d","Type":"ContainerStarted","Data":"31c4e7524b25a68b245a3075aa05d28b3a09ace752abb0471cf1fe03cea33242"} Oct 11 10:35:54.402177 master-1 kubenswrapper[4771]: I1011 10:35:54.402050 4771 generic.go:334] "Generic (PLEG): container finished" podID="46805d49-0205-4427-9403-2fd481f36555" containerID="f3eab5f2de54810f88dae9b35d2a1d8e381f1f4f5815ea9b946ed648d69d5f2a" exitCode=0 Oct 11 10:35:54.402331 master-1 kubenswrapper[4771]: I1011 10:35:54.402165 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plxkp" event={"ID":"46805d49-0205-4427-9403-2fd481f36555","Type":"ContainerDied","Data":"f3eab5f2de54810f88dae9b35d2a1d8e381f1f4f5815ea9b946ed648d69d5f2a"} Oct 11 10:35:54.449249 master-1 kubenswrapper[4771]: I1011 10:35:54.449002 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26005893-ecd8-4acb-8417-71a97ed97cbe" path="/var/lib/kubelet/pods/26005893-ecd8-4acb-8417-71a97ed97cbe/volumes" Oct 11 10:35:54.450629 master-1 kubenswrapper[4771]: I1011 10:35:54.450559 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38131fcf-d407-4ba3-b7bf-471586bab887" path="/var/lib/kubelet/pods/38131fcf-d407-4ba3-b7bf-471586bab887/volumes" Oct 11 10:35:54.456533 master-1 kubenswrapper[4771]: I1011 10:35:54.456432 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t6wtm" podStartSLOduration=3.047163613 podStartE2EDuration="5.456400015s" podCreationTimestamp="2025-10-11 10:35:49 +0000 UTC" firstStartedPulling="2025-10-11 10:35:51.33678346 +0000 UTC m=+583.311009901" lastFinishedPulling="2025-10-11 10:35:53.746019822 +0000 UTC m=+585.720246303" observedRunningTime="2025-10-11 10:35:54.425838833 +0000 UTC 
m=+586.400065334" watchObservedRunningTime="2025-10-11 10:35:54.456400015 +0000 UTC m=+586.430626496" Oct 11 10:35:54.496605 master-1 kubenswrapper[4771]: I1011 10:35:54.496517 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:54.496605 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:54.496605 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:54.496605 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:54.497156 master-1 kubenswrapper[4771]: I1011 10:35:54.496611 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: I1011 10:35:54.969470 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: I1011 10:35:54.969546 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:55.417006 master-1 kubenswrapper[4771]: I1011 10:35:55.416907 4771 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-plxkp" event={"ID":"46805d49-0205-4427-9403-2fd481f36555","Type":"ContainerStarted","Data":"31e117d1e88c620e780185b41834b342b6ae4b1958b78cceb77d28cf75bab545"} Oct 11 10:35:55.421660 master-1 kubenswrapper[4771]: I1011 10:35:55.421581 4771 generic.go:334] "Generic (PLEG): container finished" podID="91e987bb-eae2-4f14-809d-1b1141882c7d" containerID="31c4e7524b25a68b245a3075aa05d28b3a09ace752abb0471cf1fe03cea33242" exitCode=0 Oct 11 10:35:55.421823 master-1 kubenswrapper[4771]: I1011 10:35:55.421660 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ncpc" event={"ID":"91e987bb-eae2-4f14-809d-1b1141882c7d","Type":"ContainerDied","Data":"31c4e7524b25a68b245a3075aa05d28b3a09ace752abb0471cf1fe03cea33242"} Oct 11 10:35:55.497102 master-1 kubenswrapper[4771]: I1011 10:35:55.496910 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:55.497102 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:55.497102 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:55.497102 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:55.497102 master-1 kubenswrapper[4771]: I1011 10:35:55.497008 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: I1011 10:35:55.969686 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: I1011 10:35:55.969743 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:56.431146 master-1 kubenswrapper[4771]: I1011 10:35:56.431060 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ncpc" event={"ID":"91e987bb-eae2-4f14-809d-1b1141882c7d","Type":"ContainerStarted","Data":"6080741b4d7eae18a44d1a08030b7a435ed3cfe3526dea8c026a6b1cfdfdc86d"} Oct 11 10:35:56.433664 master-1 kubenswrapper[4771]: I1011 10:35:56.433589 4771 generic.go:334] "Generic (PLEG): container finished" podID="46805d49-0205-4427-9403-2fd481f36555" containerID="31e117d1e88c620e780185b41834b342b6ae4b1958b78cceb77d28cf75bab545" exitCode=0 Oct 11 10:35:56.433664 master-1 kubenswrapper[4771]: I1011 10:35:56.433660 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plxkp" event={"ID":"46805d49-0205-4427-9403-2fd481f36555","Type":"ContainerDied","Data":"31e117d1e88c620e780185b41834b342b6ae4b1958b78cceb77d28cf75bab545"} Oct 11 10:35:56.496998 master-1 kubenswrapper[4771]: I1011 10:35:56.496898 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:56.496998 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:56.496998 master-1 kubenswrapper[4771]: 
[+]process-running ok Oct 11 10:35:56.496998 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:56.497481 master-1 kubenswrapper[4771]: I1011 10:35:56.497004 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: I1011 10:35:56.969551 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: I1011 10:35:56.969619 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:57.443278 master-1 kubenswrapper[4771]: I1011 10:35:57.443178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plxkp" event={"ID":"46805d49-0205-4427-9403-2fd481f36555","Type":"ContainerStarted","Data":"8b7a5f47530c9f5fd8e4c6cf8d74b538da46568b8286fc7969d4b9f71e24aa8c"} Oct 11 10:35:57.496646 master-1 kubenswrapper[4771]: I1011 10:35:57.496552 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:57.496646 master-1 
kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:35:57.496646 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:35:57.496646 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:35:57.496646 master-1 kubenswrapper[4771]: I1011 10:35:57.496639 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: I1011 10:35:57.968444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: I1011 10:35:57.968493 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: I1011 10:35:58.244586 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:35:58.244652 
master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:35:58.244652 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:35:58.245792 master-1 kubenswrapper[4771]: I1011 10:35:58.244671 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:58.497611 master-1 kubenswrapper[4771]: I1011 10:35:58.497467 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:58.497611 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:58.497611 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:58.497611 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:58.497611 master-1 kubenswrapper[4771]: I1011 10:35:58.497558 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: I1011 10:35:58.968485 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: I1011 10:35:58.968553 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:59.498639 master-1 kubenswrapper[4771]: I1011 10:35:59.498543 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:59.498639 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:35:59.498639 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:35:59.498639 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:35:59.499636 master-1 kubenswrapper[4771]: I1011 10:35:59.498638 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:35:59.901486 master-2 kubenswrapper[4776]: I1011 10:35:59.901418 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:59.901486 master-2 kubenswrapper[4776]: I1011 10:35:59.901494 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:59.961786 master-2 kubenswrapper[4776]: I1011 10:35:59.961735 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: I1011 10:35:59.970925 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]:
[+]process-running ok
Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:35:59.971629 master-2 kubenswrapper[4776]: I1011 10:35:59.971595 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:00.013586 master-1 kubenswrapper[4771]: I1011 10:36:00.013455 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:36:00.013586 master-1 kubenswrapper[4771]: I1011 10:36:00.013561 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:36:00.062295 master-1 kubenswrapper[4771]: I1011 10:36:00.062221 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:36:00.119032 master-2 kubenswrapper[4776]: I1011 10:36:00.118958 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xtrbk"
Oct 11 10:36:00.497498 master-1 kubenswrapper[4771]: I1011 10:36:00.497409 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:00.497498 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:00.497498 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:00.497498 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:00.497792 master-1 kubenswrapper[4771]: I1011 10:36:00.497506 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:00.523163 master-1 kubenswrapper[4771]: I1011 10:36:00.523097 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t6wtm"
Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: I1011 10:36:00.968483 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: I1011 10:36:00.968542 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:01.496866 master-1 kubenswrapper[4771]: I1011 10:36:01.496794 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:01.496866 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:01.496866 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:01.496866 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:01.497267 master-1 kubenswrapper[4771]: I1011 10:36:01.496868 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: I1011 10:36:01.969709 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:01.970862 master-2 kubenswrapper[4776]: I1011 10:36:01.970822 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:02.497617 master-1 kubenswrapper[4771]: I1011 10:36:02.497498 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:02.497617 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:02.497617 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:02.497617 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:02.497617 master-1 kubenswrapper[4771]: I1011 10:36:02.497606 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:02.593609 master-1 kubenswrapper[4771]: I1011 10:36:02.593525 4771
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ncpc"
Oct 11 10:36:02.593609 master-1 kubenswrapper[4771]: I1011 10:36:02.593603 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ncpc"
Oct 11 10:36:02.669769 master-1 kubenswrapper[4771]: I1011 10:36:02.669710 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ncpc"
Oct 11 10:36:02.729131 master-2 kubenswrapper[4776]: E1011 10:36:02.729044 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:36:02.729131 master-2 kubenswrapper[4776]: E1011 10:36:02.729119 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:36:34.729103018 +0000 UTC m=+629.513529727 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:36:02.784688 master-1 kubenswrapper[4771]: I1011 10:36:02.784490 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-plxkp"
Oct 11 10:36:02.784688 master-1 kubenswrapper[4771]: I1011 10:36:02.784577 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-plxkp"
Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: I1011 10:36:02.970290 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:02.971064 master-2 kubenswrapper[4776]: I1011 10:36:02.970366 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: I1011 10:36:03.243252 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]:
[+]api-openshift-apiserver-available ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:36:03.243380 master-1 kubenswrapper[4771]: I1011 10:36:03.243373 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:03.497482 master-1 kubenswrapper[4771]: I1011 10:36:03.497224 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:03.497482 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:03.497482 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:03.497482 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:03.497482 master-1 kubenswrapper[4771]: I1011 10:36:03.497321 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:03.551611 master-1 kubenswrapper[4771]: I1011 10:36:03.551519 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ncpc"
Oct 11 10:36:03.834316 master-1 kubenswrapper[4771]: I1011 10:36:03.834235 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-plxkp" podUID="46805d49-0205-4427-9403-2fd481f36555" containerName="registry-server" probeResult="failure" output=<
Oct 11 10:36:03.834316 master-1 kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Oct 11 10:36:03.834316 master-1 kubenswrapper[4771]: >
Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: I1011
10:36:03.970191 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:03.971339 master-2 kubenswrapper[4776]: I1011 10:36:03.970295 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:04.496893 master-1 kubenswrapper[4771]: I1011 10:36:04.496799 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:04.496893 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:04.496893 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:04.496893 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:04.497455 master-1 kubenswrapper[4771]: I1011 10:36:04.496904 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: I1011 10:36:04.970278 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: I1011 10:36:04.970362 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:05.496859 master-1 kubenswrapper[4771]: I1011 10:36:05.496779 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:05.496859 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:05.496859 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:05.496859 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:05.497413 master-1 kubenswrapper[4771]: I1011 10:36:05.496864 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: I1011 10:36:05.980517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:05.982476 master-2 kubenswrapper[4776]: I1011 10:36:05.980631 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:06.497148 master-1 kubenswrapper[4771]: I1011 10:36:06.497085 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:06.497148 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:06.497148 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:06.497148 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:06.498097 master-1 kubenswrapper[4771]: I1011 10:36:06.498059 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: I1011 10:36:06.970416 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: I1011 10:36:06.970517 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:07.497207 master-1 kubenswrapper[4771]: I1011 10:36:07.497150 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:07.497207 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:07.497207 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:07.497207 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:07.498171 master-1 kubenswrapper[4771]: I1011 10:36:07.497524 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: I1011 10:36:07.969584 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:07.970331 master-2 kubenswrapper[4776]: I1011 10:36:07.969743 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:08.245592
master-1 kubenswrapper[4771]: I1011 10:36:08.245503 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:36:08.245592 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:36:08.247992 master-1 kubenswrapper[4771]: I1011 10:36:08.245601 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:08.498466 master-1 kubenswrapper[4771]: I1011 10:36:08.498218 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:08.498466 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:08.498466 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:08.498466 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:08.498466 master-1 kubenswrapper[4771]: I1011 10:36:08.498318 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: I1011 10:36:08.969661 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: I1011 10:36:08.969803 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:09.497987 master-1 kubenswrapper[4771]: I1011 10:36:09.497789 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:09.497987 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:09.497987 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:09.497987 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:09.497987 master-1 kubenswrapper[4771]: I1011 10:36:09.497910 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: I1011 10:36:09.970743 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:09.971328 master-2 kubenswrapper[4776]: I1011 10:36:09.970852 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: I1011 10:36:10.247474 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:36:10.247676 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:36:10.248470 master-1 kubenswrapper[4771]: I1011 10:36:10.247684 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:10.497259 master-1 kubenswrapper[4771]: I1011 10:36:10.497177 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:10.497259
master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:10.497259 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:10.497259 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:10.497800 master-1 kubenswrapper[4771]: I1011 10:36:10.497271 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: I1011 10:36:10.970015 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: I1011 10:36:10.970095 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:11.270878 master-2 kubenswrapper[4776]: E1011 10:36:11.270505 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95\\\"],\\\"sizeBytes\\\":1565215279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e\\\"],\\\"sizeBytes\\\":1230574268},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:19171ea92892e53aa0604cd2c0b649c40966da57d9eac1a65807285eb30e4ae1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cb9acd88d372170c9d9491de391f25c2d29c04ae39825a0afc50a06fcc9a7f4c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1195809171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876\\\"],\\\"sizeBytes\\\":981963385},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d\\\"],\\\"sizeBytes\\\":945482213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5\\\"],\\\"sizeBytes\\\":911296197},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01\\\"],\\\"sizeBytes\\\":8733993
72},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb\\\"],\\\"sizeBytes\\\":869140966},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f\\\"],\\\"sizeBytes\\\":855643597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a\\\"],\\\"sizeBytes\\\":855233892},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7af38d71db3427e74eee755c1dc72589ae723a71d678c920c32868f459028ca\\\"],\\\"sizeBytes\\\":774809152},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207\\\"],\\\"sizeBytes\\\":684971018},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326\\\"],\\\"sizeBytes\\\":681716323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c\\\"],\\\"sizeBytes\\\":680965375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9\\\"],\\\"sizeBytes\\\":614682093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7\\\"],\\\"sizeBytes\\\":582409947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a\\\"],\\\"sizeBytes\\\":575181628},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6\\\"],\\\"sizeBytes\\\":551247630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017
e08466030e88727f76514fbfa4e1918541697374881b3\\\"],\\\"sizeBytes\\\":541801559},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8\\\"],\\\"sizeBytes\\\":531186824},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e\\\"],\\\"sizeBytes\\\":530836538},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174\\\"],\\\"sizeBytes\\\":511412209},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be\\\"],\\\"sizeBytes\\\":511020601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e\\\"],\\\"sizeBytes\\\":508004341},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"],\\\"sizeBytes\\\":506615759},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d\\\"],\\\"sizeBytes\\\":506261367},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47\\\"],\\\"sizeBytes\\\":505315113},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a\\\"],\\\"sizeBytes\\\":504222816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7\\\"],\\\"sizeBytes\\\":504201850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55\\\"],\\\"sizeBytes\\\":501914388},{\\\"names\\\":[\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745\\\"],\\\"sizeBytes\\\":501585296},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"],\\\"sizeBytes\\\":501010081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0\\\"],\\\"sizeBytes\\\":499517132},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642\\\"],\\\"sizeBytes\\\":499422833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404\\\"],\\\"sizeBytes\\\":498371692},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4\\\"],\\\"sizeBytes\\\":498279559},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"],\\\"sizeBytes\\\":497698695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34\\\"],\\\"sizeBytes\\\":497656412},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8\\\"],\\\"sizeBytes\\\":489230204},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a\\\"],\\\"sizeBytes\\\":489030103},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721\\\"],\\\"sizeBytes\\\":488102305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830
ed0d3e5e363859e3b08\\\"],\\\"sizeBytes\\\":480132757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4\\\"],\\\"sizeBytes\\\":477490934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7\\\"],\\\"sizeBytes\\\":477215701},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f\\\"],\\\"sizeBytes\\\":464468268},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d\\\"],\\\"sizeBytes\\\":463860143},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e\\\"],\\\"sizeBytes\\\":463718256},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f\\\"],\\\"sizeBytes\\\":461301475},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9\\\"],\\\"sizeBytes\\\":460276288},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\\\"],\\\"sizeBytes\\\":458126368}]}}\" for node \"master-2\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-2)" Oct 11 10:36:11.497791 master-1 kubenswrapper[4771]: I1011 10:36:11.497674 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:11.497791 master-1 kubenswrapper[4771]: [-]has-synced failed: 
reason withheld Oct 11 10:36:11.497791 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:11.497791 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:11.498557 master-1 kubenswrapper[4771]: I1011 10:36:11.497794 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: I1011 10:36:11.970259 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: I1011 10:36:11.970362 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:12.245523 master-2 kubenswrapper[4776]: E1011 10:36:12.245381 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:12.497812 master-1 kubenswrapper[4771]: I1011 10:36:12.497681 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:12.497812 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:12.497812 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:12.497812 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:12.497812 master-1 kubenswrapper[4771]: I1011 10:36:12.497804 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:12.540862 master-1 kubenswrapper[4771]: I1011 10:36:12.540509 4771 scope.go:117] "RemoveContainer" containerID="4f12c3536caf37d890a386fecb2c94e5fc57775602e9a539771326b213c3ae7e" Oct 11 10:36:12.561829 master-1 kubenswrapper[4771]: I1011 10:36:12.561760 4771 scope.go:117] "RemoveContainer" containerID="0400db595d18039edaf6ab7ccb3c1b1a3510ae9588fc33a6a91a15e993a6d1a4" Oct 11 10:36:12.586057 master-1 kubenswrapper[4771]: I1011 10:36:12.585989 4771 scope.go:117] "RemoveContainer" containerID="27a52449e5ec1bd52177b8ae4e5229c8bc4e5a7be149b07a0e7cb307be3932da" Oct 11 10:36:12.612621 master-1 kubenswrapper[4771]: I1011 10:36:12.612550 4771 scope.go:117] "RemoveContainer" containerID="2a73de07f276bd8a0b93475494fdae31f01c7c950b265a424f35d3d72462410c" Oct 11 10:36:12.846130 master-1 kubenswrapper[4771]: I1011 10:36:12.846054 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:36:12.910259 master-1 kubenswrapper[4771]: I1011 10:36:12.910158 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-plxkp" Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: I1011 10:36:12.970922 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:12.972059 master-2 kubenswrapper[4776]: I1011 10:36:12.971063 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: I1011 10:36:13.242517 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 
10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:36:13.242644 master-1 kubenswrapper[4771]: I1011 10:36:13.242599 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:13.497135 master-1 kubenswrapper[4771]: I1011 10:36:13.496981 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:13.497135 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:13.497135 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 
10:36:13.497135 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:13.497135 master-1 kubenswrapper[4771]: I1011 10:36:13.497092 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: I1011 10:36:13.969399 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: I1011 10:36:13.969481 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:14.498150 master-1 kubenswrapper[4771]: I1011 10:36:14.498040 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:14.498150 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:14.498150 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:14.498150 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:14.498150 master-1 kubenswrapper[4771]: I1011 10:36:14.498136 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: I1011 10:36:14.970857 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:14.971975 master-2 kubenswrapper[4776]: I1011 10:36:14.971854 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:15.497013 master-1 kubenswrapper[4771]: I1011 10:36:15.496910 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:15.497013 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:15.497013 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:15.497013 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:15.497539 master-1 kubenswrapper[4771]: I1011 10:36:15.497023 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:15.970289 
master-2 kubenswrapper[4776]: I1011 10:36:15.970189 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: I1011 10:36:15.970266 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:16.498148 master-1 kubenswrapper[4771]: I1011 10:36:16.498036 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:16.498148 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:16.498148 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:16.498148 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:16.499352 master-1 kubenswrapper[4771]: I1011 10:36:16.498170 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: I1011 10:36:16.969963 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:16.971092 master-2 kubenswrapper[4776]: I1011 10:36:16.970086 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:17.497141 master-1 kubenswrapper[4771]: I1011 10:36:17.497051 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:17.497141 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:17.497141 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:17.497141 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:17.497663 master-1 kubenswrapper[4771]: I1011 10:36:17.497164 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: I1011 10:36:17.970197 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:17.970270 master-2 
kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: I1011 10:36:17.970273 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: I1011 10:36:18.241751 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:36:18.241830 master-1 
kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:36:18.241830 master-1 kubenswrapper[4771]: I1011 10:36:18.241814 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:18.498517 master-1 kubenswrapper[4771]: I1011 10:36:18.498265 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:18.498517 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:18.498517 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:18.498517 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:18.498517 master-1 kubenswrapper[4771]: I1011 10:36:18.498417 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: I1011 10:36:18.970437 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:18.971511 master-2 kubenswrapper[4776]: I1011 10:36:18.970588 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:19.497880 master-1 kubenswrapper[4771]: I1011 10:36:19.497740 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:19.497880 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:19.497880 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:19.497880 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:19.497880 master-1 kubenswrapper[4771]: I1011 10:36:19.497868 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:19.970473 
master-2 kubenswrapper[4776]: I1011 10:36:19.970401 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:19.971181 master-2 kubenswrapper[4776]: I1011 10:36:19.970493 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:20.497290 master-1 kubenswrapper[4771]: I1011 10:36:20.497183 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:20.497290 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:20.497290 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:20.497290 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:20.497758 master-1 kubenswrapper[4771]: I1011 10:36:20.497296 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: I1011 10:36:20.969403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:20.969970 master-2 kubenswrapper[4776]: I1011 10:36:20.969509 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:21.271890 master-2 kubenswrapper[4776]: E1011 10:36:21.271659 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:21.497651 master-1 kubenswrapper[4771]: I1011 10:36:21.497528 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:21.497651 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:21.497651 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:21.497651 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:21.498646 master-1 kubenswrapper[4771]: I1011 10:36:21.497642 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: I1011 10:36:21.969577 4776 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:21.970197 master-2 kubenswrapper[4776]: I1011 10:36:21.969728 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:22.245860 master-2 kubenswrapper[4776]: E1011 10:36:22.245618 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": context deadline exceeded" Oct 11 10:36:22.496875 master-1 kubenswrapper[4771]: I1011 10:36:22.496808 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:22.496875 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:22.496875 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:22.496875 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:22.497489 master-1 kubenswrapper[4771]: I1011 10:36:22.496903 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:22.545139 master-1 
kubenswrapper[4771]: I1011 10:36:22.545051 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:36:22.546140 master-1 kubenswrapper[4771]: I1011 10:36:22.545530 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" containerID="cri-o://63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f" gracePeriod=30 Oct 11 10:36:22.546140 master-1 kubenswrapper[4771]: I1011 10:36:22.545620 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" containerID="cri-o://e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316" gracePeriod=30 Oct 11 10:36:22.546140 master-1 kubenswrapper[4771]: I1011 10:36:22.545623 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" containerID="cri-o://e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5" gracePeriod=30 Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: I1011 10:36:22.546626 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: E1011 10:36:22.547250 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: I1011 10:36:22.547279 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 11 
10:36:22.547377 master-1 kubenswrapper[4771]: E1011 10:36:22.547306 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: I1011 10:36:22.547323 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: E1011 10:36:22.547340 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 11 10:36:22.547377 master-1 kubenswrapper[4771]: I1011 10:36:22.547377 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 11 10:36:22.547804 master-1 kubenswrapper[4771]: E1011 10:36:22.547394 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 11 10:36:22.547804 master-1 kubenswrapper[4771]: I1011 10:36:22.547407 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 11 10:36:22.547804 master-1 kubenswrapper[4771]: I1011 10:36:22.547640 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 11 10:36:22.547804 master-1 kubenswrapper[4771]: I1011 10:36:22.547710 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 11 10:36:22.547804 master-1 kubenswrapper[4771]: I1011 10:36:22.547733 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 11 10:36:22.595045 master-1 
kubenswrapper[4771]: I1011 10:36:22.594962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.595304 master-1 kubenswrapper[4771]: I1011 10:36:22.595246 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.695889 master-1 kubenswrapper[4771]: I1011 10:36:22.695795 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.695889 master-1 kubenswrapper[4771]: I1011 10:36:22.695883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.696195 master-1 kubenswrapper[4771]: I1011 10:36:22.695969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.696195 master-1 kubenswrapper[4771]: I1011 10:36:22.695977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: I1011 10:36:22.970327 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: I1011 10:36:22.970411 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: I1011 10:36:23.244955 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 
10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:36:23.245042 
master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:36:23.245042 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:36:23.247629 master-1 kubenswrapper[4771]: I1011 10:36:23.245042 4771 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:23.497259 master-1 kubenswrapper[4771]: I1011 10:36:23.497068 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:23.497259 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:23.497259 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:23.497259 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:23.497259 master-1 kubenswrapper[4771]: I1011 10:36:23.497161 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:23.609101 master-1 kubenswrapper[4771]: I1011 10:36:23.609029 4771 generic.go:334] "Generic (PLEG): container finished" podID="67e39e90-67d5-40f4-ad76-1b32adf359ed" containerID="4ac39222fba40ff7cbe78740b5c6cfd319b2ad66eef840556f4373378718527a" exitCode=0 Oct 11 10:36:23.609986 master-1 kubenswrapper[4771]: I1011 10:36:23.609177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"67e39e90-67d5-40f4-ad76-1b32adf359ed","Type":"ContainerDied","Data":"4ac39222fba40ff7cbe78740b5c6cfd319b2ad66eef840556f4373378718527a"} Oct 11 10:36:23.612215 master-1 kubenswrapper[4771]: I1011 10:36:23.612159 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 11 10:36:23.613340 master-1 
kubenswrapper[4771]: I1011 10:36:23.613295 4771 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316" exitCode=0 Oct 11 10:36:23.613340 master-1 kubenswrapper[4771]: I1011 10:36:23.613331 4771 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5" exitCode=2 Oct 11 10:36:23.615746 master-1 kubenswrapper[4771]: I1011 10:36:23.615682 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="a61df698d34d049669621b2249bfe758" podUID="1ffd3b5548bcf48fce7bfb9a8c802165" Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: I1011 10:36:23.969815 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: I1011 10:36:23.969940 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:24.496935 master-1 kubenswrapper[4771]: I1011 10:36:24.496835 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Oct 11 10:36:24.496935 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:24.496935 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:24.496935 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:24.497475 master-1 kubenswrapper[4771]: I1011 10:36:24.496950 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: I1011 10:36:24.968875 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:24.969317 master-2 kubenswrapper[4776]: I1011 10:36:24.968966 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:24.975941 master-1 kubenswrapper[4771]: I1011 10:36:24.975875 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:36:25.042349 master-1 kubenswrapper[4771]: I1011 10:36:25.042262 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir\") pod \"67e39e90-67d5-40f4-ad76-1b32adf359ed\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " Oct 11 10:36:25.042349 master-1 kubenswrapper[4771]: I1011 10:36:25.042333 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock\") pod \"67e39e90-67d5-40f4-ad76-1b32adf359ed\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " Oct 11 10:36:25.042789 master-1 kubenswrapper[4771]: I1011 10:36:25.042399 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67e39e90-67d5-40f4-ad76-1b32adf359ed" (UID: "67e39e90-67d5-40f4-ad76-1b32adf359ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:25.042789 master-1 kubenswrapper[4771]: I1011 10:36:25.042431 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access\") pod \"67e39e90-67d5-40f4-ad76-1b32adf359ed\" (UID: \"67e39e90-67d5-40f4-ad76-1b32adf359ed\") " Oct 11 10:36:25.042789 master-1 kubenswrapper[4771]: I1011 10:36:25.042465 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock" (OuterVolumeSpecName: "var-lock") pod "67e39e90-67d5-40f4-ad76-1b32adf359ed" (UID: "67e39e90-67d5-40f4-ad76-1b32adf359ed"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:25.042789 master-1 kubenswrapper[4771]: I1011 10:36:25.042710 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:25.042789 master-1 kubenswrapper[4771]: I1011 10:36:25.042733 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/67e39e90-67d5-40f4-ad76-1b32adf359ed-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:25.047333 master-1 kubenswrapper[4771]: I1011 10:36:25.047250 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67e39e90-67d5-40f4-ad76-1b32adf359ed" (UID: "67e39e90-67d5-40f4-ad76-1b32adf359ed"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:36:25.145182 master-1 kubenswrapper[4771]: I1011 10:36:25.145041 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67e39e90-67d5-40f4-ad76-1b32adf359ed-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:25.497814 master-1 kubenswrapper[4771]: I1011 10:36:25.497751 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:25.497814 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:25.497814 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:25.497814 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:25.498401 master-1 kubenswrapper[4771]: I1011 10:36:25.498325 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:25.627480 master-1 kubenswrapper[4771]: I1011 10:36:25.627401 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"67e39e90-67d5-40f4-ad76-1b32adf359ed","Type":"ContainerDied","Data":"c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca"} Oct 11 10:36:25.627480 master-1 kubenswrapper[4771]: I1011 10:36:25.627481 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6be9acab2b46a0600e4b835c238bce39535b79bcb5079c1d439519d3a10a7ca" Oct 11 10:36:25.627896 master-1 kubenswrapper[4771]: I1011 10:36:25.627862 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 11 10:36:25.906263 master-1 kubenswrapper[4771]: E1011 10:36:25.906034 4771 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out, possibly due to previous leader failure" event="&Event{ObjectMeta:{redhat-marketplace-9ncpc.186d696f3f80d6d9 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-9ncpc,UID:91e987bb-eae2-4f14-809d-1b1141882c7d,APIVersion:v1,ResourceVersion:15448,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5\" in 475ms (475ms including waiting). Image size: 911296197 bytes.,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-11 10:35:55.899426521 +0000 UTC m=+587.873652962,LastTimestamp:2025-10-11 10:35:55.899426521 +0000 UTC m=+587.873652962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: I1011 10:36:25.968925 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: I1011 10:36:25.969020 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" 
podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:26.497568 master-1 kubenswrapper[4771]: I1011 10:36:26.497443 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:26.497568 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:26.497568 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:26.497568 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:26.497568 master-1 kubenswrapper[4771]: I1011 10:36:26.497534 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:26.607959 master-1 kubenswrapper[4771]: I1011 10:36:26.607834 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:36:26.608244 master-1 kubenswrapper[4771]: I1011 10:36:26.607945 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: I1011 10:36:26.969387 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: I1011 10:36:26.969443 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:27.496702 master-1 kubenswrapper[4771]: I1011 10:36:27.496609 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:27.496702 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:27.496702 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:27.496702 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:27.496702 master-1 kubenswrapper[4771]: I1011 10:36:27.496678 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: I1011 10:36:27.970415 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 
10:36:27.970502 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: I1011 10:36:27.970499 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:28.237959 master-1 kubenswrapper[4771]: I1011 10:36:28.237858 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:36:28.237959 master-1 kubenswrapper[4771]: I1011 10:36:28.237932 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:36:28.238964 master-1 kubenswrapper[4771]: I1011 10:36:28.238612 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 11 10:36:28.240054 master-1 kubenswrapper[4771]: I1011 10:36:28.240010 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:28.245791 master-1 kubenswrapper[4771]: I1011 10:36:28.245742 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="a61df698d34d049669621b2249bfe758" podUID="1ffd3b5548bcf48fce7bfb9a8c802165" Oct 11 10:36:28.282613 master-1 kubenswrapper[4771]: I1011 10:36:28.282549 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"a61df698d34d049669621b2249bfe758\" (UID: \"a61df698d34d049669621b2249bfe758\") " Oct 11 10:36:28.282789 master-1 kubenswrapper[4771]: I1011 10:36:28.282650 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "a61df698d34d049669621b2249bfe758" (UID: "a61df698d34d049669621b2249bfe758"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:28.282789 master-1 kubenswrapper[4771]: I1011 10:36:28.282732 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"a61df698d34d049669621b2249bfe758\" (UID: \"a61df698d34d049669621b2249bfe758\") " Oct 11 10:36:28.282933 master-1 kubenswrapper[4771]: I1011 10:36:28.282847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a61df698d34d049669621b2249bfe758" (UID: "a61df698d34d049669621b2249bfe758"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:28.283041 master-1 kubenswrapper[4771]: I1011 10:36:28.283018 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:28.283041 master-1 kubenswrapper[4771]: I1011 10:36:28.283037 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:28.447471 master-1 kubenswrapper[4771]: I1011 10:36:28.447245 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61df698d34d049669621b2249bfe758" path="/var/lib/kubelet/pods/a61df698d34d049669621b2249bfe758/volumes" Oct 11 10:36:28.498639 master-1 kubenswrapper[4771]: I1011 10:36:28.498544 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:28.498639 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:28.498639 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:28.498639 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:28.498639 master-1 kubenswrapper[4771]: I1011 10:36:28.498633 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:28.653020 master-1 kubenswrapper[4771]: I1011 10:36:28.648860 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 11 10:36:28.653020 master-1 kubenswrapper[4771]: I1011 10:36:28.651987 4771 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f" exitCode=0 Oct 11 10:36:28.653020 master-1 kubenswrapper[4771]: I1011 10:36:28.652124 4771 scope.go:117] "RemoveContainer" containerID="e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316" Oct 11 10:36:28.653020 master-1 kubenswrapper[4771]: I1011 10:36:28.652659 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 11 10:36:28.658663 master-1 kubenswrapper[4771]: I1011 10:36:28.658591 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="a61df698d34d049669621b2249bfe758" podUID="1ffd3b5548bcf48fce7bfb9a8c802165" Oct 11 10:36:28.708405 master-1 kubenswrapper[4771]: I1011 10:36:28.708177 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="a61df698d34d049669621b2249bfe758" podUID="1ffd3b5548bcf48fce7bfb9a8c802165" Oct 11 10:36:28.714148 master-1 kubenswrapper[4771]: I1011 10:36:28.714087 4771 scope.go:117] "RemoveContainer" containerID="e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5" Oct 11 10:36:28.715687 master-1 kubenswrapper[4771]: I1011 10:36:28.715641 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log" Oct 11 10:36:28.716911 master-1 kubenswrapper[4771]: I1011 10:36:28.716874 4771 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:36:28.724161 master-1 kubenswrapper[4771]: I1011 10:36:28.724103 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="34b1362996d1e0c2cea0bee73eb18468" podUID="e39186c2ebd02622803bdbec6984de2a" Oct 11 10:36:28.735116 master-1 kubenswrapper[4771]: I1011 10:36:28.735035 4771 scope.go:117] "RemoveContainer" containerID="63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f" Oct 11 10:36:28.751540 master-1 kubenswrapper[4771]: I1011 10:36:28.751387 4771 scope.go:117] "RemoveContainer" containerID="0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577" Oct 11 10:36:28.778107 master-1 kubenswrapper[4771]: I1011 10:36:28.778059 4771 scope.go:117] "RemoveContainer" containerID="e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316" Oct 11 10:36:28.778612 master-1 kubenswrapper[4771]: E1011 10:36:28.778580 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316\": container with ID starting with e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316 not found: ID does not exist" containerID="e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316" Oct 11 10:36:28.778661 master-1 kubenswrapper[4771]: I1011 10:36:28.778611 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316"} err="failed to get container status \"e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316\": rpc error: code = NotFound desc = could not find container \"e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316\": container with ID starting with 
e3bd4833cb6b364aa83fca88897d32509fa085f61942c4a3cb75cdf814b22316 not found: ID does not exist" Oct 11 10:36:28.778661 master-1 kubenswrapper[4771]: I1011 10:36:28.778635 4771 scope.go:117] "RemoveContainer" containerID="e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5" Oct 11 10:36:28.779030 master-1 kubenswrapper[4771]: E1011 10:36:28.778983 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5\": container with ID starting with e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5 not found: ID does not exist" containerID="e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5" Oct 11 10:36:28.779066 master-1 kubenswrapper[4771]: I1011 10:36:28.779031 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5"} err="failed to get container status \"e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5\": rpc error: code = NotFound desc = could not find container \"e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5\": container with ID starting with e12640b361e2ef73c6e289951cab68eae36f0b9bc81be5fd9209771124b251a5 not found: ID does not exist" Oct 11 10:36:28.779066 master-1 kubenswrapper[4771]: I1011 10:36:28.779054 4771 scope.go:117] "RemoveContainer" containerID="63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f" Oct 11 10:36:28.779413 master-1 kubenswrapper[4771]: E1011 10:36:28.779392 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f\": container with ID starting with 63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f not found: ID does not exist" 
containerID="63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f" Oct 11 10:36:28.779456 master-1 kubenswrapper[4771]: I1011 10:36:28.779413 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f"} err="failed to get container status \"63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f\": rpc error: code = NotFound desc = could not find container \"63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f\": container with ID starting with 63c473df6fa732d6511618b295787205edc98e4025bcc6c14cf2f92361ce263f not found: ID does not exist" Oct 11 10:36:28.779456 master-1 kubenswrapper[4771]: I1011 10:36:28.779427 4771 scope.go:117] "RemoveContainer" containerID="0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577" Oct 11 10:36:28.779765 master-1 kubenswrapper[4771]: E1011 10:36:28.779728 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577\": container with ID starting with 0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577 not found: ID does not exist" containerID="0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577" Oct 11 10:36:28.779799 master-1 kubenswrapper[4771]: I1011 10:36:28.779770 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577"} err="failed to get container status \"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577\": rpc error: code = NotFound desc = could not find container \"0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577\": container with ID starting with 0ce25f4770e41c16a3e6b3f94195d1bb5c5e1e16d7e54fe07f63d76f33baa577 not found: ID does not exist" Oct 11 10:36:28.788979 master-1 
kubenswrapper[4771]: I1011 10:36:28.788956 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " Oct 11 10:36:28.789036 master-1 kubenswrapper[4771]: I1011 10:36:28.788985 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " Oct 11 10:36:28.789036 master-1 kubenswrapper[4771]: I1011 10:36:28.789032 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " Oct 11 10:36:28.789163 master-1 kubenswrapper[4771]: I1011 10:36:28.789129 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:28.789198 master-1 kubenswrapper[4771]: I1011 10:36:28.789175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:28.789240 master-1 kubenswrapper[4771]: I1011 10:36:28.789224 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:28.789449 master-1 kubenswrapper[4771]: I1011 10:36:28.789312 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:36:28.891560 master-1 kubenswrapper[4771]: I1011 10:36:28.891441 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:28.891560 master-1 kubenswrapper[4771]: I1011 10:36:28.891536 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: I1011 10:36:28.969418 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:28.970440 master-2 kubenswrapper[4776]: I1011 10:36:28.969492 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:29.438137 master-1 kubenswrapper[4771]: I1011 10:36:29.437715 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:36:29.438763 master-1 kubenswrapper[4771]: I1011 10:36:29.438162 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:36:29.497458 master-1 kubenswrapper[4771]: I1011 10:36:29.497334 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:29.497458 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:29.497458 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:29.497458 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:29.497458 master-1 kubenswrapper[4771]: I1011 10:36:29.497409 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:29.663722 master-1 kubenswrapper[4771]: I1011 10:36:29.663626 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log" Oct 11 10:36:29.664910 master-1 kubenswrapper[4771]: I1011 10:36:29.664843 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea" exitCode=0 Oct 11 10:36:29.665052 master-1 kubenswrapper[4771]: I1011 10:36:29.664990 4771 scope.go:117] "RemoveContainer" containerID="49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd" Oct 11 10:36:29.665156 master-1 kubenswrapper[4771]: I1011 10:36:29.665086 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:36:29.670586 master-1 kubenswrapper[4771]: I1011 10:36:29.670516 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="34b1362996d1e0c2cea0bee73eb18468" podUID="e39186c2ebd02622803bdbec6984de2a" Oct 11 10:36:29.685160 master-1 kubenswrapper[4771]: I1011 10:36:29.685072 4771 scope.go:117] "RemoveContainer" containerID="3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7" Oct 11 10:36:29.698378 master-1 kubenswrapper[4771]: I1011 10:36:29.698287 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="34b1362996d1e0c2cea0bee73eb18468" podUID="e39186c2ebd02622803bdbec6984de2a" Oct 11 10:36:29.707224 master-1 kubenswrapper[4771]: I1011 10:36:29.707104 4771 scope.go:117] "RemoveContainer" containerID="d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10" Oct 11 10:36:29.725274 master-1 kubenswrapper[4771]: I1011 10:36:29.725195 4771 scope.go:117] "RemoveContainer" containerID="c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1" Oct 11 
Oct 11 10:36:29.748648 master-1 kubenswrapper[4771]: I1011 10:36:29.748585 4771 scope.go:117] "RemoveContainer" containerID="7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea"
Oct 11 10:36:29.768526 master-1 kubenswrapper[4771]: I1011 10:36:29.768464 4771 scope.go:117] "RemoveContainer" containerID="e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296"
Oct 11 10:36:29.801562 master-1 kubenswrapper[4771]: I1011 10:36:29.801291 4771 scope.go:117] "RemoveContainer" containerID="49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd"
Oct 11 10:36:29.802322 master-1 kubenswrapper[4771]: E1011 10:36:29.802246 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd\": container with ID starting with 49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd not found: ID does not exist" containerID="49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd"
Oct 11 10:36:29.802480 master-1 kubenswrapper[4771]: I1011 10:36:29.802318 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd"} err="failed to get container status \"49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd\": rpc error: code = NotFound desc = could not find container \"49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd\": container with ID starting with 49c30f2556445e43240415750853bc066c7b179aee60a314f7f90ecd36a50dcd not found: ID does not exist"
Oct 11 10:36:29.802480 master-1 kubenswrapper[4771]: I1011 10:36:29.802408 4771 scope.go:117] "RemoveContainer" containerID="3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7"
Oct 11 10:36:29.803094 master-1 kubenswrapper[4771]: E1011 10:36:29.803038 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7\": container with ID starting with 3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7 not found: ID does not exist" containerID="3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7"
Oct 11 10:36:29.803185 master-1 kubenswrapper[4771]: I1011 10:36:29.803081 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7"} err="failed to get container status \"3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7\": rpc error: code = NotFound desc = could not find container \"3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7\": container with ID starting with 3a652daa6b9bec9ce58b6d5f79f895564498416d8415b206a1a52c5a9a98d3f7 not found: ID does not exist"
Oct 11 10:36:29.803185 master-1 kubenswrapper[4771]: I1011 10:36:29.803115 4771 scope.go:117] "RemoveContainer" containerID="d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10"
Oct 11 10:36:29.803658 master-1 kubenswrapper[4771]: E1011 10:36:29.803595 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10\": container with ID starting with d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10 not found: ID does not exist" containerID="d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10"
Oct 11 10:36:29.803761 master-1 kubenswrapper[4771]: I1011 10:36:29.803653 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10"} err="failed to get container status \"d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10\": rpc error: code = NotFound desc = could not find container \"d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10\": container with ID starting with d197bce3e8ba0a7f6aff105b6e86788c609756474629f070cf3ae2b1f7ecea10 not found: ID does not exist"
Oct 11 10:36:29.803761 master-1 kubenswrapper[4771]: I1011 10:36:29.803685 4771 scope.go:117] "RemoveContainer" containerID="c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1"
Oct 11 10:36:29.804220 master-1 kubenswrapper[4771]: E1011 10:36:29.804166 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1\": container with ID starting with c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1 not found: ID does not exist" containerID="c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1"
Oct 11 10:36:29.804302 master-1 kubenswrapper[4771]: I1011 10:36:29.804210 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1"} err="failed to get container status \"c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1\": rpc error: code = NotFound desc = could not find container \"c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1\": container with ID starting with c610027bfa053a5744033e9524cf5428c551fc36958f998db58017c8d204f5c1 not found: ID does not exist"
Oct 11 10:36:29.804302 master-1 kubenswrapper[4771]: I1011 10:36:29.804238 4771 scope.go:117] "RemoveContainer" containerID="7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea"
Oct 11 10:36:29.804710 master-1 kubenswrapper[4771]: E1011 10:36:29.804633 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea\": container with ID starting with 7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea not found: ID does not exist" containerID="7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea"
Oct 11 10:36:29.804794 master-1 kubenswrapper[4771]: I1011 10:36:29.804711 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea"} err="failed to get container status \"7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea\": rpc error: code = NotFound desc = could not find container \"7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea\": container with ID starting with 7b6b0f0ebd2652368dabe4dd6da18ce3abd22cb6b9eb7a0abf73685386aeadea not found: ID does not exist"
Oct 11 10:36:29.804794 master-1 kubenswrapper[4771]: I1011 10:36:29.804763 4771 scope.go:117] "RemoveContainer" containerID="e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296"
Oct 11 10:36:29.805253 master-1 kubenswrapper[4771]: E1011 10:36:29.805202 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296\": container with ID starting with e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296 not found: ID does not exist" containerID="e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296"
Oct 11 10:36:29.805344 master-1 kubenswrapper[4771]: I1011 10:36:29.805249 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296"} err="failed to get container status \"e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296\": rpc error: code = NotFound desc = could not find container \"e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296\": container with ID starting with e822f71821ab6da8ae22657d298be7ecdc1b3a32a18978e721ac45edbf111296 not found: ID does not exist"
Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: I1011 10:36:29.969730 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: I1011 10:36:29.969791 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: I1011 10:36:30.248656 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:36:30.248788 master-1 kubenswrapper[4771]: I1011 10:36:30.248778 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:30.447381 master-1 kubenswrapper[4771]: I1011 10:36:30.447238 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b1362996d1e0c2cea0bee73eb18468" path="/var/lib/kubelet/pods/34b1362996d1e0c2cea0bee73eb18468/volumes"
Oct 11 10:36:30.497873 master-1 kubenswrapper[4771]: I1011 10:36:30.497761 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:30.497873 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:30.497873 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:30.497873 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:30.497873 master-1 kubenswrapper[4771]: I1011 10:36:30.497854 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: I1011 10:36:30.968549 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:30.969009 master-2 kubenswrapper[4776]: I1011 10:36:30.968980 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:31.272629 master-2 kubenswrapper[4776]: E1011 10:36:31.272499 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:36:31.497409 master-1 kubenswrapper[4771]: I1011 10:36:31.497255 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:31.497409 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:31.497409 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:31.497409 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:31.498502 master-1 kubenswrapper[4771]: I1011 10:36:31.497414 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:31.607255 master-1 kubenswrapper[4771]: I1011 10:36:31.607131 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:36:31.607255 master-1 kubenswrapper[4771]: I1011 10:36:31.607239 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: I1011 10:36:31.969810 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: I1011 10:36:31.969904 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:32.247077 master-2 kubenswrapper[4776]: E1011 10:36:32.246929 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:36:32.497487 master-1 kubenswrapper[4771]: I1011 10:36:32.497376 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:32.497487 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:32.497487 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:32.497487 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:32.498542 master-1 kubenswrapper[4771]: I1011 10:36:32.497566 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: I1011 10:36:32.969375 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: I1011 10:36:32.969444 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:33.237809 master-1 kubenswrapper[4771]: I1011 10:36:33.237736 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 11 10:36:33.238120 master-1 kubenswrapper[4771]: I1011 10:36:33.237818 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 11 10:36:33.498164 master-1 kubenswrapper[4771]: I1011 10:36:33.497997 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:33.498164 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:33.498164 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:33.498164 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:33.499512 master-1 kubenswrapper[4771]: I1011 10:36:33.499458 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: I1011 10:36:33.969736 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:33.970581 master-2 kubenswrapper[4776]: I1011 10:36:33.969864 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:34.496959 master-1 kubenswrapper[4771]: I1011 10:36:34.496889 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:34.496959 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:34.496959 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:34.496959 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:34.497289 master-1 kubenswrapper[4771]: I1011 10:36:34.496982 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:34.803047 master-2 kubenswrapper[4776]: E1011 10:36:34.802999 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:36:34.803243 master-2 kubenswrapper[4776]: E1011 10:36:34.803219 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:37:38.803092955 +0000 UTC m=+693.587519684 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found
Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: I1011 10:36:34.970204 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: I1011 10:36:34.970270 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:35.497430 master-1 kubenswrapper[4771]: I1011 10:36:35.497320 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:35.497430 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:35.497430 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:35.497430 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:35.498431 master-1 kubenswrapper[4771]: I1011 10:36:35.497436 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: I1011 10:36:35.970123 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: I1011 10:36:35.970198 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:36.437207 master-1 kubenswrapper[4771]: I1011 10:36:36.437135 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:36:36.458647 master-1 kubenswrapper[4771]: I1011 10:36:36.458580 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:36.458647 master-1 kubenswrapper[4771]: I1011 10:36:36.458640 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:36.485082 master-1 kubenswrapper[4771]: I1011 10:36:36.485022 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:36:36.498010 master-1 kubenswrapper[4771]: I1011 10:36:36.497915 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:36.498010 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:36.498010 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:36.498010 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:36.499005 master-1 kubenswrapper[4771]: I1011 10:36:36.498012 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:36.503856 master-1 kubenswrapper[4771]: I1011 10:36:36.503818 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:36:36.530883 master-1 kubenswrapper[4771]: W1011 10:36:36.530791 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffd3b5548bcf48fce7bfb9a8c802165.slice/crio-9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7 WatchSource:0}: Error finding container 9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7: Status 404 returned error can't find the container with id 9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7
Oct 11 10:36:36.606931 master-1 kubenswrapper[4771]: I1011 10:36:36.606835 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:36:36.606931 master-1 kubenswrapper[4771]: I1011 10:36:36.606924 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:36:36.607214 master-1 kubenswrapper[4771]: I1011 10:36:36.607032 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 11 10:36:36.607585 master-1 kubenswrapper[4771]: I1011 10:36:36.607533 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:36:36.607674 master-1 kubenswrapper[4771]: I1011 10:36:36.607581 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:36:36.707531 master-1 kubenswrapper[4771]: I1011 10:36:36.707468 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7"}
Oct 11 10:36:36.734090 master-2 kubenswrapper[4776]: E1011 10:36:36.733967 4776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{metrics-server-65d86dff78-crzgp.186d696977ca9b1f openshift-monitoring 14134 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:metrics-server-65d86dff78-crzgp,UID:5473628e-94c8-4706-bb03-ff4836debe5f,APIVersion:v1,ResourceVersion:10129,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"client-ca-bundle\" : secret \"metrics-server-ap7ej74ueigk4\" not found,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:35:31 +0000 UTC,LastTimestamp:2025-10-11 10:36:02.729086168 +0000 UTC m=+597.513512877,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}"
Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: I1011 10:36:36.969524 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:36.970104 master-2 kubenswrapper[4776]: I1011 10:36:36.969646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:37.437134 master-1 kubenswrapper[4771]: I1011 10:36:37.437008 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:36:37.459332 master-1 kubenswrapper[4771]: I1011 10:36:37.459249 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5"
Oct 11 10:36:37.459332 master-1 kubenswrapper[4771]: I1011 10:36:37.459307 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5"
Oct 11 10:36:37.486504 master-1 kubenswrapper[4771]: I1011 10:36:37.486414 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:36:37.497339 master-1 kubenswrapper[4771]: I1011 10:36:37.497213 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:37.497339 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:37.497339 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:37.497339 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:37.497339 master-1 kubenswrapper[4771]: I1011 10:36:37.497332 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:37.509455 master-1 kubenswrapper[4771]: I1011 10:36:37.509328 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:36:37.537104 master-1 kubenswrapper[4771]: W1011 10:36:37.537007 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode39186c2ebd02622803bdbec6984de2a.slice/crio-ff12d9d351de2bbe6eed10390bc259efe990212b31b335f9dbf3e11e9545f55b WatchSource:0}: Error finding container ff12d9d351de2bbe6eed10390bc259efe990212b31b335f9dbf3e11e9545f55b: Status 404 returned error can't find the container with id ff12d9d351de2bbe6eed10390bc259efe990212b31b335f9dbf3e11e9545f55b
Oct 11 10:36:37.718971 master-1 kubenswrapper[4771]: I1011 10:36:37.718894 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"ff12d9d351de2bbe6eed10390bc259efe990212b31b335f9dbf3e11e9545f55b"}
Oct 11 10:36:37.720534 master-1 kubenswrapper[4771]: I1011 10:36:37.720492 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"}
Oct 11 10:36:37.720987 master-1 kubenswrapper[4771]: I1011 10:36:37.720929 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:37.720987 master-1 kubenswrapper[4771]: I1011 10:36:37.720980 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: I1011 10:36:37.968758 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:37.969496 master-2 kubenswrapper[4776]: I1011 10:36:37.968837 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:38.238936 master-1 kubenswrapper[4771]: I1011 10:36:38.238812 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 11 10:36:38.238936 master-1 kubenswrapper[4771]: I1011 10:36:38.238901 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 11 10:36:38.496987 master-1 kubenswrapper[4771]: I1011 10:36:38.496652 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:38.496987 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:38.496987 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:38.496987 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:38.496987 master-1 kubenswrapper[4771]: I1011 10:36:38.496707 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:38.727263 master-1 kubenswrapper[4771]: I1011 10:36:38.727202 4771 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912" exitCode=0
Oct 11 10:36:38.727858 master-1 kubenswrapper[4771]: I1011 10:36:38.727279 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerDied","Data":"70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912"}
Oct 11 10:36:38.727858 master-1 kubenswrapper[4771]: I1011 10:36:38.727608 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:38.727858 master-1 kubenswrapper[4771]: I1011 10:36:38.727626 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:36:38.727858 master-1 kubenswrapper[4771]: I1011 10:36:38.727704 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5"
Oct 11 10:36:38.727858 master-1 kubenswrapper[4771]: I1011 10:36:38.727728 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5"
Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: I1011 10:36:38.970403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: I1011 10:36:38.970480 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:39.497546 master-1 kubenswrapper[4771]: I1011 10:36:39.497435 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:39.497546 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:39.497546 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:39.497546 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:39.497546 master-1 kubenswrapper[4771]: I1011 10:36:39.497537 4771 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:39.745206 master-1 kubenswrapper[4771]: I1011 10:36:39.742923 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109"} Oct 11 10:36:39.745206 master-1 kubenswrapper[4771]: I1011 10:36:39.742978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd"} Oct 11 10:36:39.745206 master-1 kubenswrapper[4771]: I1011 10:36:39.742991 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a"} Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: I1011 10:36:39.970406 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:39.971223 master-2 kubenswrapper[4776]: I1011 10:36:39.970502 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:40.496490 master-1 kubenswrapper[4771]: I1011 10:36:40.496380 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:40.496490 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:40.496490 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:40.496490 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:40.496490 master-1 kubenswrapper[4771]: I1011 10:36:40.496456 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:40.750469 master-1 kubenswrapper[4771]: I1011 10:36:40.750362 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07"} Oct 11 10:36:40.750469 master-1 kubenswrapper[4771]: I1011 10:36:40.750414 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4"} Oct 11 10:36:40.750931 master-1 kubenswrapper[4771]: I1011 10:36:40.750534 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:36:40.750931 master-1 kubenswrapper[4771]: I1011 10:36:40.750627 4771 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:36:40.750931 master-1 kubenswrapper[4771]: I1011 10:36:40.750646 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: I1011 10:36:40.969434 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: I1011 10:36:40.969500 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:41.273812 master-2 kubenswrapper[4776]: E1011 10:36:41.273565 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:41.497041 master-1 kubenswrapper[4771]: I1011 10:36:41.496942 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:41.497041 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:41.497041 
master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:41.497041 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:41.497500 master-1 kubenswrapper[4771]: I1011 10:36:41.497050 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:41.607243 master-1 kubenswrapper[4771]: I1011 10:36:41.607169 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:36:41.607505 master-1 kubenswrapper[4771]: I1011 10:36:41.607292 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:36:41.758392 master-1 kubenswrapper[4771]: I1011 10:36:41.758178 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:36:41.758392 master-1 kubenswrapper[4771]: I1011 10:36:41.758235 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: I1011 10:36:41.970118 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:36:41.970233 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: I1011 10:36:41.970223 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:42.248035 master-2 kubenswrapper[4776]: E1011 10:36:42.247842 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:42.458086 master-1 kubenswrapper[4771]: I1011 10:36:42.457979 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c" Oct 11 10:36:42.482916 master-1 kubenswrapper[4771]: I1011 10:36:42.482820 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="233d76fa-d8e2-41eb-9272-6cdd0056b793" Oct 11 10:36:42.497184 master-1 kubenswrapper[4771]: I1011 10:36:42.497115 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:42.497184 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:42.497184 master-1 kubenswrapper[4771]: 
[+]process-running ok Oct 11 10:36:42.497184 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:42.497431 master-1 kubenswrapper[4771]: I1011 10:36:42.497236 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: I1011 10:36:42.970351 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:42.972100 master-2 kubenswrapper[4776]: I1011 10:36:42.970451 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:43.245207 master-1 kubenswrapper[4771]: I1011 10:36:43.245144 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:36:43.496887 master-1 kubenswrapper[4771]: I1011 10:36:43.496697 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:43.496887 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:43.496887 master-1 kubenswrapper[4771]: 
[+]process-running ok Oct 11 10:36:43.496887 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:43.496887 master-1 kubenswrapper[4771]: I1011 10:36:43.496790 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: I1011 10:36:43.970740 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: I1011 10:36:43.970805 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:44.497802 master-1 kubenswrapper[4771]: I1011 10:36:44.497707 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:44.497802 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:44.497802 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:44.497802 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:44.498974 master-1 kubenswrapper[4771]: I1011 10:36:44.497816 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: I1011 10:36:44.970617 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: I1011 10:36:44.970698 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:45.497422 master-1 kubenswrapper[4771]: I1011 10:36:45.497302 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:45.497422 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:45.497422 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:45.497422 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:45.497869 master-1 kubenswrapper[4771]: I1011 10:36:45.497422 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 
10:36:45.970167 master-2 kubenswrapper[4776]: I1011 10:36:45.970095 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:45.970767 master-2 kubenswrapper[4776]: I1011 10:36:45.970725 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297082 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297141 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297157 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297180 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeReady" Oct 11 10:36:46.497015 master-1 kubenswrapper[4771]: I1011 10:36:46.496936 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:46.497015 master-1 
kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:46.497015 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:46.497015 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:46.497543 master-1 kubenswrapper[4771]: I1011 10:36:46.497019 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:46.608485 master-1 kubenswrapper[4771]: I1011 10:36:46.608390 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 11 10:36:46.609060 master-1 kubenswrapper[4771]: I1011 10:36:46.608503 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: I1011 10:36:46.970899 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:46.971470 master-2 kubenswrapper[4776]: I1011 10:36:46.971066 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:47.497305 master-1 kubenswrapper[4771]: I1011 10:36:47.497214 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:47.497305 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:47.497305 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:47.497305 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:47.497811 master-1 kubenswrapper[4771]: I1011 10:36:47.497308 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: I1011 10:36:47.970268 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: I1011 10:36:47.970361 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:48.498346 
master-1 kubenswrapper[4771]: I1011 10:36:48.498247 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:48.498346 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:48.498346 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:48.498346 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:48.498985 master-1 kubenswrapper[4771]: I1011 10:36:48.498383 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:48.801696 master-1 kubenswrapper[4771]: I1011 10:36:48.801627 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-sk5cm_0e03dddd-4197-40ae-91f1-7e83f90dbd58/approver/0.log" Oct 11 10:36:48.802426 master-1 kubenswrapper[4771]: I1011 10:36:48.802324 4771 generic.go:334] "Generic (PLEG): container finished" podID="0e03dddd-4197-40ae-91f1-7e83f90dbd58" containerID="c93dfaf9a8b9fa7850e31e158a74ae1fbf85ec41153c0883cb5064b10872afdb" exitCode=1 Oct 11 10:36:48.802596 master-1 kubenswrapper[4771]: I1011 10:36:48.802422 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-sk5cm" event={"ID":"0e03dddd-4197-40ae-91f1-7e83f90dbd58","Type":"ContainerDied","Data":"c93dfaf9a8b9fa7850e31e158a74ae1fbf85ec41153c0883cb5064b10872afdb"} Oct 11 10:36:48.803328 master-1 kubenswrapper[4771]: I1011 10:36:48.803251 4771 scope.go:117] "RemoveContainer" containerID="c93dfaf9a8b9fa7850e31e158a74ae1fbf85ec41153c0883cb5064b10872afdb" Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: I1011 10:36:48.968894 
4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:48.969433 master-2 kubenswrapper[4776]: I1011 10:36:48.969035 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:49.497490 master-1 kubenswrapper[4771]: I1011 10:36:49.497347 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:49.497490 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld Oct 11 10:36:49.497490 master-1 kubenswrapper[4771]: [+]process-running ok Oct 11 10:36:49.497490 master-1 kubenswrapper[4771]: healthz check failed Oct 11 10:36:49.497942 master-1 kubenswrapper[4771]: I1011 10:36:49.497494 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:49.811652 master-1 kubenswrapper[4771]: I1011 10:36:49.811568 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-sk5cm_0e03dddd-4197-40ae-91f1-7e83f90dbd58/approver/0.log" Oct 11 10:36:49.812762 master-1 
kubenswrapper[4771]: I1011 10:36:49.812199 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-sk5cm" event={"ID":"0e03dddd-4197-40ae-91f1-7e83f90dbd58","Type":"ContainerStarted","Data":"bace159805a524fcefe768a06f0db4deab6f2bcec258b041da2198e3ef2b3ee4"}
Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: I1011 10:36:49.969941 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:49.970989 master-2 kubenswrapper[4776]: I1011 10:36:49.970094 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: I1011 10:36:50.245713 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:36:50.245900 master-1 kubenswrapper[4771]: I1011 10:36:50.245864 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:50.246865 master-1 kubenswrapper[4771]: I1011 10:36:50.246069 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk"
Oct 11 10:36:50.497295 master-1 kubenswrapper[4771]: I1011 10:36:50.497085 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:50.497295 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:50.497295 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:50.497295 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:50.497295 master-1 kubenswrapper[4771]: I1011 10:36:50.497181 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: I1011 10:36:50.969652 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:50.970085 master-2 kubenswrapper[4776]: I1011 10:36:50.969719 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:51.497082 master-1 kubenswrapper[4771]: I1011 10:36:51.497020 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:51.497082 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:51.497082 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:51.497082 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:51.497082 master-1 kubenswrapper[4771]: I1011 10:36:51.497097 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:51.607853 master-1 kubenswrapper[4771]: I1011 10:36:51.607745 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:36:51.608145 master-1 kubenswrapper[4771]: I1011 10:36:51.607849 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: I1011 10:36:51.970086 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:51.970787 master-2 kubenswrapper[4776]: I1011 10:36:51.970179 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:52.497318 master-1 kubenswrapper[4771]: I1011 10:36:52.497227 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:52.497318 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:52.497318 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:52.497318 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:52.498815 master-1 kubenswrapper[4771]: I1011 10:36:52.498758 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: I1011 10:36:52.970234 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: I1011 10:36:52.970386 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:53.497533 master-1 kubenswrapper[4771]: I1011 10:36:53.497475 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:53.497533 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:53.497533 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:53.497533 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:53.497533 master-1 kubenswrapper[4771]: I1011 10:36:53.497534 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: I1011 10:36:53.968851 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: I1011 10:36:53.968957 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:54.497870 master-1 kubenswrapper[4771]: I1011 10:36:54.497774 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:54.497870 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:54.497870 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:54.497870 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:54.498905 master-1 kubenswrapper[4771]: I1011 10:36:54.497875 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: I1011 10:36:54.971259 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: I1011 10:36:54.971316 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:55.501859 master-1 kubenswrapper[4771]: I1011 10:36:55.501798 4771 patch_prober.go:28] interesting pod/router-default-5ddb89f76-z5t6x container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:55.501859 master-1 kubenswrapper[4771]: [-]has-synced failed: reason withheld
Oct 11 10:36:55.501859 master-1 kubenswrapper[4771]: [+]process-running ok
Oct 11 10:36:55.501859 master-1 kubenswrapper[4771]: healthz check failed
Oct 11 10:36:55.503471 master-1 kubenswrapper[4771]: I1011 10:36:55.501887 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-z5t6x" podUID="04cd4a19-2532-43d1-9144-1f59d9e52d19" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: I1011 10:36:55.970527 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:55.970931 master-2 kubenswrapper[4776]: I1011 10:36:55.970646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:56.498395 master-1 kubenswrapper[4771]: I1011 10:36:56.498266 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5ddb89f76-z5t6x"
Oct 11 10:36:56.502496 master-1 kubenswrapper[4771]: I1011 10:36:56.502421 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5ddb89f76-z5t6x"
Oct 11 10:36:56.607601 master-1 kubenswrapper[4771]: I1011 10:36:56.607501 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:36:56.607601 master-1 kubenswrapper[4771]: I1011 10:36:56.607583 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: I1011 10:36:56.970324 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:56.971823 master-2 kubenswrapper[4776]: I1011 10:36:56.970505 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: I1011 10:36:57.968603 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: I1011 10:36:57.968663 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: I1011 10:36:58.969014 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: I1011 10:36:58.969100 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:36:59.433205 master-1 kubenswrapper[4771]: I1011 10:36:59.433096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:36:59.434228 master-1 kubenswrapper[4771]: E1011 10:36:59.433409 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" failed. No retries permitted until 2025-10-11 10:39:01.433335615 +0000 UTC m=+773.407562086 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:36:59.535084 master-1 kubenswrapper[4771]: I1011 10:36:59.534985 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:36:59.535400 master-1 kubenswrapper[4771]: E1011 10:36:59.535247 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:39:01.535215177 +0000 UTC m=+773.509441708 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory
Oct 11 10:36:59.949906 master-1 kubenswrapper[4771]: E1011 10:36:59.949825 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podUID="d7647696-42d9-4dd9-bc3b-a4d52a42cf9a"
Oct 11 10:36:59.950255 master-1 kubenswrapper[4771]: E1011 10:36:59.949918 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podUID="6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b"
Oct 11 10:36:59.964810 master-2 kubenswrapper[4776]: I1011 10:36:59.964735 4776 status_manager.go:851] "Failed to get status for pod" podUID="1afe0068-3c97-4916-ba53-53f2841a95b0" pod="openshift-marketplace/certified-operators-xtrbk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-xtrbk)"
Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: I1011 10:36:59.970419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:36:59.970894 master-2 kubenswrapper[4776]: I1011 10:36:59.970453 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:00.887508 master-1 kubenswrapper[4771]: I1011 10:37:00.887430 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:37:00.887508 master-1 kubenswrapper[4771]: I1011 10:37:00.887470 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: I1011 10:37:00.969584 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:37:00.970086 master-2 kubenswrapper[4776]: I1011 10:37:00.970062 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:01.607387 master-1 kubenswrapper[4771]: I1011 10:37:01.607290 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:37:01.607387 master-1 kubenswrapper[4771]: I1011 10:37:01.607389 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: I1011 10:37:01.969372 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: I1011 10:37:01.969427 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: I1011 10:37:02.969738 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld
Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: [+]process-running ok
Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: healthz check failed
Oct 11 10:37:02.970455 master-2 kubenswrapper[4776]: I1011 10:37:02.969824 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:02.970455 master-2 kubenswrapper[4776]: I1011 10:37:02.969880 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:37:02.970564 master-2 kubenswrapper[4776]: I1011 10:37:02.970528 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted"
Oct 11 10:37:02.970612 master-2 kubenswrapper[4776]: I1011 10:37:02.970575 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9" gracePeriod=3600
Oct 11 10:37:06.607344 master-1 kubenswrapper[4771]: I1011 10:37:06.607206 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:37:06.607344 master-1 kubenswrapper[4771]: I1011 10:37:06.607309 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:37:06.928422 master-1 kubenswrapper[4771]: I1011 10:37:06.928223 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_1ffd3b5548bcf48fce7bfb9a8c802165/wait-for-host-port/0.log"
Oct 11 10:37:06.928422 master-1 kubenswrapper[4771]: I1011 10:37:06.928320 4771 generic.go:334] "Generic (PLEG): container finished" podID="1ffd3b5548bcf48fce7bfb9a8c802165" containerID="e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99" exitCode=124
Oct 11 10:37:06.928728 master-1 kubenswrapper[4771]: I1011 10:37:06.928419 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerDied","Data":"e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"}
Oct 11 10:37:06.928918 master-1 kubenswrapper[4771]: I1011 10:37:06.928870 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:06.928918 master-1 kubenswrapper[4771]: I1011 10:37:06.928907 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:06.936645 master-1 kubenswrapper[4771]: I1011 10:37:06.935875 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c"
Oct 11 10:37:07.939529 master-1 kubenswrapper[4771]: I1011 10:37:07.939448 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_1ffd3b5548bcf48fce7bfb9a8c802165/wait-for-host-port/0.log"
Oct 11 10:37:07.939529 master-1 kubenswrapper[4771]: I1011 10:37:07.939520 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"0d52bdbb99c295f13f83ea3eb7c6fa80b331e9642a12b706a9065cf6d85ba5e0"}
Oct 11 10:37:07.940460 master-1 kubenswrapper[4771]: I1011 10:37:07.939840 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:07.940460 master-1 kubenswrapper[4771]: I1011 10:37:07.939860 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:07.947578 master-1 kubenswrapper[4771]: I1011 10:37:07.947508 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c"
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: I1011 10:37:10.247940 4771 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-bg7lk container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]metric-storage-ready ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]metric-informer-sync ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [+]metadata-informer-sync ok
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:37:10.248017 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:37:10.248986 master-1 kubenswrapper[4771]: I1011 10:37:10.248046 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:11.607922 master-1 kubenswrapper[4771]: I1011 10:37:11.607841 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:37:11.608752 master-1 kubenswrapper[4771]: I1011 10:37:11.607960 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:37:16.607909 master-1 kubenswrapper[4771]: I1011 10:37:16.607806 4771 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 11 10:37:16.608610 master-1 kubenswrapper[4771]: I1011 10:37:16.607908 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="bdeaf49a-fbf2-4e26-88cf-10e723bbdfbe" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 11 10:37:20.016488 master-1 kubenswrapper[4771]: I1011 10:37:20.016312 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_1ffd3b5548bcf48fce7bfb9a8c802165/wait-for-host-port/0.log"
Oct 11 10:37:20.017320 master-1 kubenswrapper[4771]: I1011 10:37:20.016486 4771 generic.go:334] "Generic (PLEG): container finished" podID="1ffd3b5548bcf48fce7bfb9a8c802165" containerID="0d52bdbb99c295f13f83ea3eb7c6fa80b331e9642a12b706a9065cf6d85ba5e0" exitCode=0
Oct 11 10:37:20.017320 master-1 kubenswrapper[4771]: I1011 10:37:20.016550 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerDied","Data":"0d52bdbb99c295f13f83ea3eb7c6fa80b331e9642a12b706a9065cf6d85ba5e0"}
Oct 11 10:37:20.017320 master-1 kubenswrapper[4771]: I1011 10:37:20.016667 4771 scope.go:117] "RemoveContainer" containerID="e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"
Oct 11 10:37:20.017320 master-1 kubenswrapper[4771]: I1011 10:37:20.017152 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:20.017320 master-1 kubenswrapper[4771]: I1011 10:37:20.017196 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:20.022972 master-1 kubenswrapper[4771]: I1011 10:37:20.022863 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c"
Oct 11 10:37:20.024046 master-1 kubenswrapper[4771]: I1011 10:37:20.023974 4771 scope.go:117] "RemoveContainer" containerID="e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"
Oct 11 10:37:20.053683 master-1 kubenswrapper[4771]: E1011 10:37:20.053575 4771 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-1_openshift-kube-scheduler_1ffd3b5548bcf48fce7bfb9a8c802165_0 in pod sandbox 9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7 from index: no such id: 'e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99'" containerID="e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"
Oct 11 10:37:20.053846 master-1 kubenswrapper[4771]: I1011 10:37:20.053666 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99"} err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-1_openshift-kube-scheduler_1ffd3b5548bcf48fce7bfb9a8c802165_0 in pod sandbox 9d69cbb2721062172f20487d13f1ddcc1efa6d2ad3a2f2fe34e0e943d4e5aff7 from index: no such id: 'e1bceca0463892708755bd3c74753a682b8d9cb1da4a4db3b72edc17dbc8cf99'"
Oct 11 10:37:21.047343 master-1 kubenswrapper[4771]: I1011 10:37:21.047249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"e74bc6e341be990d641b3208e5db3f1986459a1e52462f9ffe91cdc733fb24ba"}
Oct 11 10:37:21.047343 master-1 kubenswrapper[4771]: I1011 10:37:21.047335 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"20eb65db094b78f38b3f02d62c1011c3f80b66cf8dad3e6c74359f3d5455aa20"}
Oct 11 10:37:21.048426 master-1 kubenswrapper[4771]: I1011 10:37:21.047394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"fac2308db6f6681cb5064427b63efb1dc9e4c6c67dbcb6bdd98a8dcd331c1383"}
Oct 11 10:37:21.048426 master-1 kubenswrapper[4771]: I1011 10:37:21.047823 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:21.048426 master-1 kubenswrapper[4771]: I1011 10:37:21.047884 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c"
Oct 11 10:37:21.048426 master-1 kubenswrapper[4771]: I1011 10:37:21.048236 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:37:21.057394 master-1 kubenswrapper[4771]: I1011 10:37:21.057277 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c"
Oct 11 10:37:21.616957 master-1 kubenswrapper[4771]: I1011 10:37:21.616891 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 11 10:37:21.705463 master-2 kubenswrapper[4776]: I1011 10:37:21.705363 4776 generic.go:334] "Generic (PLEG): container finished" podID="7652e0ca-2d18-48c7-80e0-f4a936038377" containerID="bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d" exitCode=0
Oct 11 10:37:21.705463
master-2 kubenswrapper[4776]: I1011 10:37:21.705459 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerDied","Data":"bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d"} Oct 11 10:37:21.706758 master-2 kubenswrapper[4776]: I1011 10:37:21.706453 4776 scope.go:117] "RemoveContainer" containerID="bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d" Oct 11 10:37:22.053245 master-1 kubenswrapper[4771]: I1011 10:37:22.053192 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c" Oct 11 10:37:22.053245 master-1 kubenswrapper[4771]: I1011 10:37:22.053226 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c" Oct 11 10:37:22.056695 master-1 kubenswrapper[4771]: I1011 10:37:22.056661 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c" Oct 11 10:37:22.717174 master-2 kubenswrapper[4776]: I1011 10:37:22.717072 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"7dae94882449018d204394aae895d50458bd4e4a4aa658882d690763bdb1bc8d"} Oct 11 10:37:22.720201 master-2 kubenswrapper[4776]: I1011 10:37:22.717703 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:37:22.720201 master-2 kubenswrapper[4776]: I1011 10:37:22.719253 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:37:28.096986 master-1 kubenswrapper[4771]: I1011 10:37:28.096919 4771 generic.go:334] "Generic (PLEG): container finished" podID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerID="d71774e5747fba198d1f1c685867c43372766be8110c50262b34cb5aee247b7d" exitCode=0 Oct 11 10:37:28.096986 master-1 kubenswrapper[4771]: I1011 10:37:28.096974 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" event={"ID":"daf74cdb-6bdb-465a-8e3e-194e8868570f","Type":"ContainerDied","Data":"d71774e5747fba198d1f1c685867c43372766be8110c50262b34cb5aee247b7d"} Oct 11 10:37:28.267610 master-1 kubenswrapper[4771]: I1011 10:37:28.267523 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" Oct 11 10:37:28.346154 master-1 kubenswrapper[4771]: I1011 10:37:28.345955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346154 master-1 kubenswrapper[4771]: I1011 10:37:28.346054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346154 master-1 kubenswrapper[4771]: I1011 10:37:28.346101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: 
\"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346154 master-1 kubenswrapper[4771]: I1011 10:37:28.346130 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346154 master-1 kubenswrapper[4771]: I1011 10:37:28.346165 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346578 master-1 kubenswrapper[4771]: I1011 10:37:28.346204 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2hx\" (UniqueName: \"kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.346578 master-1 kubenswrapper[4771]: I1011 10:37:28.346234 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs\") pod \"daf74cdb-6bdb-465a-8e3e-194e8868570f\" (UID: \"daf74cdb-6bdb-465a-8e3e-194e8868570f\") " Oct 11 10:37:28.347274 master-1 kubenswrapper[4771]: I1011 10:37:28.347183 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: 
"daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:28.347547 master-1 kubenswrapper[4771]: I1011 10:37:28.347459 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:28.347606 master-1 kubenswrapper[4771]: I1011 10:37:28.347496 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log" (OuterVolumeSpecName: "audit-log") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:37:28.350747 master-1 kubenswrapper[4771]: I1011 10:37:28.350679 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:28.351190 master-1 kubenswrapper[4771]: I1011 10:37:28.351123 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx" (OuterVolumeSpecName: "kube-api-access-5s2hx") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "kube-api-access-5s2hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:28.351686 master-1 kubenswrapper[4771]: I1011 10:37:28.351643 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:28.352233 master-1 kubenswrapper[4771]: I1011 10:37:28.352176 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "daf74cdb-6bdb-465a-8e3e-194e8868570f" (UID: "daf74cdb-6bdb-465a-8e3e-194e8868570f"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.447971 4771 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-client-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.448018 4771 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/daf74cdb-6bdb-465a-8e3e-194e8868570f-audit-log\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.448032 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-client-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.448045 4771 reconciler_common.go:293] 
"Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-metrics-server-audit-profiles\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.448062 4771 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/daf74cdb-6bdb-465a-8e3e-194e8868570f-secret-metrics-server-tls\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448059 master-1 kubenswrapper[4771]: I1011 10:37:28.448075 4771 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf74cdb-6bdb-465a-8e3e-194e8868570f-configmap-kubelet-serving-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:28.448698 master-1 kubenswrapper[4771]: I1011 10:37:28.448089 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s2hx\" (UniqueName: \"kubernetes.io/projected/daf74cdb-6bdb-465a-8e3e-194e8868570f-kube-api-access-5s2hx\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:29.104262 master-1 kubenswrapper[4771]: I1011 10:37:29.104200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" event={"ID":"daf74cdb-6bdb-465a-8e3e-194e8868570f","Type":"ContainerDied","Data":"307edc8bf8db53981b4988030525d3bc29e6569573860e0ae13cb952073e6408"} Oct 11 10:37:29.104262 master-1 kubenswrapper[4771]: I1011 10:37:29.104269 4771 scope.go:117] "RemoveContainer" containerID="d71774e5747fba198d1f1c685867c43372766be8110c50262b34cb5aee247b7d" Oct 11 10:37:29.105187 master-1 kubenswrapper[4771]: I1011 10:37:29.104294 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-bg7lk" Oct 11 10:37:36.751962 master-1 kubenswrapper[4771]: I1011 10:37:36.751869 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ncpc" podStartSLOduration=102.220313808 podStartE2EDuration="1m44.75184485s" podCreationTimestamp="2025-10-11 10:35:52 +0000 UTC" firstStartedPulling="2025-10-11 10:35:53.367879779 +0000 UTC m=+585.342106230" lastFinishedPulling="2025-10-11 10:35:55.899410831 +0000 UTC m=+587.873637272" observedRunningTime="2025-10-11 10:35:58.036101539 +0000 UTC m=+590.010328020" watchObservedRunningTime="2025-10-11 10:37:36.75184485 +0000 UTC m=+688.726071331" Oct 11 10:37:36.771089 master-1 kubenswrapper[4771]: I1011 10:37:36.770979 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-plxkp" podStartSLOduration=102.358018525 podStartE2EDuration="1m44.770954956s" podCreationTimestamp="2025-10-11 10:35:52 +0000 UTC" firstStartedPulling="2025-10-11 10:35:54.404160024 +0000 UTC m=+586.378386495" lastFinishedPulling="2025-10-11 10:35:56.817096435 +0000 UTC m=+588.791322926" observedRunningTime="2025-10-11 10:35:58.054774232 +0000 UTC m=+590.029000673" watchObservedRunningTime="2025-10-11 10:37:36.770954956 +0000 UTC m=+688.745181407" Oct 11 10:37:36.861424 master-1 kubenswrapper[4771]: I1011 10:37:36.861335 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:37:36.863553 master-1 kubenswrapper[4771]: I1011 10:37:36.863487 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="70960866-cb10-49c0-b28d-18f7fa34215c" Oct 11 10:37:36.863693 master-1 kubenswrapper[4771]: I1011 10:37:36.863677 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" 
podUID="70960866-cb10-49c0-b28d-18f7fa34215c" Oct 11 10:37:36.868519 master-1 kubenswrapper[4771]: I1011 10:37:36.868455 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:37:36.878122 master-1 kubenswrapper[4771]: I1011 10:37:36.878053 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="1ffd3b5548bcf48fce7bfb9a8c802165" podUID="72ed0d6c-89fd-4c0b-bd7d-af241a60bf0c" Oct 11 10:37:36.880902 master-1 kubenswrapper[4771]: I1011 10:37:36.880847 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 11 10:37:36.891711 master-1 kubenswrapper[4771]: I1011 10:37:36.891636 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:37:36.897739 master-1 kubenswrapper[4771]: I1011 10:37:36.895480 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:37:36.897739 master-1 kubenswrapper[4771]: I1011 10:37:36.895882 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:37:36.897739 master-1 kubenswrapper[4771]: I1011 10:37:36.896025 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:37:36.897739 master-1 kubenswrapper[4771]: I1011 10:37:36.896050 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="1087abee-42bf-483a-b523-13b9169544e5" Oct 11 10:37:36.898693 master-1 kubenswrapper[4771]: I1011 10:37:36.898647 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"] Oct 11 10:37:36.898949 master-1 kubenswrapper[4771]: 
E1011 10:37:36.898926 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" Oct 11 10:37:36.898949 master-1 kubenswrapper[4771]: I1011 10:37:36.898946 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" Oct 11 10:37:36.899132 master-1 kubenswrapper[4771]: E1011 10:37:36.898967 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e39e90-67d5-40f4-ad76-1b32adf359ed" containerName="installer" Oct 11 10:37:36.899132 master-1 kubenswrapper[4771]: I1011 10:37:36.898976 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e39e90-67d5-40f4-ad76-1b32adf359ed" containerName="installer" Oct 11 10:37:36.899132 master-1 kubenswrapper[4771]: I1011 10:37:36.899106 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" containerName="metrics-server" Oct 11 10:37:36.899132 master-1 kubenswrapper[4771]: I1011 10:37:36.899126 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e39e90-67d5-40f4-ad76-1b32adf359ed" containerName="installer" Oct 11 10:37:36.899720 master-1 kubenswrapper[4771]: I1011 10:37:36.899688 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:37:36.904444 master-1 kubenswrapper[4771]: I1011 10:37:36.904385 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:37:36.904754 master-1 kubenswrapper[4771]: I1011 10:37:36.904708 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:37:36.935843 master-1 kubenswrapper[4771]: I1011 10:37:36.935722 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"] Oct 11 10:37:36.937203 master-1 kubenswrapper[4771]: I1011 10:37:36.937168 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"] Oct 11 10:37:36.937375 master-1 kubenswrapper[4771]: I1011 10:37:36.937334 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:37:36.938749 master-1 kubenswrapper[4771]: I1011 10:37:36.938668 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-bg7lk"] Oct 11 10:37:36.941550 master-1 kubenswrapper[4771]: I1011 10:37:36.941344 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 11 10:37:36.941896 master-1 kubenswrapper[4771]: I1011 10:37:36.941850 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 11 10:37:36.942079 master-1 kubenswrapper[4771]: I1011 10:37:36.942049 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 11 10:37:36.942634 master-1 kubenswrapper[4771]: I1011 10:37:36.942614 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 11 10:37:36.943071 master-1 kubenswrapper[4771]: I1011 10:37:36.943051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2ocquro0n92lc" Oct 11 10:37:36.947702 master-1 kubenswrapper[4771]: I1011 10:37:36.947659 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:37:36.948482 master-1 kubenswrapper[4771]: I1011 10:37:36.948459 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:36.952950 master-1 kubenswrapper[4771]: I1011 10:37:36.952913 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 10:37:36.953912 master-1 kubenswrapper[4771]: I1011 10:37:36.953888 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 10:37:36.954225 master-1 kubenswrapper[4771]: I1011 10:37:36.954203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 10:37:36.954524 master-1 kubenswrapper[4771]: I1011 10:37:36.954500 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:37:36.954744 master-1 kubenswrapper[4771]: I1011 10:37:36.954723 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 10:37:36.954853 master-1 kubenswrapper[4771]: I1011 10:37:36.954824 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 10:37:36.955017 master-1 kubenswrapper[4771]: I1011 10:37:36.954995 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:37:36.955189 master-1 kubenswrapper[4771]: I1011 10:37:36.955105 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:37:36.956037 master-1 kubenswrapper[4771]: I1011 10:37:36.956007 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock\") 
pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:37:36.956135 master-1 kubenswrapper[4771]: I1011 10:37:36.956107 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:37:36.956186 master-1 kubenswrapper[4771]: I1011 10:37:36.956161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:37:36.957063 master-1 kubenswrapper[4771]: I1011 10:37:36.957026 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 10:37:36.958776 master-1 kubenswrapper[4771]: I1011 10:37:36.957550 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:37:36.958776 master-1 kubenswrapper[4771]: I1011 10:37:36.957778 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 10:37:36.958776 master-1 kubenswrapper[4771]: I1011 10:37:36.957833 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 10:37:36.961972 master-1 kubenswrapper[4771]: I1011 10:37:36.961897 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" 
podStartSLOduration=59.961877294 podStartE2EDuration="59.961877294s" podCreationTimestamp="2025-10-11 10:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:36.96000762 +0000 UTC m=+688.934234091" watchObservedRunningTime="2025-10-11 10:37:36.961877294 +0000 UTC m=+688.936103735" Oct 11 10:37:36.968024 master-1 kubenswrapper[4771]: I1011 10:37:36.967979 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:37:36.976029 master-1 kubenswrapper[4771]: I1011 10:37:36.975978 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:37:37.057613 master-1 kubenswrapper[4771]: I1011 10:37:37.057564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hdc\" (UniqueName: \"kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057637 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057705 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057729 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/dd18178e-3cb1-41de-8866-913f8f23d90d-audit-log\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:37:37.057831 master-1 kubenswrapper[4771]: I1011 10:37:37.057787 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:37:37.057831 
master-1 kubenswrapper[4771]: I1011 10:37:37.057821 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058044 master-1 kubenswrapper[4771]: I1011 10:37:37.057845 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058044 master-1 kubenswrapper[4771]: I1011 10:37:37.057851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.058044 master-1 kubenswrapper[4771]: I1011 10:37:37.057869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058044 master-1 kubenswrapper[4771]: I1011 10:37:37.057981 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058489 master-1 kubenswrapper[4771]: I1011 10:37:37.058464 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058565 master-1 kubenswrapper[4771]: I1011 10:37:37.058515 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058565 master-1 kubenswrapper[4771]: I1011 10:37:37.058547 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.058663 master-1 kubenswrapper[4771]: I1011 10:37:37.058582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.058663 master-1 kubenswrapper[4771]: I1011 10:37:37.058605 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058663 master-1 kubenswrapper[4771]: I1011 10:37:37.058631 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvtq\" (UniqueName: \"kubernetes.io/projected/dd18178e-3cb1-41de-8866-913f8f23d90d-kube-api-access-5bvtq\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.058663 master-1 kubenswrapper[4771]: I1011 10:37:37.058656 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058933 master-1 kubenswrapper[4771]: I1011 10:37:37.058686 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.058933 master-1 kubenswrapper[4771]: I1011 10:37:37.058725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058933 master-1 kubenswrapper[4771]: I1011 10:37:37.058751 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.058933 master-1 kubenswrapper[4771]: I1011 10:37:37.058778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.058933 master-1 kubenswrapper[4771]: I1011 10:37:37.058804 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.059176 master-1 kubenswrapper[4771]: I1011 10:37:37.059078 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.159858 master-1 kubenswrapper[4771]: I1011 10:37:37.159801 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.159883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.159946 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.159972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.159994 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.160023 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.160050 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvtq\" (UniqueName: \"kubernetes.io/projected/dd18178e-3cb1-41de-8866-913f8f23d90d-kube-api-access-5bvtq\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.160077 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160128 master-1 kubenswrapper[4771]: I1011 10:37:37.160118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160143 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160250 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56hdc\" (UniqueName: \"kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160310 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160386 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/dd18178e-3cb1-41de-8866-913f8f23d90d-audit-log\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160424 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160453 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.160585 master-1 kubenswrapper[4771]: I1011 10:37:37.160499 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.161501 master-1 kubenswrapper[4771]: I1011 10:37:37.161303 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.161501 master-1 kubenswrapper[4771]: I1011 10:37:37.161430 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.161622 master-1 kubenswrapper[4771]: I1011 10:37:37.161537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.161675 master-1 kubenswrapper[4771]: I1011 10:37:37.161652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/dd18178e-3cb1-41de-8866-913f8f23d90d-audit-log\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.162973 master-1 kubenswrapper[4771]: I1011 10:37:37.162909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/dd18178e-3cb1-41de-8866-913f8f23d90d-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.164130 master-1 kubenswrapper[4771]: I1011 10:37:37.163693 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.164130 master-1 kubenswrapper[4771]: I1011 10:37:37.164039 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.164447 master-1 kubenswrapper[4771]: I1011 10:37:37.164401 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.164447 master-1 kubenswrapper[4771]: I1011 10:37:37.164436 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.164998 master-1 kubenswrapper[4771]: I1011 10:37:37.164956 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.165144 master-1 kubenswrapper[4771]: I1011 10:37:37.165082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.165838 master-1 kubenswrapper[4771]: I1011 10:37:37.165795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.166479 master-1 kubenswrapper[4771]: I1011 10:37:37.166447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.167041 master-1 kubenswrapper[4771]: I1011 10:37:37.167007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.167566 master-1 kubenswrapper[4771]: I1011 10:37:37.167517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access\") pod \"installer-5-master-1\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.167799 master-1 kubenswrapper[4771]: I1011 10:37:37.167758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd18178e-3cb1-41de-8866-913f8f23d90d-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.168081 master-1 kubenswrapper[4771]: I1011 10:37:37.168016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.168257 master-1 kubenswrapper[4771]: I1011 10:37:37.168216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.184465 master-1 kubenswrapper[4771]: I1011 10:37:37.182469 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.222881 master-1 kubenswrapper[4771]: I1011 10:37:37.222649 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 11 10:37:37.227793 master-1 kubenswrapper[4771]: I1011 10:37:37.227752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56hdc\" (UniqueName: \"kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc\") pod \"oauth-openshift-68fb97bcc4-g7k57\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.233128 master-1 kubenswrapper[4771]: I1011 10:37:37.233065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvtq\" (UniqueName: \"kubernetes.io/projected/dd18178e-3cb1-41de-8866-913f8f23d90d-kube-api-access-5bvtq\") pod \"metrics-server-7d46fcc5c6-bhfmd\" (UID: \"dd18178e-3cb1-41de-8866-913f8f23d90d\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.257226 master-1 kubenswrapper[4771]: I1011 10:37:37.257139 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"
Oct 11 10:37:37.268293 master-1 kubenswrapper[4771]: I1011 10:37:37.268209 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"
Oct 11 10:37:37.336265 master-1 kubenswrapper[4771]: I1011 10:37:37.335415 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=61.335393282 podStartE2EDuration="1m1.335393282s" podCreationTimestamp="2025-10-11 10:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:37.330978235 +0000 UTC m=+689.305204686" watchObservedRunningTime="2025-10-11 10:37:37.335393282 +0000 UTC m=+689.309619743"
Oct 11 10:37:37.511268 master-1 kubenswrapper[4771]: I1011 10:37:37.510563 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:37:37.511497 master-1 kubenswrapper[4771]: I1011 10:37:37.511413 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:37:37.520556 master-1 kubenswrapper[4771]: I1011 10:37:37.520501 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:37:37.534761 master-2 kubenswrapper[4776]: I1011 10:37:37.534156 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Oct 11 10:37:37.561067 master-2 kubenswrapper[4776]: I1011 10:37:37.561001 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.563280 master-2 kubenswrapper[4776]: I1011 10:37:37.563249 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Oct 11 10:37:37.566392 master-2 kubenswrapper[4776]: I1011 10:37:37.566358 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.570346 master-2 kubenswrapper[4776]: I1011 10:37:37.570320 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 11 10:37:37.571879 master-2 kubenswrapper[4776]: I1011 10:37:37.571852 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.579149 master-2 kubenswrapper[4776]: I1011 10:37:37.579099 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.579696 master-2 kubenswrapper[4776]: I1011 10:37:37.579639 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 11 10:37:37.580409 master-2 kubenswrapper[4776]: I1011 10:37:37.580380 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.584313 master-2 kubenswrapper[4776]: I1011 10:37:37.584280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.586632 master-2 kubenswrapper[4776]: I1011 10:37:37.586592 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Oct 11 10:37:37.587885 master-2 kubenswrapper[4776]: I1011 10:37:37.587853 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Oct 11 10:37:37.616853 master-2 kubenswrapper[4776]: I1011 10:37:37.616782 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt"
Oct 11 10:37:37.617548 master-2 kubenswrapper[4776]: I1011 10:37:37.617515 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.618152 master-2 kubenswrapper[4776]: I1011 10:37:37.618104 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Oct 11 10:37:37.619308 master-2 kubenswrapper[4776]: I1011 10:37:37.619271 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Oct 11 10:37:37.630483 master-2 kubenswrapper[4776]: I1011 10:37:37.630431 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 11 10:37:37.634022 master-2 kubenswrapper[4776]: I1011 10:37:37.633983 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 11 10:37:37.639159 master-2 kubenswrapper[4776]: I1011 10:37:37.639131 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 11 10:37:37.647917 master-2 kubenswrapper[4776]: I1011 10:37:37.647868 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Oct 11 10:37:37.671805 master-2 kubenswrapper[4776]: I1011 10:37:37.671752 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Oct 11 10:37:37.672280 master-2 kubenswrapper[4776]: I1011 10:37:37.672236 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Oct 11 10:37:37.677527 master-2 kubenswrapper[4776]: I1011 10:37:37.677499 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Oct 11 10:37:37.683277 master-2 kubenswrapper[4776]: I1011 10:37:37.683260 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 11 10:37:37.693623 master-2 kubenswrapper[4776]: I1011 10:37:37.693550 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 11 10:37:37.696005 master-2 kubenswrapper[4776]: I1011 10:37:37.695973 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Oct 11 10:37:37.697251 master-2 kubenswrapper[4776]: I1011 10:37:37.697220 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt"
Oct 11 10:37:37.698422 master-2 kubenswrapper[4776]: I1011 10:37:37.698398 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.708513 master-2 kubenswrapper[4776]: I1011 10:37:37.708473 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:37:37.717350 master-2 kubenswrapper[4776]: I1011 10:37:37.717324 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.725757 master-2 kubenswrapper[4776]: I1011 10:37:37.725731 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Oct 11 10:37:37.725968 master-2 kubenswrapper[4776]: I1011 10:37:37.725937 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 11 10:37:37.730857 master-2 kubenswrapper[4776]: I1011 10:37:37.730811 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 11 10:37:37.742176 master-2 kubenswrapper[4776]: I1011 10:37:37.742142 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.746275 master-2 kubenswrapper[4776]: I1011 10:37:37.746238 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.757727 master-2 kubenswrapper[4776]: I1011 10:37:37.757650 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 11 10:37:37.760386 master-2 kubenswrapper[4776]: I1011 10:37:37.760360 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Oct 11 10:37:37.760873 master-2 kubenswrapper[4776]: I1011 10:37:37.760855 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Oct 11 10:37:37.761144 master-2 kubenswrapper[4776]: I1011 10:37:37.761095 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 11 10:37:37.761645 master-2 kubenswrapper[4776]: I1011 10:37:37.761459 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 11 10:37:37.763528 master-2 kubenswrapper[4776]: I1011 10:37:37.763488 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 11 10:37:37.771450 master-2 kubenswrapper[4776]: I1011 10:37:37.771404 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 11 10:37:37.774350 master-2 kubenswrapper[4776]: I1011 10:37:37.774301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Oct 11 10:37:37.779463 master-2 kubenswrapper[4776]: I1011 10:37:37.779383 4776 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 11 10:37:37.782260 master-2 kubenswrapper[4776]: I1011 10:37:37.782218 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Oct 11 10:37:37.783202 master-2 kubenswrapper[4776]: I1011 10:37:37.783073 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.783275 master-2 kubenswrapper[4776]: I1011 10:37:37.783240 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Oct 11 10:37:37.787906 master-2 kubenswrapper[4776]: I1011 10:37:37.787641 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Oct 11 10:37:37.790917 master-2 kubenswrapper[4776]: I1011 10:37:37.790595 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 11 10:37:37.791094 master-2 kubenswrapper[4776]: I1011 10:37:37.790990 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"]
Oct 11 10:37:37.792183 master-2 kubenswrapper[4776]: I1011 10:37:37.792134 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"
Oct 11 10:37:37.793551 master-2 kubenswrapper[4776]: I1011 10:37:37.793370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.799586 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.799924 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.800020 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Oct 11 10:37:37.800952 master-2 kubenswrapper[4776]: I1011 10:37:37.800783 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.801873 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.802069 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.803951 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.804125 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.804137 4776 reflector.go:368] Caches populated for *v1.Secret
from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:37:37.807903 master-2 kubenswrapper[4776]: I1011 10:37:37.807880 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:37:37.808075 master-2 kubenswrapper[4776]: I1011 10:37:37.808052 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:37:37.808138 master-2 kubenswrapper[4776]: I1011 10:37:37.807897 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:37:37.808180 master-2 kubenswrapper[4776]: I1011 10:37:37.808086 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 11 10:37:37.808251 master-2 kubenswrapper[4776]: I1011 10:37:37.808129 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:37:37.811294 master-2 kubenswrapper[4776]: I1011 10:37:37.811218 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:37:37.814127 master-2 kubenswrapper[4776]: I1011 10:37:37.814062 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:37:37.814833 master-2 kubenswrapper[4776]: I1011 10:37:37.814795 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Oct 11 10:37:37.824820 master-2 kubenswrapper[4776]: I1011 10:37:37.824779 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:37:37.825011 master-2 kubenswrapper[4776]: I1011 10:37:37.824967 4776 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 11 10:37:37.827173 master-2 kubenswrapper[4776]: I1011 10:37:37.827135 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:37:37.832406 master-2 kubenswrapper[4776]: I1011 10:37:37.832038 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:37:37.836277 master-2 kubenswrapper[4776]: I1011 10:37:37.836229 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 11 10:37:37.837881 master-2 kubenswrapper[4776]: I1011 10:37:37.837850 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Oct 11 10:37:37.842956 master-2 kubenswrapper[4776]: I1011 10:37:37.842929 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 11 10:37:37.850067 master-2 kubenswrapper[4776]: I1011 10:37:37.849450 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:37:37.854098 master-2 kubenswrapper[4776]: I1011 10:37:37.852951 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Oct 11 10:37:37.864516 master-1 kubenswrapper[4771]: I1011 10:37:37.864464 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp"] Oct 11 10:37:37.865113 master-1 kubenswrapper[4771]: I1011 10:37:37.865094 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:37.866364 master-2 kubenswrapper[4776]: I1011 10:37:37.866322 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 11 10:37:37.867925 master-2 kubenswrapper[4776]: I1011 10:37:37.867889 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Oct 11 10:37:37.868763 master-1 kubenswrapper[4771]: I1011 10:37:37.868260 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-65wvh" Oct 11 10:37:37.869166 master-2 kubenswrapper[4776]: I1011 10:37:37.869102 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:37.869278 master-1 kubenswrapper[4771]: I1011 10:37:37.869129 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Oct 11 10:37:37.870178 master-2 kubenswrapper[4776]: I1011 10:37:37.870149 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:37.872225 master-2 kubenswrapper[4776]: I1011 10:37:37.872169 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.872653 master-2 kubenswrapper[4776]: I1011 10:37:37.872625 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 11 10:37:37.872653 master-2 kubenswrapper[4776]: I1011 10:37:37.872631 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 10:37:37.872856 master-2 kubenswrapper[4776]: I1011 10:37:37.872704 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 11 10:37:37.872856 master-2 kubenswrapper[4776]: I1011 10:37:37.872821 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-65wvh" Oct 11 10:37:37.873511 master-2 kubenswrapper[4776]: I1011 10:37:37.873454 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Oct 11 10:37:37.883520 master-2 kubenswrapper[4776]: I1011 10:37:37.883475 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:37.884091 master-2 kubenswrapper[4776]: I1011 10:37:37.884059 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:37:37.886855 master-2 kubenswrapper[4776]: I1011 10:37:37.886821 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:37:37.887696 master-2 kubenswrapper[4776]: I1011 10:37:37.887315 4776 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:37:37.887989 master-2 kubenswrapper[4776]: I1011 10:37:37.887962 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 11 10:37:37.888494 master-1 kubenswrapper[4771]: I1011 10:37:37.888418 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6768b5f5f9-r74mm"] Oct 11 10:37:37.889771 master-1 kubenswrapper[4771]: I1011 10:37:37.889719 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:37.893675 master-1 kubenswrapper[4771]: I1011 10:37:37.893639 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 11 10:37:37.893948 master-1 kubenswrapper[4771]: I1011 10:37:37.893915 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 11 10:37:37.894222 master-1 kubenswrapper[4771]: I1011 10:37:37.894194 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 11 10:37:37.894432 master-1 kubenswrapper[4771]: I1011 10:37:37.894412 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-mqd87" Oct 11 10:37:37.894589 master-1 kubenswrapper[4771]: I1011 10:37:37.894567 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.895156 master-2 kubenswrapper[4776]: I1011 10:37:37.895106 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 11 10:37:37.895325 master-1 kubenswrapper[4771]: I1011 10:37:37.895292 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Oct 11 10:37:37.901004 master-2 kubenswrapper[4776]: I1011 10:37:37.900974 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.906129 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.906500 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907490 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907554 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907718 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907830 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908030 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908499 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908528 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908697 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.911216 master-2 kubenswrapper[4776]: I1011 10:37:37.911160 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:37:37.914513 master-2 kubenswrapper[4776]: I1011 10:37:37.914476 4776 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:37.915517 master-2 kubenswrapper[4776]: I1011 10:37:37.915477 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:37:37.922791 master-2 kubenswrapper[4776]: I1011 10:37:37.922753 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 11 10:37:37.924028 master-2 kubenswrapper[4776]: I1011 
10:37:37.923972 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 10:37:37.932663 master-2 kubenswrapper[4776]: I1011 10:37:37.932618 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Oct 11 10:37:37.933909 master-2 kubenswrapper[4776]: I1011 10:37:37.933882 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.943459 master-2 kubenswrapper[4776]: I1011 10:37:37.943425 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 11 10:37:37.943644 master-2 kubenswrapper[4776]: I1011 10:37:37.943611 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Oct 11 10:37:37.943874 master-2 kubenswrapper[4776]: I1011 10:37:37.943778 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Oct 11 10:37:37.943874 master-2 kubenswrapper[4776]: I1011 10:37:37.943785 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Oct 11 10:37:37.944005 master-2 kubenswrapper[4776]: I1011 10:37:37.943920 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 10:37:37.944049 master-2 kubenswrapper[4776]: I1011 10:37:37.944020 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:37:37.950692 master-2 kubenswrapper[4776]: I1011 10:37:37.950619 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 11 10:37:37.954647 master-2 kubenswrapper[4776]: I1011 10:37:37.954612 4776 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 11 10:37:37.955415 master-2 kubenswrapper[4776]: I1011 10:37:37.955384 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 11 10:37:37.959997 master-2 kubenswrapper[4776]: I1011 10:37:37.959956 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 11 10:37:37.961943 master-2 kubenswrapper[4776]: I1011 10:37:37.961915 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 11 10:37:37.969506 master-1 kubenswrapper[4771]: I1011 10:37:37.969435 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjkd\" (UniqueName: \"kubernetes.io/projected/05330706-8231-4c38-be56-416f243992c3-kube-api-access-2tjkd\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:37.969506 master-1 kubenswrapper[4771]: I1011 10:37:37.969506 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d472d171-a5c8-4c71-9d31-7ec0aa3a6db9-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-tljlp\" (UID: \"d472d171-a5c8-4c71-9d31-7ec0aa3a6db9\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:37.969766 master-1 kubenswrapper[4771]: I1011 10:37:37.969535 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05330706-8231-4c38-be56-416f243992c3-serving-cert\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " 
pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:37.969766 master-1 kubenswrapper[4771]: I1011 10:37:37.969592 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:37.969766 master-1 kubenswrapper[4771]: I1011 10:37:37.969649 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-config\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:37.971875 master-2 kubenswrapper[4776]: I1011 10:37:37.971857 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 11 10:37:37.974704 master-2 kubenswrapper[4776]: I1011 10:37:37.974688 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 11 10:37:37.974838 master-2 kubenswrapper[4776]: I1011 10:37:37.974728 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:37:37.978929 master-2 kubenswrapper[4776]: I1011 10:37:37.978912 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:37:37.980694 master-2 kubenswrapper[4776]: I1011 10:37:37.980632 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Oct 11 10:37:37.989041 master-2 kubenswrapper[4776]: I1011 10:37:37.988995 4776 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 11 10:37:37.989763 master-2 kubenswrapper[4776]: I1011 10:37:37.989542 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:37:37.991153 master-2 kubenswrapper[4776]: I1011 10:37:37.991135 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:37:37.993008 master-2 kubenswrapper[4776]: I1011 10:37:37.992995 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.002524 master-2 kubenswrapper[4776]: I1011 10:37:38.002451 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 11 10:37:38.010527 master-2 kubenswrapper[4776]: I1011 10:37:38.010469 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010597 master-2 kubenswrapper[4776]: I1011 10:37:38.010540 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010597 master-2 kubenswrapper[4776]: I1011 10:37:38.010575 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010607 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010637 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010706 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010733 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010782 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010801 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 
10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010842 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010871 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010984 master-2 kubenswrapper[4776]: I1011 10:37:38.010895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.011047 master-2 kubenswrapper[4776]: I1011 10:37:38.011009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013104 master-2 kubenswrapper[4776]: I1011 10:37:38.013051 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod 
\"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013104 master-2 kubenswrapper[4776]: I1011 10:37:38.013097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013794 master-2 kubenswrapper[4776]: I1011 10:37:38.013760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 kubenswrapper[4776]: I1011 10:37:38.014371 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 kubenswrapper[4776]: I1011 10:37:38.015156 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 
kubenswrapper[4776]: I1011 10:37:38.015201 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015698 master-2 kubenswrapper[4776]: I1011 10:37:38.015649 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015855 master-2 kubenswrapper[4776]: I1011 10:37:38.015818 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.016525 master-2 kubenswrapper[4776]: I1011 10:37:38.016488 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.020579 master-2 kubenswrapper[4776]: I1011 10:37:38.020538 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.020721 master-2 kubenswrapper[4776]: I1011 10:37:38.020625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.024614 master-2 kubenswrapper[4776]: I1011 10:37:38.024564 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 11 10:37:38.026494 master-2 kubenswrapper[4776]: I1011 10:37:38.026447 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.044926 master-2 kubenswrapper[4776]: I1011 10:37:38.042426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 11 10:37:38.053936 master-1 kubenswrapper[4771]: I1011 10:37:38.053882 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:37:38.063074 master-1 kubenswrapper[4771]: I1011 10:37:38.063031 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6768b5f5f9-r74mm"] Oct 11 10:37:38.066661 master-1 kubenswrapper[4771]: 
I1011 10:37:38.066591 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"] Oct 11 10:37:38.070797 master-1 kubenswrapper[4771]: I1011 10:37:38.070768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjkd\" (UniqueName: \"kubernetes.io/projected/05330706-8231-4c38-be56-416f243992c3-kube-api-access-2tjkd\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.070917 master-1 kubenswrapper[4771]: I1011 10:37:38.070904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d472d171-a5c8-4c71-9d31-7ec0aa3a6db9-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-tljlp\" (UID: \"d472d171-a5c8-4c71-9d31-7ec0aa3a6db9\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:38.070997 master-1 kubenswrapper[4771]: I1011 10:37:38.070984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05330706-8231-4c38-be56-416f243992c3-serving-cert\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.071097 master-1 kubenswrapper[4771]: I1011 10:37:38.071085 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.071179 master-1 kubenswrapper[4771]: I1011 10:37:38.071167 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-config\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.072023 master-1 kubenswrapper[4771]: I1011 10:37:38.072006 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-config\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.074593 master-1 kubenswrapper[4771]: I1011 10:37:38.074553 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp"] Oct 11 10:37:38.074593 master-1 kubenswrapper[4771]: E1011 10:37:38.074571 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca podName:05330706-8231-4c38-be56-416f243992c3 nodeName:}" failed. No retries permitted until 2025-10-11 10:37:38.574556052 +0000 UTC m=+690.548782493 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca") pod "console-operator-6768b5f5f9-r74mm" (UID: "05330706-8231-4c38-be56-416f243992c3") : configmap references non-existent config key: ca-bundle.crt Oct 11 10:37:38.077063 master-1 kubenswrapper[4771]: I1011 10:37:38.077043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d472d171-a5c8-4c71-9d31-7ec0aa3a6db9-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-tljlp\" (UID: \"d472d171-a5c8-4c71-9d31-7ec0aa3a6db9\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:38.079898 master-1 kubenswrapper[4771]: I1011 10:37:38.079880 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05330706-8231-4c38-be56-416f243992c3-serving-cert\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.081459 master-1 kubenswrapper[4771]: I1011 10:37:38.081410 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"] Oct 11 10:37:38.082557 master-2 kubenswrapper[4776]: I1011 10:37:38.082488 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 11 10:37:38.089287 master-2 kubenswrapper[4776]: I1011 10:37:38.089249 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.094145 master-1 
kubenswrapper[4771]: I1011 10:37:38.094107 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjkd\" (UniqueName: \"kubernetes.io/projected/05330706-8231-4c38-be56-416f243992c3-kube-api-access-2tjkd\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.101691 master-2 kubenswrapper[4776]: I1011 10:37:38.101646 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:37:38.120870 master-2 kubenswrapper[4776]: I1011 10:37:38.120792 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.121646 master-2 kubenswrapper[4776]: I1011 10:37:38.121612 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 11 10:37:38.142373 master-2 kubenswrapper[4776]: I1011 10:37:38.142225 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:37:38.164958 master-2 kubenswrapper[4776]: I1011 10:37:38.164895 4776 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:38.178909 master-1 kubenswrapper[4771]: I1011 10:37:38.178825 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:38.182464 master-2 kubenswrapper[4776]: I1011 10:37:38.182421 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 11 10:37:38.199121 master-2 kubenswrapper[4776]: I1011 10:37:38.198959 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.200957 master-2 kubenswrapper[4776]: I1011 10:37:38.200931 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 11 10:37:38.220204 master-1 kubenswrapper[4771]: I1011 10:37:38.219958 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:37:38.236704 master-2 kubenswrapper[4776]: I1011 10:37:38.228915 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.250699 master-2 kubenswrapper[4776]: I1011 10:37:38.244034 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 11 10:37:38.270128 master-2 kubenswrapper[4776]: I1011 10:37:38.270078 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.290696 master-2 kubenswrapper[4776]: I1011 10:37:38.283375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 11 10:37:38.324775 master-2 kubenswrapper[4776]: I1011 10:37:38.318907 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 11 10:37:38.340712 master-2 kubenswrapper[4776]: I1011 10:37:38.333284 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Oct 11 10:37:38.353419 master-2 kubenswrapper[4776]: I1011 10:37:38.346425 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 11 10:37:38.368570 master-2 kubenswrapper[4776]: I1011 10:37:38.367001 4776 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 11 10:37:38.383876 master-2 kubenswrapper[4776]: I1011 10:37:38.383309 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 10:37:38.410377 master-2 kubenswrapper[4776]: I1011 10:37:38.410326 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Oct 11 10:37:38.424609 master-2 kubenswrapper[4776]: I1011 10:37:38.423103 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Oct 11 10:37:38.441949 master-2 kubenswrapper[4776]: I1011 10:37:38.441866 4776 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:38.446856 master-1 kubenswrapper[4771]: I1011 10:37:38.446732 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf74cdb-6bdb-465a-8e3e-194e8868570f" path="/var/lib/kubelet/pods/daf74cdb-6bdb-465a-8e3e-194e8868570f/volumes" Oct 11 10:37:38.462143 master-2 kubenswrapper[4776]: I1011 10:37:38.462108 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:37:38.496690 master-2 kubenswrapper[4776]: I1011 10:37:38.488950 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:37:38.515690 master-2 kubenswrapper[4776]: I1011 10:37:38.504053 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 11 10:37:38.532985 master-2 kubenswrapper[4776]: I1011 10:37:38.530207 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:37:38.545248 master-2 kubenswrapper[4776]: I1011 
10:37:38.545201 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 11 10:37:38.562973 master-2 kubenswrapper[4776]: I1011 10:37:38.562939 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Oct 11 10:37:38.576724 master-1 kubenswrapper[4771]: I1011 10:37:38.576621 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd"] Oct 11 10:37:38.578426 master-1 kubenswrapper[4771]: I1011 10:37:38.578328 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"] Oct 11 10:37:38.582344 master-2 kubenswrapper[4776]: I1011 10:37:38.582230 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Oct 11 10:37:38.583486 master-1 kubenswrapper[4771]: W1011 10:37:38.582095 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd61f55f6_6e03_40ca_aa96_cb6ba21c39b4.slice/crio-d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c WatchSource:0}: Error finding container d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c: Status 404 returned error can't find the container with id d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c Oct 11 10:37:38.586149 master-1 kubenswrapper[4771]: W1011 10:37:38.585895 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd18178e_3cb1_41de_8866_913f8f23d90d.slice/crio-e3c94e0f6f964ee9f94da14b1bdf57bd8704f7e66d4160744c45e29823a9eaca WatchSource:0}: Error finding container e3c94e0f6f964ee9f94da14b1bdf57bd8704f7e66d4160744c45e29823a9eaca: Status 404 returned error can't find the container with id e3c94e0f6f964ee9f94da14b1bdf57bd8704f7e66d4160744c45e29823a9eaca 
Oct 11 10:37:38.589026 master-1 kubenswrapper[4771]: I1011 10:37:38.588984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.592042 master-1 kubenswrapper[4771]: I1011 10:37:38.591943 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05330706-8231-4c38-be56-416f243992c3-trusted-ca\") pod \"console-operator-6768b5f5f9-r74mm\" (UID: \"05330706-8231-4c38-be56-416f243992c3\") " pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.602957 master-2 kubenswrapper[4776]: I1011 10:37:38.602374 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 11 10:37:38.624075 master-2 kubenswrapper[4776]: I1011 10:37:38.623845 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 11 10:37:38.624525 master-2 kubenswrapper[4776]: I1011 10:37:38.624482 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:38.633957 master-2 kubenswrapper[4776]: I1011 10:37:38.633932 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:37:38.644999 master-2 kubenswrapper[4776]: I1011 10:37:38.644953 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 11 10:37:38.661854 master-2 kubenswrapper[4776]: I1011 10:37:38.661819 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 11 10:37:38.682854 
master-2 kubenswrapper[4776]: I1011 10:37:38.682794 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ap7ej74ueigk4" Oct 11 10:37:38.689588 master-1 kubenswrapper[4771]: I1011 10:37:38.689249 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:37:38.699787 master-1 kubenswrapper[4771]: I1011 10:37:38.699758 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:37:38.703934 master-2 kubenswrapper[4776]: I1011 10:37:38.703844 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Oct 11 10:37:38.704571 master-2 kubenswrapper[4776]: I1011 10:37:38.704535 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:37:38.712293 master-2 kubenswrapper[4776]: W1011 10:37:38.712244 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd9ad6e0_e85a_41fb_a5cf_a8abeb46f369.slice/crio-fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e WatchSource:0}: Error finding container fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e: Status 404 returned error can't find the container with id fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e Oct 11 10:37:38.721602 master-2 kubenswrapper[4776]: I1011 10:37:38.721551 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:37:38.741714 master-2 kubenswrapper[4776]: I1011 10:37:38.741655 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 10:37:38.749561 master-1 kubenswrapper[4771]: I1011 10:37:38.749499 4771 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp"] Oct 11 10:37:38.759853 master-1 kubenswrapper[4771]: W1011 10:37:38.759568 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd472d171_a5c8_4c71_9d31_7ec0aa3a6db9.slice/crio-afe1b08971e2d05f2fa0425c6f9d72ddbe0592bbee9296deb75b0c968bee6378 WatchSource:0}: Error finding container afe1b08971e2d05f2fa0425c6f9d72ddbe0592bbee9296deb75b0c968bee6378: Status 404 returned error can't find the container with id afe1b08971e2d05f2fa0425c6f9d72ddbe0592bbee9296deb75b0c968bee6378 Oct 11 10:37:38.761925 master-2 kubenswrapper[4776]: I1011 10:37:38.761870 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Oct 11 10:37:38.789268 master-2 kubenswrapper[4776]: I1011 10:37:38.789183 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 11 10:37:38.802742 master-2 kubenswrapper[4776]: I1011 10:37:38.802647 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 11 10:37:38.810170 master-1 kubenswrapper[4771]: I1011 10:37:38.809089 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:38.822010 master-2 kubenswrapper[4776]: I1011 10:37:38.821969 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 11 10:37:38.828498 master-2 kubenswrapper[4776]: E1011 10:37:38.828462 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:37:38.828766 master-2 kubenswrapper[4776]: E1011 10:37:38.828752 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:39:40.828731834 +0000 UTC m=+815.613158543 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:37:38.844669 master-2 kubenswrapper[4776]: I1011 10:37:38.844511 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 11 10:37:38.862667 master-2 kubenswrapper[4776]: I1011 10:37:38.862608 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 10:37:38.881942 master-2 kubenswrapper[4776]: I1011 10:37:38.881896 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 11 10:37:38.902122 master-2 kubenswrapper[4776]: I1011 10:37:38.902087 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Oct 11 
10:37:38.921995 master-2 kubenswrapper[4776]: I1011 10:37:38.921955 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Oct 11 10:37:38.937019 master-2 kubenswrapper[4776]: I1011 10:37:38.936987 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" event={"ID":"d38d167f-f15f-4f7e-8717-46dc61374f4a","Type":"ContainerStarted","Data":"699d77bb6cc6ab9230123cac0b247370d49a03113205a2f77e3698ef1dd861e4"} Oct 11 10:37:38.938471 master-2 kubenswrapper[4776]: I1011 10:37:38.938450 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerStarted","Data":"fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e"} Oct 11 10:37:38.941397 master-2 kubenswrapper[4776]: I1011 10:37:38.941375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 11 10:37:38.961335 master-2 kubenswrapper[4776]: I1011 10:37:38.961269 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 10:37:38.981504 master-2 kubenswrapper[4776]: I1011 10:37:38.981474 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 10:37:39.001947 master-2 kubenswrapper[4776]: I1011 10:37:39.001918 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 11 10:37:39.022557 master-2 kubenswrapper[4776]: I1011 10:37:39.022519 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:37:39.042826 master-2 kubenswrapper[4776]: I1011 10:37:39.042790 4776 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 11 10:37:39.062869 master-2 kubenswrapper[4776]: I1011 10:37:39.062812 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:37:39.082078 master-2 kubenswrapper[4776]: I1011 10:37:39.082024 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 11 10:37:39.101919 master-2 kubenswrapper[4776]: I1011 10:37:39.101632 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 11 10:37:39.128284 master-2 kubenswrapper[4776]: I1011 10:37:39.128239 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 11 10:37:39.142220 master-2 kubenswrapper[4776]: I1011 10:37:39.142145 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 11 10:37:39.161913 master-2 kubenswrapper[4776]: I1011 10:37:39.161606 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 11 10:37:39.183760 master-2 kubenswrapper[4776]: I1011 10:37:39.181808 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:37:39.201981 master-2 kubenswrapper[4776]: I1011 10:37:39.201921 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Oct 11 10:37:39.222923 master-2 kubenswrapper[4776]: I1011 10:37:39.222865 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:37:39.224746 master-1 kubenswrapper[4771]: I1011 10:37:39.224652 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6768b5f5f9-r74mm"] Oct 11 
10:37:39.227389 master-1 kubenswrapper[4771]: I1011 10:37:39.226434 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4","Type":"ContainerStarted","Data":"5a01f085e3b1c7fb42b9e1bcc547086d47f6a110bdf85f6e451a5f626e8ea9d3"} Oct 11 10:37:39.227389 master-1 kubenswrapper[4771]: I1011 10:37:39.226523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4","Type":"ContainerStarted","Data":"d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c"} Oct 11 10:37:39.230148 master-1 kubenswrapper[4771]: I1011 10:37:39.230098 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" event={"ID":"dd18178e-3cb1-41de-8866-913f8f23d90d","Type":"ContainerStarted","Data":"15010edee5601765abeb1b9d8fe25dbc88b3ecac08ba70f44f3a0b0d863ba20f"} Oct 11 10:37:39.230224 master-1 kubenswrapper[4771]: I1011 10:37:39.230159 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" event={"ID":"dd18178e-3cb1-41de-8866-913f8f23d90d","Type":"ContainerStarted","Data":"e3c94e0f6f964ee9f94da14b1bdf57bd8704f7e66d4160744c45e29823a9eaca"} Oct 11 10:37:39.230563 master-1 kubenswrapper[4771]: I1011 10:37:39.230525 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:37:39.232262 master-1 kubenswrapper[4771]: W1011 10:37:39.232220 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05330706_8231_4c38_be56_416f243992c3.slice/crio-61f4f1f6f8e0d63ba26153940ab45c54232f6c30fa1b90435ae48a365377cb35 WatchSource:0}: Error finding container 61f4f1f6f8e0d63ba26153940ab45c54232f6c30fa1b90435ae48a365377cb35: Status 
404 returned error can't find the container with id 61f4f1f6f8e0d63ba26153940ab45c54232f6c30fa1b90435ae48a365377cb35 Oct 11 10:37:39.233122 master-1 kubenswrapper[4771]: I1011 10:37:39.233080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" event={"ID":"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8","Type":"ContainerStarted","Data":"8893531f4f4ebb704c0ef08b506e4add808eee05f66a87ee0d2eb8eddb5d49b0"} Oct 11 10:37:39.235073 master-1 kubenswrapper[4771]: I1011 10:37:39.235020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" event={"ID":"d472d171-a5c8-4c71-9d31-7ec0aa3a6db9","Type":"ContainerStarted","Data":"afe1b08971e2d05f2fa0425c6f9d72ddbe0592bbee9296deb75b0c968bee6378"} Oct 11 10:37:39.240030 master-2 kubenswrapper[4776]: I1011 10:37:39.239957 4776 request.go:700] Waited for 1.010841169s due to client-side throttling, not priority and fairness, request: GET:https://api-int.ocp.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=15510 Oct 11 10:37:39.242686 master-2 kubenswrapper[4776]: I1011 10:37:39.242623 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 11 10:37:39.253114 master-1 kubenswrapper[4771]: I1011 10:37:39.253044 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-1" podStartSLOduration=57.25302427 podStartE2EDuration="57.25302427s" podCreationTimestamp="2025-10-11 10:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:39.249432178 +0000 UTC m=+691.223658659" watchObservedRunningTime="2025-10-11 10:37:39.25302427 +0000 UTC m=+691.227250711" Oct 11 10:37:39.262040 master-2 kubenswrapper[4776]: I1011 10:37:39.261993 
4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 11 10:37:39.280689 master-1 kubenswrapper[4771]: I1011 10:37:39.280082 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" podStartSLOduration=162.280058343 podStartE2EDuration="2m42.280058343s" podCreationTimestamp="2025-10-11 10:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:39.276624175 +0000 UTC m=+691.250850656" watchObservedRunningTime="2025-10-11 10:37:39.280058343 +0000 UTC m=+691.254284804" Oct 11 10:37:39.282985 master-2 kubenswrapper[4776]: I1011 10:37:39.282956 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 11 10:37:39.302036 master-2 kubenswrapper[4776]: I1011 10:37:39.301955 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 11 10:37:39.330035 master-2 kubenswrapper[4776]: I1011 10:37:39.329998 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 11 10:37:39.342295 master-2 kubenswrapper[4776]: I1011 10:37:39.342250 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Oct 11 10:37:39.362875 master-2 kubenswrapper[4776]: I1011 10:37:39.361938 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:39.381893 master-2 kubenswrapper[4776]: I1011 10:37:39.381842 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 11 10:37:39.401043 master-2 kubenswrapper[4776]: I1011 
10:37:39.401007 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 10:37:39.421772 master-2 kubenswrapper[4776]: I1011 10:37:39.421719 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Oct 11 10:37:39.442208 master-2 kubenswrapper[4776]: I1011 10:37:39.441979 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Oct 11 10:37:39.461664 master-2 kubenswrapper[4776]: I1011 10:37:39.461629 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:39.487151 master-2 kubenswrapper[4776]: I1011 10:37:39.487093 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:37:39.501838 master-2 kubenswrapper[4776]: I1011 10:37:39.501771 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5v5km" Oct 11 10:37:39.521400 master-2 kubenswrapper[4776]: I1011 10:37:39.521359 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Oct 11 10:37:39.542997 master-2 kubenswrapper[4776]: I1011 10:37:39.542811 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 11 10:37:39.562238 master-2 kubenswrapper[4776]: I1011 10:37:39.562108 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 11 10:37:39.582455 master-2 kubenswrapper[4776]: I1011 10:37:39.582299 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 11 10:37:39.602389 master-2 
kubenswrapper[4776]: I1011 10:37:39.602329 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:37:39.623083 master-2 kubenswrapper[4776]: I1011 10:37:39.622862 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 11 10:37:39.642600 master-2 kubenswrapper[4776]: I1011 10:37:39.642533 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 11 10:37:39.673201 master-2 kubenswrapper[4776]: I1011 10:37:39.661761 4776 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:39.687199 master-2 kubenswrapper[4776]: I1011 10:37:39.686947 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 11 10:37:39.702340 master-2 kubenswrapper[4776]: I1011 10:37:39.702136 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 11 10:37:39.722322 master-2 kubenswrapper[4776]: I1011 10:37:39.722206 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:37:39.742743 master-2 kubenswrapper[4776]: I1011 10:37:39.742696 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:37:39.761689 master-2 kubenswrapper[4776]: I1011 10:37:39.761631 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:37:39.782171 master-2 kubenswrapper[4776]: I1011 10:37:39.782132 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:37:39.803214 master-2 kubenswrapper[4776]: I1011 10:37:39.803138 4776 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Oct 11 10:37:39.822826 master-2 kubenswrapper[4776]: I1011 10:37:39.822779 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 11 10:37:39.841774 master-2 kubenswrapper[4776]: I1011 10:37:39.841703 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:37:39.861869 master-2 kubenswrapper[4776]: I1011 10:37:39.861797 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Oct 11 10:37:39.882926 master-2 kubenswrapper[4776]: I1011 10:37:39.882813 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 11 10:37:39.902884 master-2 kubenswrapper[4776]: I1011 10:37:39.902844 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:39.921919 master-2 kubenswrapper[4776]: I1011 10:37:39.921648 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 11 10:37:39.945991 master-2 kubenswrapper[4776]: I1011 10:37:39.942016 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 10:37:39.962529 master-2 kubenswrapper[4776]: I1011 10:37:39.962364 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 11 10:37:39.982723 master-2 kubenswrapper[4776]: I1011 10:37:39.981707 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Oct 11 10:37:40.002567 master-2 kubenswrapper[4776]: I1011 10:37:40.002488 4776 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 11 10:37:40.028528 master-2 kubenswrapper[4776]: I1011 10:37:40.028482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 11 10:37:40.041975 master-2 kubenswrapper[4776]: I1011 10:37:40.041928 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:40.061615 master-2 kubenswrapper[4776]: I1011 10:37:40.061535 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 10:37:40.082357 master-2 kubenswrapper[4776]: I1011 10:37:40.082268 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:40.101357 master-2 kubenswrapper[4776]: I1011 10:37:40.101321 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 11 10:37:40.122374 master-2 kubenswrapper[4776]: I1011 10:37:40.122328 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 11 10:37:40.146278 master-2 kubenswrapper[4776]: I1011 10:37:40.146245 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:37:40.161594 master-2 kubenswrapper[4776]: I1011 10:37:40.161546 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:37:40.181451 master-2 kubenswrapper[4776]: I1011 10:37:40.181341 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 11 10:37:40.202324 master-2 kubenswrapper[4776]: I1011 10:37:40.202267 4776 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 11 10:37:40.222599 master-2 kubenswrapper[4776]: I1011 10:37:40.222540 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:40.241798 master-2 kubenswrapper[4776]: I1011 10:37:40.241733 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 11 10:37:40.246904 master-1 kubenswrapper[4771]: I1011 10:37:40.246845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" event={"ID":"05330706-8231-4c38-be56-416f243992c3","Type":"ContainerStarted","Data":"61f4f1f6f8e0d63ba26153940ab45c54232f6c30fa1b90435ae48a365377cb35"} Oct 11 10:37:40.443863 master-2 kubenswrapper[4776]: I1011 10:37:40.443767 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:37:40.444251 master-2 kubenswrapper[4776]: I1011 10:37:40.444152 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" containerID="cri-o://15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" gracePeriod=30 Oct 11 10:37:40.486796 master-2 kubenswrapper[4776]: I1011 10:37:40.486744 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:40.487027 master-2 kubenswrapper[4776]: I1011 10:37:40.486954 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" 
containerID="cri-o://9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" gracePeriod=30 Oct 11 10:37:40.963643 master-2 kubenswrapper[4776]: I1011 10:37:40.963585 4776 generic.go:334] "Generic (PLEG): container finished" podID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerID="15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" exitCode=0 Oct 11 10:37:40.964265 master-2 kubenswrapper[4776]: I1011 10:37:40.963691 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerDied","Data":"15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac"} Oct 11 10:37:40.968165 master-2 kubenswrapper[4776]: I1011 10:37:40.967906 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" event={"ID":"d38d167f-f15f-4f7e-8717-46dc61374f4a","Type":"ContainerStarted","Data":"db161c13a5aef0f296c492266dc2b16da13ac806243a968407a23c107700ab11"} Oct 11 10:37:40.968727 master-2 kubenswrapper[4776]: I1011 10:37:40.968430 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:40.972044 master-2 kubenswrapper[4776]: I1011 10:37:40.971983 4776 generic.go:334] "Generic (PLEG): container finished" podID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerID="9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" exitCode=0 Oct 11 10:37:40.972113 master-2 kubenswrapper[4776]: I1011 10:37:40.972041 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerDied","Data":"9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2"} Oct 11 10:37:40.977406 master-2 kubenswrapper[4776]: I1011 10:37:40.977366 4776 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:40.994940 master-2 kubenswrapper[4776]: I1011 10:37:40.989433 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" podStartSLOduration=2.573646337 podStartE2EDuration="3.989391978s" podCreationTimestamp="2025-10-11 10:37:37 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.633863365 +0000 UTC m=+693.418290074" lastFinishedPulling="2025-10-11 10:37:40.049609006 +0000 UTC m=+694.834035715" observedRunningTime="2025-10-11 10:37:40.988517085 +0000 UTC m=+695.772943814" watchObservedRunningTime="2025-10-11 10:37:40.989391978 +0000 UTC m=+695.773818707" Oct 11 10:37:41.207156 master-2 kubenswrapper[4776]: I1011 10:37:41.207104 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:37:41.214045 master-2 kubenswrapper[4776]: I1011 10:37:41.213997 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:37:41.273304 master-2 kubenswrapper[4776]: I1011 10:37:41.273241 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273341 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273368 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273411 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273503 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273542 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273576 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273603 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.276564 master-2 kubenswrapper[4776]: I1011 10:37:41.276528 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs" (OuterVolumeSpecName: "kube-api-access-8mmxs") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "kube-api-access-8mmxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:41.276938 master-2 kubenswrapper[4776]: I1011 10:37:41.276908 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca" (OuterVolumeSpecName: "client-ca") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.277273 master-2 kubenswrapper[4776]: I1011 10:37:41.277248 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.278256 master-2 kubenswrapper[4776]: I1011 10:37:41.278233 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.279194 master-2 kubenswrapper[4776]: I1011 10:37:41.279165 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:41.279664 master-2 kubenswrapper[4776]: I1011 10:37:41.279595 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config" (OuterVolumeSpecName: "config") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.280891 master-2 kubenswrapper[4776]: I1011 10:37:41.280870 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx" (OuterVolumeSpecName: "kube-api-access-wkssx") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "kube-api-access-wkssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:41.286177 master-2 kubenswrapper[4776]: I1011 10:37:41.286099 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:41.287278 master-2 kubenswrapper[4776]: I1011 10:37:41.287230 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config" (OuterVolumeSpecName: "config") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375115 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375152 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375162 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375170 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375179 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375188 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375196 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") on node 
\"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375204 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375212 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.664785 master-2 kubenswrapper[4776]: I1011 10:37:41.664718 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: E1011 10:37:41.664962 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: I1011 10:37:41.664974 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: E1011 10:37:41.664991 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: I1011 10:37:41.665018 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.665154 master-2 kubenswrapper[4776]: I1011 10:37:41.665113 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.665154 master-2 kubenswrapper[4776]: I1011 10:37:41.665132 4776 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.665647 master-2 kubenswrapper[4776]: I1011 10:37:41.665601 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.667455 master-2 kubenswrapper[4776]: I1011 10:37:41.667407 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:41.667953 master-2 kubenswrapper[4776]: I1011 10:37:41.667907 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z46lz" Oct 11 10:37:41.668170 master-2 kubenswrapper[4776]: I1011 10:37:41.668137 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.679867 master-2 kubenswrapper[4776]: I1011 10:37:41.679832 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679875 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679906 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679938 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680009 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680147 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4m7\" (UniqueName: 
\"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680334 master-2 kubenswrapper[4776]: I1011 10:37:41.680265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680334 master-2 kubenswrapper[4776]: I1011 10:37:41.680312 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.682294 master-2 kubenswrapper[4776]: I1011 10:37:41.682261 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:41.687240 master-2 kubenswrapper[4776]: I1011 10:37:41.687191 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:41.781614 master-2 kubenswrapper[4776]: I1011 10:37:41.781546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " 
pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781614 master-2 kubenswrapper[4776]: I1011 10:37:41.781614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4m7\" (UniqueName: \"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781705 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781794 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781816 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.782777 master-2 kubenswrapper[4776]: I1011 10:37:41.782746 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.783032 master-2 
kubenswrapper[4776]: I1011 10:37:41.782979 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.783103 master-2 kubenswrapper[4776]: I1011 10:37:41.782984 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.783354 master-2 kubenswrapper[4776]: I1011 10:37:41.783317 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.785742 master-2 kubenswrapper[4776]: I1011 10:37:41.785710 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.785854 master-2 kubenswrapper[4776]: I1011 10:37:41.785821 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: 
\"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.786536 master-2 kubenswrapper[4776]: I1011 10:37:41.786505 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.800387 master-2 kubenswrapper[4776]: I1011 10:37:41.800347 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4m7\" (UniqueName: \"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.802773 master-2 kubenswrapper[4776]: I1011 10:37:41.802666 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.981088 master-2 kubenswrapper[4776]: I1011 10:37:41.980957 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerStarted","Data":"4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a"} Oct 11 10:37:41.981695 master-2 kubenswrapper[4776]: I1011 10:37:41.981311 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:41.981862 master-2 
kubenswrapper[4776]: I1011 10:37:41.981811 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.985384 master-2 kubenswrapper[4776]: I1011 10:37:41.985334 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerDied","Data":"bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1"} Oct 11 10:37:41.985384 master-2 kubenswrapper[4776]: I1011 10:37:41.985366 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:37:41.985522 master-2 kubenswrapper[4776]: I1011 10:37:41.985404 4776 scope.go:117] "RemoveContainer" containerID="15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" Oct 11 10:37:41.988723 master-2 kubenswrapper[4776]: I1011 10:37:41.988683 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.991202 master-2 kubenswrapper[4776]: I1011 10:37:41.991113 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerDied","Data":"417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b"} Oct 11 10:37:41.991202 master-2 kubenswrapper[4776]: I1011 10:37:41.991163 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:37:42.002772 master-2 kubenswrapper[4776]: I1011 10:37:42.002696 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:42.015893 master-2 kubenswrapper[4776]: I1011 10:37:42.015830 4776 scope.go:117] "RemoveContainer" containerID="9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" Oct 11 10:37:42.042588 master-2 kubenswrapper[4776]: I1011 10:37:42.042483 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" podStartSLOduration=7.871011364 podStartE2EDuration="10.042450424s" podCreationTimestamp="2025-10-11 10:37:32 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.714669179 +0000 UTC m=+693.499095888" lastFinishedPulling="2025-10-11 10:37:40.886108249 +0000 UTC m=+695.670534948" observedRunningTime="2025-10-11 10:37:42.013696706 +0000 UTC m=+696.798123435" watchObservedRunningTime="2025-10-11 10:37:42.042450424 +0000 UTC m=+696.826877143" Oct 11 10:37:42.043556 master-2 kubenswrapper[4776]: I1011 10:37:42.043463 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:42.100141 master-2 kubenswrapper[4776]: I1011 10:37:42.100070 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:42.107874 master-2 kubenswrapper[4776]: I1011 10:37:42.106295 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:37:42.113426 master-2 kubenswrapper[4776]: I1011 10:37:42.113388 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] 
Oct 11 10:37:42.259131 master-1 kubenswrapper[4771]: I1011 10:37:42.259056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" event={"ID":"d472d171-a5c8-4c71-9d31-7ec0aa3a6db9","Type":"ContainerStarted","Data":"929913ef2840fcd61f8266001999ff45886c89e4544b76d9a33c10d3318cb69e"} Oct 11 10:37:42.261112 master-1 kubenswrapper[4771]: I1011 10:37:42.261064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" event={"ID":"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8","Type":"ContainerStarted","Data":"33d94985a7a913ae973d8eae7753333b859741460e8780a5827eed5110a86a93"} Oct 11 10:37:42.261397 master-1 kubenswrapper[4771]: I1011 10:37:42.261347 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:42.265514 master-1 kubenswrapper[4771]: I1011 10:37:42.265472 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:37:42.296296 master-1 kubenswrapper[4771]: I1011 10:37:42.296190 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" podStartSLOduration=2.9128345319999998 podStartE2EDuration="5.296166184s" podCreationTimestamp="2025-10-11 10:37:37 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.762516588 +0000 UTC m=+690.736743029" lastFinishedPulling="2025-10-11 10:37:41.14584824 +0000 UTC m=+693.120074681" observedRunningTime="2025-10-11 10:37:42.290171493 +0000 UTC m=+694.264397934" watchObservedRunningTime="2025-10-11 10:37:42.296166184 +0000 UTC m=+694.270392625" Oct 11 10:37:42.340927 master-1 kubenswrapper[4771]: I1011 10:37:42.340853 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" 
podStartSLOduration=7.877255046 podStartE2EDuration="10.340834691s" podCreationTimestamp="2025-10-11 10:37:32 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.699708093 +0000 UTC m=+690.673934534" lastFinishedPulling="2025-10-11 10:37:41.163287738 +0000 UTC m=+693.137514179" observedRunningTime="2025-10-11 10:37:42.335921 +0000 UTC m=+694.310147461" watchObservedRunningTime="2025-10-11 10:37:42.340834691 +0000 UTC m=+694.315061132" Oct 11 10:37:42.462885 master-2 kubenswrapper[4776]: I1011 10:37:42.462816 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:42.688139 master-2 kubenswrapper[4776]: I1011 10:37:42.688090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:42.690867 master-2 kubenswrapper[4776]: W1011 10:37:42.690826 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ecc5770_3970_42d0_9773_d8be6fbf04a2.slice/crio-ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2 WatchSource:0}: Error finding container ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2: Status 404 returned error can't find the container with id ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2 Oct 11 10:37:42.999096 master-2 kubenswrapper[4776]: I1011 10:37:42.998975 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" event={"ID":"1ecc5770-3970-42d0-9773-d8be6fbf04a2","Type":"ContainerStarted","Data":"1b7ce82538690cb89e2d7e9f0d406a630bf93e0be90c1ad442461141eb831682"} Oct 11 10:37:42.999096 master-2 kubenswrapper[4776]: I1011 10:37:42.999031 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" 
event={"ID":"1ecc5770-3970-42d0-9773-d8be6fbf04a2","Type":"ContainerStarted","Data":"ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2"} Oct 11 10:37:42.999626 master-2 kubenswrapper[4776]: I1011 10:37:42.999492 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:43.002208 master-2 kubenswrapper[4776]: I1011 10:37:43.002154 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" event={"ID":"6baec280-854f-4a9d-b459-e6ccb1e67c12","Type":"ContainerStarted","Data":"eab4810a3e0e55ee6510b9546f4aaa044d83c3d3f1504fdc228b1fa68c5f7ca8"} Oct 11 10:37:43.002292 master-2 kubenswrapper[4776]: I1011 10:37:43.002209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" event={"ID":"6baec280-854f-4a9d-b459-e6ccb1e67c12","Type":"ContainerStarted","Data":"26793ae6f40b7ef7a71be09ae2079c1aa7d005227d8bda4b4bd0254701c1775d"} Oct 11 10:37:43.002399 master-2 kubenswrapper[4776]: I1011 10:37:43.002355 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:43.016171 master-2 kubenswrapper[4776]: I1011 10:37:43.016129 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:43.036952 master-2 kubenswrapper[4776]: I1011 10:37:43.036869 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" podStartSLOduration=3.036849632 podStartE2EDuration="3.036849632s" podCreationTimestamp="2025-10-11 10:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 
10:37:43.03274506 +0000 UTC m=+697.817171779" watchObservedRunningTime="2025-10-11 10:37:43.036849632 +0000 UTC m=+697.821276341" Oct 11 10:37:43.073507 master-2 kubenswrapper[4776]: I1011 10:37:43.073339 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" podStartSLOduration=3.073311801 podStartE2EDuration="3.073311801s" podCreationTimestamp="2025-10-11 10:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:43.072358115 +0000 UTC m=+697.856784824" watchObservedRunningTime="2025-10-11 10:37:43.073311801 +0000 UTC m=+697.857738510" Oct 11 10:37:43.137725 master-1 kubenswrapper[4771]: I1011 10:37:43.137541 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"] Oct 11 10:37:43.138048 master-1 kubenswrapper[4771]: I1011 10:37:43.137755 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" podUID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" containerName="controller-manager" containerID="cri-o://c570237a7e93abdb8d6cb4489a86eb34cb5e25db0de47a00c9bf05de3a2ba3c4" gracePeriod=30 Oct 11 10:37:43.234927 master-2 kubenswrapper[4776]: I1011 10:37:43.234862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:43.244587 master-1 kubenswrapper[4771]: I1011 10:37:43.244514 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:37:43.271759 master-1 kubenswrapper[4771]: I1011 10:37:43.271694 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" 
event={"ID":"05330706-8231-4c38-be56-416f243992c3","Type":"ContainerStarted","Data":"6cbe0b22e8526c0b78798635a3f5d8c0f8b018506b067842e391cfdf9bf5d7a5"} Oct 11 10:37:43.272806 master-1 kubenswrapper[4771]: I1011 10:37:43.272574 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:43.281603 master-1 kubenswrapper[4771]: I1011 10:37:43.281546 4771 generic.go:334] "Generic (PLEG): container finished" podID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" containerID="c570237a7e93abdb8d6cb4489a86eb34cb5e25db0de47a00c9bf05de3a2ba3c4" exitCode=0 Oct 11 10:37:43.281754 master-1 kubenswrapper[4771]: I1011 10:37:43.281656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" event={"ID":"e23d9d43-9980-4c16-91c4-9fc0bca161e6","Type":"ContainerDied","Data":"c570237a7e93abdb8d6cb4489a86eb34cb5e25db0de47a00c9bf05de3a2ba3c4"} Oct 11 10:37:43.283424 master-1 kubenswrapper[4771]: I1011 10:37:43.282978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:43.283871 master-1 kubenswrapper[4771]: I1011 10:37:43.283599 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" Oct 11 10:37:43.291150 master-1 kubenswrapper[4771]: I1011 10:37:43.288960 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp" Oct 11 10:37:43.326946 master-1 kubenswrapper[4771]: I1011 10:37:43.321675 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6768b5f5f9-r74mm" podStartSLOduration=3.054320858 podStartE2EDuration="6.32165564s" podCreationTimestamp="2025-10-11 10:37:37 +0000 UTC" firstStartedPulling="2025-10-11 10:37:39.236042865 
+0000 UTC m=+691.210269306" lastFinishedPulling="2025-10-11 10:37:42.503377417 +0000 UTC m=+694.477604088" observedRunningTime="2025-10-11 10:37:43.314039622 +0000 UTC m=+695.288266073" watchObservedRunningTime="2025-10-11 10:37:43.32165564 +0000 UTC m=+695.295882081" Oct 11 10:37:43.342847 master-1 kubenswrapper[4771]: I1011 10:37:43.342741 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:37:43.343286 master-1 kubenswrapper[4771]: I1011 10:37:43.343199 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerName="route-controller-manager" containerID="cri-o://5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547" gracePeriod=30 Oct 11 10:37:43.557190 master-1 kubenswrapper[4771]: I1011 10:37:43.555645 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-65bb9777fc-66jxg"] Oct 11 10:37:43.557190 master-1 kubenswrapper[4771]: I1011 10:37:43.556302 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:37:43.558331 master-1 kubenswrapper[4771]: I1011 10:37:43.558258 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 11 10:37:43.558493 master-1 kubenswrapper[4771]: I1011 10:37:43.558472 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pqtx8" Oct 11 10:37:43.560005 master-1 kubenswrapper[4771]: I1011 10:37:43.559259 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 11 10:37:43.560088 master-1 kubenswrapper[4771]: I1011 10:37:43.560047 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:37:43.567209 master-2 kubenswrapper[4776]: I1011 10:37:43.567152 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:43.567864 master-2 kubenswrapper[4776]: I1011 10:37:43.567834 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.573042 master-2 kubenswrapper[4776]: I1011 10:37:43.572997 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pqtx8" Oct 11 10:37:43.573127 master-2 kubenswrapper[4776]: I1011 10:37:43.573015 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 11 10:37:43.573838 master-2 kubenswrapper[4776]: I1011 10:37:43.573796 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 11 10:37:43.586133 master-1 kubenswrapper[4771]: I1011 10:37:43.586077 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-66jxg"] Oct 11 10:37:43.618699 master-2 kubenswrapper[4776]: I1011 10:37:43.618604 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.688132 master-1 kubenswrapper[4771]: I1011 10:37:43.688081 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert\") pod \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\" (UID: 
\"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " Oct 11 10:37:43.688132 master-1 kubenswrapper[4771]: I1011 10:37:43.688130 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles\") pod \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " Oct 11 10:37:43.688378 master-1 kubenswrapper[4771]: I1011 10:37:43.688169 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca\") pod \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " Oct 11 10:37:43.688378 master-1 kubenswrapper[4771]: I1011 10:37:43.688192 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwlwv\" (UniqueName: \"kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv\") pod \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " Oct 11 10:37:43.688378 master-1 kubenswrapper[4771]: I1011 10:37:43.688238 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config\") pod \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\" (UID: \"e23d9d43-9980-4c16-91c4-9fc0bca161e6\") " Oct 11 10:37:43.688470 master-1 kubenswrapper[4771]: I1011 10:37:43.688412 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mqpr\" (UniqueName: \"kubernetes.io/projected/6958daf7-e9a7-4151-8e42-851feedec58e-kube-api-access-5mqpr\") pod \"downloads-65bb9777fc-66jxg\" (UID: \"6958daf7-e9a7-4151-8e42-851feedec58e\") " pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:37:43.689753 master-1 kubenswrapper[4771]: I1011 
10:37:43.689488 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e23d9d43-9980-4c16-91c4-9fc0bca161e6" (UID: "e23d9d43-9980-4c16-91c4-9fc0bca161e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:43.690206 master-1 kubenswrapper[4771]: I1011 10:37:43.690175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e23d9d43-9980-4c16-91c4-9fc0bca161e6" (UID: "e23d9d43-9980-4c16-91c4-9fc0bca161e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:43.691494 master-1 kubenswrapper[4771]: I1011 10:37:43.691142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config" (OuterVolumeSpecName: "config") pod "e23d9d43-9980-4c16-91c4-9fc0bca161e6" (UID: "e23d9d43-9980-4c16-91c4-9fc0bca161e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:43.691494 master-1 kubenswrapper[4771]: I1011 10:37:43.691283 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e23d9d43-9980-4c16-91c4-9fc0bca161e6" (UID: "e23d9d43-9980-4c16-91c4-9fc0bca161e6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:43.694517 master-1 kubenswrapper[4771]: I1011 10:37:43.694491 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv" (OuterVolumeSpecName: "kube-api-access-fwlwv") pod "e23d9d43-9980-4c16-91c4-9fc0bca161e6" (UID: "e23d9d43-9980-4c16-91c4-9fc0bca161e6"). InnerVolumeSpecName "kube-api-access-fwlwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:43.695781 master-2 kubenswrapper[4776]: I1011 10:37:43.695718 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:43.735209 master-2 kubenswrapper[4776]: I1011 10:37:43.735154 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.789392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mqpr\" (UniqueName: \"kubernetes.io/projected/6958daf7-e9a7-4151-8e42-851feedec58e-kube-api-access-5mqpr\") pod \"downloads-65bb9777fc-66jxg\" (UID: \"6958daf7-e9a7-4151-8e42-851feedec58e\") " pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.790021 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.790035 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e23d9d43-9980-4c16-91c4-9fc0bca161e6-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.790049 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.790058 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9d43-9980-4c16-91c4-9fc0bca161e6-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:43.791061 master-1 kubenswrapper[4771]: I1011 10:37:43.790067 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwlwv\" (UniqueName: \"kubernetes.io/projected/e23d9d43-9980-4c16-91c4-9fc0bca161e6-kube-api-access-fwlwv\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:43.806874 master-2 kubenswrapper[4776]: I1011 10:37:43.806767 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.823594 master-1 kubenswrapper[4771]: I1011 10:37:43.823534 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:37:43.833651 master-1 kubenswrapper[4771]: I1011 10:37:43.833580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mqpr\" (UniqueName: \"kubernetes.io/projected/6958daf7-e9a7-4151-8e42-851feedec58e-kube-api-access-5mqpr\") pod \"downloads-65bb9777fc-66jxg\" (UID: \"6958daf7-e9a7-4151-8e42-851feedec58e\") " pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:37:43.838853 master-2 kubenswrapper[4776]: I1011 10:37:43.838736 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:37:43.839066 master-2 kubenswrapper[4776]: I1011 10:37:43.839025 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" containerID="cri-o://a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f" gracePeriod=120 Oct 11 10:37:43.839434 master-2 kubenswrapper[4776]: I1011 10:37:43.839401 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5" gracePeriod=120 Oct 11 10:37:43.871079 master-1 kubenswrapper[4771]: I1011 10:37:43.871025 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:37:43.890709 master-2 kubenswrapper[4776]: I1011 10:37:43.887213 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.994807 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert\") pod \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.994853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zmdm\" (UniqueName: \"kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm\") pod \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.994883 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config\") pod \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.994929 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca\") pod \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\" (UID: \"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3\") " Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.995610 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" (UID: "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.995739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config" (OuterVolumeSpecName: "config") pod "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" (UID: "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.997854 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" (UID: "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:43.999454 master-1 kubenswrapper[4771]: I1011 10:37:43.998699 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm" (OuterVolumeSpecName: "kube-api-access-9zmdm") pod "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" (UID: "e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3"). InnerVolumeSpecName "kube-api-access-9zmdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:44.024833 master-2 kubenswrapper[4776]: I1011 10:37:44.024769 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5" exitCode=0 Oct 11 10:37:44.025507 master-2 kubenswrapper[4776]: I1011 10:37:44.024901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5"} Oct 11 10:37:44.068541 master-2 kubenswrapper[4776]: I1011 10:37:44.068473 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" path="/var/lib/kubelet/pods/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1/volumes" Oct 11 10:37:44.069036 master-2 kubenswrapper[4776]: I1011 10:37:44.069000 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" path="/var/lib/kubelet/pods/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5/volumes" Oct 11 10:37:44.096439 master-1 kubenswrapper[4771]: I1011 10:37:44.096381 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:44.096439 master-1 kubenswrapper[4771]: I1011 10:37:44.096430 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:44.096439 master-1 kubenswrapper[4771]: I1011 10:37:44.096443 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zmdm\" (UniqueName: \"kubernetes.io/projected/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-kube-api-access-9zmdm\") on node 
\"master-1\" DevicePath \"\"" Oct 11 10:37:44.096439 master-1 kubenswrapper[4771]: I1011 10:37:44.096455 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: I1011 10:37:44.200113 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-startinformers ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: I1011 10:37:44.200202 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:44.289772 master-1 kubenswrapper[4771]: I1011 10:37:44.289630 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" Oct 11 10:37:44.293558 master-1 kubenswrapper[4771]: I1011 10:37:44.293492 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-qkp68" event={"ID":"e23d9d43-9980-4c16-91c4-9fc0bca161e6","Type":"ContainerDied","Data":"5577977e3fcec143fb9fe4819b109c252e41520252e8f2be4cdb67371fc4b2fd"} Oct 11 10:37:44.293684 master-1 kubenswrapper[4771]: I1011 10:37:44.293573 4771 scope.go:117] "RemoveContainer" containerID="c570237a7e93abdb8d6cb4489a86eb34cb5e25db0de47a00c9bf05de3a2ba3c4" Oct 11 10:37:44.295889 master-1 kubenswrapper[4771]: I1011 10:37:44.295830 4771 generic.go:334] "Generic (PLEG): container finished" podID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerID="5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547" exitCode=0 Oct 11 10:37:44.296601 master-1 kubenswrapper[4771]: I1011 10:37:44.296567 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" Oct 11 10:37:44.298600 master-1 kubenswrapper[4771]: I1011 10:37:44.298534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" event={"ID":"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3","Type":"ContainerDied","Data":"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547"} Oct 11 10:37:44.298600 master-1 kubenswrapper[4771]: I1011 10:37:44.298582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m" event={"ID":"e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3","Type":"ContainerDied","Data":"fb8e606f605b5a7a5119eb59ac6c30ff451c4fbab3f45cf0454534a92053916c"} Oct 11 10:37:44.310669 master-1 kubenswrapper[4771]: I1011 10:37:44.310619 4771 scope.go:117] "RemoveContainer" containerID="5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547" Oct 11 10:37:44.323657 master-1 kubenswrapper[4771]: I1011 10:37:44.323617 4771 scope.go:117] "RemoveContainer" containerID="5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547" Oct 11 10:37:44.324057 master-1 kubenswrapper[4771]: E1011 10:37:44.324019 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547\": container with ID starting with 5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547 not found: ID does not exist" containerID="5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547" Oct 11 10:37:44.324109 master-1 kubenswrapper[4771]: I1011 10:37:44.324053 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547"} err="failed to get container status 
\"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547\": rpc error: code = NotFound desc = could not find container \"5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547\": container with ID starting with 5088293846da37543ad3781b1214351a957d3cdeb729f2d5b67c96a0a56aa547 not found: ID does not exist" Oct 11 10:37:44.369486 master-1 kubenswrapper[4771]: I1011 10:37:44.369423 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-66jxg"] Oct 11 10:37:44.382161 master-1 kubenswrapper[4771]: W1011 10:37:44.382099 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6958daf7_e9a7_4151_8e42_851feedec58e.slice/crio-e2a80dd3d13e98b0d9cb34b6b2710b9cd6757b9002e8ad477d086d18cf635f54 WatchSource:0}: Error finding container e2a80dd3d13e98b0d9cb34b6b2710b9cd6757b9002e8ad477d086d18cf635f54: Status 404 returned error can't find the container with id e2a80dd3d13e98b0d9cb34b6b2710b9cd6757b9002e8ad477d086d18cf635f54 Oct 11 10:37:44.387696 master-2 kubenswrapper[4776]: I1011 10:37:44.387252 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:44.398432 master-2 kubenswrapper[4776]: W1011 10:37:44.398348 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31d64616_a514_4ae3_bb6d_d6eb14d9147a.slice/crio-724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4 WatchSource:0}: Error finding container 724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4: Status 404 returned error can't find the container with id 724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4 Oct 11 10:37:44.409394 master-1 kubenswrapper[4771]: I1011 10:37:44.409301 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"] Oct 
11 10:37:44.422192 master-1 kubenswrapper[4771]: I1011 10:37:44.422125 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-qkp68"] Oct 11 10:37:44.450613 master-1 kubenswrapper[4771]: I1011 10:37:44.450529 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" path="/var/lib/kubelet/pods/e23d9d43-9980-4c16-91c4-9fc0bca161e6/volumes" Oct 11 10:37:44.459784 master-1 kubenswrapper[4771]: I1011 10:37:44.459703 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:37:44.475951 master-1 kubenswrapper[4771]: I1011 10:37:44.475891 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m"] Oct 11 10:37:44.668253 master-1 kubenswrapper[4771]: I1011 10:37:44.668184 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-6mkbk"] Oct 11 10:37:44.668599 master-1 kubenswrapper[4771]: E1011 10:37:44.668461 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerName="route-controller-manager" Oct 11 10:37:44.668599 master-1 kubenswrapper[4771]: I1011 10:37:44.668481 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerName="route-controller-manager" Oct 11 10:37:44.668599 master-1 kubenswrapper[4771]: E1011 10:37:44.668494 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" containerName="controller-manager" Oct 11 10:37:44.668599 master-1 kubenswrapper[4771]: I1011 10:37:44.668504 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" containerName="controller-manager" Oct 11 10:37:44.668950 master-1 
kubenswrapper[4771]: I1011 10:37:44.668626 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" containerName="route-controller-manager" Oct 11 10:37:44.668950 master-1 kubenswrapper[4771]: I1011 10:37:44.668646 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23d9d43-9980-4c16-91c4-9fc0bca161e6" containerName="controller-manager" Oct 11 10:37:44.671537 master-1 kubenswrapper[4771]: I1011 10:37:44.669216 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.671537 master-1 kubenswrapper[4771]: I1011 10:37:44.671402 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:37:44.671735 master-1 kubenswrapper[4771]: I1011 10:37:44.671678 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:37:44.672806 master-1 kubenswrapper[4771]: I1011 10:37:44.672218 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:44.672806 master-1 kubenswrapper[4771]: I1011 10:37:44.672417 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:37:44.673986 master-1 kubenswrapper[4771]: I1011 10:37:44.673494 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:44.676773 master-1 kubenswrapper[4771]: I1011 10:37:44.676709 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z46lz" Oct 11 10:37:44.683667 master-1 kubenswrapper[4771]: I1011 10:37:44.682990 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29"] Oct 11 10:37:44.683926 master-1 kubenswrapper[4771]: I1011 10:37:44.683855 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.683978 master-1 kubenswrapper[4771]: I1011 10:37:44.683923 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.686711 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.686919 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.687466 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.687678 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.687836 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.688018 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 10:37:44.690929 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-6mkbk"] Oct 11 10:37:44.724239 master-1 kubenswrapper[4771]: I1011 
10:37:44.706049 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29"] Oct 11 10:37:44.827306 master-1 kubenswrapper[4771]: I1011 10:37:44.827165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kn6\" (UniqueName: \"kubernetes.io/projected/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-kube-api-access-k5kn6\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.827517 master-1 kubenswrapper[4771]: I1011 10:37:44.827345 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-config\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.827517 master-1 kubenswrapper[4771]: I1011 10:37:44.827404 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-serving-cert\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.827517 master-1 kubenswrapper[4771]: I1011 10:37:44.827432 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-config\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.827517 master-1 
kubenswrapper[4771]: I1011 10:37:44.827495 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tpk\" (UniqueName: \"kubernetes.io/projected/7dfa56da-3482-4730-ac7a-311905e7396d-kube-api-access-66tpk\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.827766 master-1 kubenswrapper[4771]: I1011 10:37:44.827552 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-client-ca\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.827766 master-1 kubenswrapper[4771]: I1011 10:37:44.827579 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-proxy-ca-bundles\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.827766 master-1 kubenswrapper[4771]: I1011 10:37:44.827601 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-client-ca\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.827766 master-1 kubenswrapper[4771]: I1011 10:37:44.827706 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7dfa56da-3482-4730-ac7a-311905e7396d-serving-cert\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.928720 master-1 kubenswrapper[4771]: I1011 10:37:44.928655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfa56da-3482-4730-ac7a-311905e7396d-serving-cert\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.928720 master-1 kubenswrapper[4771]: I1011 10:37:44.928722 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5kn6\" (UniqueName: \"kubernetes.io/projected/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-kube-api-access-k5kn6\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928744 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-config\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928760 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-serving-cert\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" 
Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-config\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928806 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tpk\" (UniqueName: \"kubernetes.io/projected/7dfa56da-3482-4730-ac7a-311905e7396d-kube-api-access-66tpk\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928822 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-client-ca\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-proxy-ca-bundles\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.928970 master-1 kubenswrapper[4771]: I1011 10:37:44.928855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-client-ca\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.930481 master-1 kubenswrapper[4771]: I1011 10:37:44.930444 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-client-ca\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.930610 master-1 kubenswrapper[4771]: I1011 10:37:44.930588 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-client-ca\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.930718 master-1 kubenswrapper[4771]: I1011 10:37:44.930652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dfa56da-3482-4730-ac7a-311905e7396d-config\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.931155 master-1 kubenswrapper[4771]: I1011 10:37:44.930977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-proxy-ca-bundles\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.932060 master-1 
kubenswrapper[4771]: I1011 10:37:44.931974 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-config\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.932641 master-1 kubenswrapper[4771]: I1011 10:37:44.932582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfa56da-3482-4730-ac7a-311905e7396d-serving-cert\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.932792 master-1 kubenswrapper[4771]: I1011 10:37:44.932751 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-serving-cert\") pod \"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.954094 master-1 kubenswrapper[4771]: I1011 10:37:44.954023 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tpk\" (UniqueName: \"kubernetes.io/projected/7dfa56da-3482-4730-ac7a-311905e7396d-kube-api-access-66tpk\") pod \"route-controller-manager-57c8488cd7-5ld29\" (UID: \"7dfa56da-3482-4730-ac7a-311905e7396d\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:44.962318 master-1 kubenswrapper[4771]: I1011 10:37:44.962247 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5kn6\" (UniqueName: \"kubernetes.io/projected/9524eebe-db33-4399-8fe5-0bcfd9fbd9f4-kube-api-access-k5kn6\") pod 
\"controller-manager-897b595f-6mkbk\" (UID: \"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4\") " pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:44.985229 master-1 kubenswrapper[4771]: I1011 10:37:44.985163 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:45.030205 master-2 kubenswrapper[4776]: I1011 10:37:45.030018 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bkmsm" event={"ID":"31d64616-a514-4ae3-bb6d-d6eb14d9147a","Type":"ContainerStarted","Data":"724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4"} Oct 11 10:37:45.031744 master-2 kubenswrapper[4776]: I1011 10:37:45.031702 4776 generic.go:334] "Generic (PLEG): container finished" podID="e540333c-4b4d-439e-a82a-cd3a97c95a43" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8" exitCode=0 Oct 11 10:37:45.031844 master-2 kubenswrapper[4776]: I1011 10:37:45.031776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerDied","Data":"89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8"} Oct 11 10:37:45.031844 master-2 kubenswrapper[4776]: I1011 10:37:45.031809 4776 scope.go:117] "RemoveContainer" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" Oct 11 10:37:45.032361 master-2 kubenswrapper[4776]: I1011 10:37:45.032303 4776 scope.go:117] "RemoveContainer" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8" Oct 11 10:37:45.032578 master-2 kubenswrapper[4776]: E1011 10:37:45.032482 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=cluster-storage-operator pod=cluster-storage-operator-56d4b95494-9fbb2_openshift-cluster-storage-operator(e540333c-4b4d-439e-a82a-cd3a97c95a43)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:37:45.036764 master-1 kubenswrapper[4771]: I1011 10:37:45.036691 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:45.307461 master-1 kubenswrapper[4771]: I1011 10:37:45.307389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-66jxg" event={"ID":"6958daf7-e9a7-4151-8e42-851feedec58e","Type":"ContainerStarted","Data":"e2a80dd3d13e98b0d9cb34b6b2710b9cd6757b9002e8ad477d086d18cf635f54"} Oct 11 10:37:45.427202 master-1 kubenswrapper[4771]: I1011 10:37:45.427129 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-6mkbk"] Oct 11 10:37:45.431044 master-1 kubenswrapper[4771]: W1011 10:37:45.430985 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9524eebe_db33_4399_8fe5_0bcfd9fbd9f4.slice/crio-1e236066b0bc7b8da43a96fd9fcbddda3e1776a867545aa62fb4bcb21bbe3f38 WatchSource:0}: Error finding container 1e236066b0bc7b8da43a96fd9fcbddda3e1776a867545aa62fb4bcb21bbe3f38: Status 404 returned error can't find the container with id 1e236066b0bc7b8da43a96fd9fcbddda3e1776a867545aa62fb4bcb21bbe3f38 Oct 11 10:37:45.513902 master-1 kubenswrapper[4771]: I1011 10:37:45.513810 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29"] Oct 11 10:37:45.524994 master-1 kubenswrapper[4771]: W1011 10:37:45.524927 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dfa56da_3482_4730_ac7a_311905e7396d.slice/crio-402b1304f304577e40efaeb530aa7b425f38dc76d456006b9d07ff74bd80c578 WatchSource:0}: Error finding container 402b1304f304577e40efaeb530aa7b425f38dc76d456006b9d07ff74bd80c578: Status 404 returned error can't find the container with id 402b1304f304577e40efaeb530aa7b425f38dc76d456006b9d07ff74bd80c578 Oct 11 10:37:46.322214 master-1 kubenswrapper[4771]: I1011 10:37:46.322141 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" event={"ID":"7dfa56da-3482-4730-ac7a-311905e7396d","Type":"ContainerStarted","Data":"ad938e460e23dec7fb0a3338327dbffbabede133f5cc566ab9352d8427863862"} Oct 11 10:37:46.322765 master-1 kubenswrapper[4771]: I1011 10:37:46.322290 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" event={"ID":"7dfa56da-3482-4730-ac7a-311905e7396d","Type":"ContainerStarted","Data":"402b1304f304577e40efaeb530aa7b425f38dc76d456006b9d07ff74bd80c578"} Oct 11 10:37:46.322765 master-1 kubenswrapper[4771]: I1011 10:37:46.322335 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:46.324957 master-1 kubenswrapper[4771]: I1011 10:37:46.324912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" event={"ID":"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4","Type":"ContainerStarted","Data":"a442af9c775f277be6982e731bddbf355cf0e3158bf4e9c3f9821d19f824caa6"} Oct 11 10:37:46.325052 master-1 kubenswrapper[4771]: I1011 10:37:46.324968 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" 
event={"ID":"9524eebe-db33-4399-8fe5-0bcfd9fbd9f4","Type":"ContainerStarted","Data":"1e236066b0bc7b8da43a96fd9fcbddda3e1776a867545aa62fb4bcb21bbe3f38"} Oct 11 10:37:46.325107 master-1 kubenswrapper[4771]: I1011 10:37:46.325093 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:46.328927 master-1 kubenswrapper[4771]: I1011 10:37:46.328888 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" Oct 11 10:37:46.332094 master-1 kubenswrapper[4771]: I1011 10:37:46.332053 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" Oct 11 10:37:46.387745 master-1 kubenswrapper[4771]: I1011 10:37:46.387665 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29" podStartSLOduration=3.387648597 podStartE2EDuration="3.387648597s" podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:46.38567317 +0000 UTC m=+698.359899611" watchObservedRunningTime="2025-10-11 10:37:46.387648597 +0000 UTC m=+698.361875038" Oct 11 10:37:46.442931 master-1 kubenswrapper[4771]: I1011 10:37:46.442883 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3" path="/var/lib/kubelet/pods/e89d5fa2-4b2d-47b8-9f43-fbf5942eaff3/volumes" Oct 11 10:37:46.456338 master-1 kubenswrapper[4771]: I1011 10:37:46.456110 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-897b595f-6mkbk" podStartSLOduration=3.456092693 podStartE2EDuration="3.456092693s" 
podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:46.4552885 +0000 UTC m=+698.429514971" watchObservedRunningTime="2025-10-11 10:37:46.456092693 +0000 UTC m=+698.430319134" Oct 11 10:37:46.833398 master-1 kubenswrapper[4771]: I1011 10:37:46.833313 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57bccbfdf6-l962w"] Oct 11 10:37:46.834120 master-1 kubenswrapper[4771]: I1011 10:37:46.834089 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.838259 master-1 kubenswrapper[4771]: I1011 10:37:46.838231 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:37:46.838663 master-1 kubenswrapper[4771]: I1011 10:37:46.838643 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:37:46.838853 master-1 kubenswrapper[4771]: I1011 10:37:46.838790 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:37:46.840378 master-1 kubenswrapper[4771]: I1011 10:37:46.840338 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:37:46.840678 master-1 kubenswrapper[4771]: I1011 10:37:46.840644 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:37:46.842234 master-1 kubenswrapper[4771]: I1011 10:37:46.842204 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:37:46.858888 master-1 kubenswrapper[4771]: I1011 10:37:46.858811 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.859091 master-1 kubenswrapper[4771]: I1011 10:37:46.858935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.859091 master-1 kubenswrapper[4771]: I1011 10:37:46.858998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.859091 master-1 kubenswrapper[4771]: I1011 10:37:46.859031 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.859241 master-1 kubenswrapper[4771]: I1011 10:37:46.859105 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngk7c\" (UniqueName: \"kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.859241 master-1 kubenswrapper[4771]: 
I1011 10:37:46.859184 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.860970 master-2 kubenswrapper[4776]: I1011 10:37:46.860915 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:37:46.861798 master-2 kubenswrapper[4776]: I1011 10:37:46.861751 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.864411 master-2 kubenswrapper[4776]: I1011 10:37:46.864368 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:37:46.864560 master-2 kubenswrapper[4776]: I1011 10:37:46.864532 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:37:46.864775 master-2 kubenswrapper[4776]: I1011 10:37:46.864686 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:37:46.864926 master-2 kubenswrapper[4776]: I1011 10:37:46.864822 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:37:46.865311 master-2 kubenswrapper[4776]: I1011 10:37:46.865280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:37:46.865443 master-2 kubenswrapper[4776]: I1011 10:37:46.865417 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:37:46.866060 master-1 kubenswrapper[4771]: I1011 10:37:46.866004 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-57bccbfdf6-l962w"] Oct 11 10:37:46.886261 master-2 kubenswrapper[4776]: I1011 10:37:46.886214 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:37:46.923799 master-2 kubenswrapper[4776]: I1011 10:37:46.923728 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924078 master-2 kubenswrapper[4776]: I1011 10:37:46.924058 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924411 master-2 kubenswrapper[4776]: I1011 10:37:46.924328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924478 master-2 kubenswrapper[4776]: I1011 10:37:46.924452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924703 master-2 kubenswrapper[4776]: I1011 
10:37:46.924657 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.925004 master-2 kubenswrapper[4776]: I1011 10:37:46.924926 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.960591 master-1 kubenswrapper[4771]: I1011 10:37:46.960543 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.960913 master-1 kubenswrapper[4771]: I1011 10:37:46.960895 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.961012 master-1 kubenswrapper[4771]: I1011 10:37:46.960996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " 
pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.961147 master-1 kubenswrapper[4771]: I1011 10:37:46.961131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.961273 master-1 kubenswrapper[4771]: I1011 10:37:46.961257 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngk7c\" (UniqueName: \"kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.961419 master-1 kubenswrapper[4771]: I1011 10:37:46.961402 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.961851 master-1 kubenswrapper[4771]: I1011 10:37:46.961823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.962194 master-1 kubenswrapper[4771]: I1011 10:37:46.962179 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " 
pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.962527 master-1 kubenswrapper[4771]: I1011 10:37:46.962447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.967524 master-1 kubenswrapper[4771]: I1011 10:37:46.967491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:46.967766 master-1 kubenswrapper[4771]: I1011 10:37:46.967734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config\") pod \"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:37:47.028867 master-2 kubenswrapper[4776]: I1011 10:37:47.028815 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.028867 master-2 kubenswrapper[4776]: I1011 10:37:47.028864 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: 
\"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028900 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028926 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028957 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030272 master-2 kubenswrapper[4776]: I1011 10:37:47.030253 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod 
\"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030487 master-2 kubenswrapper[4776]: I1011 10:37:47.030387 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030687 master-2 kubenswrapper[4776]: I1011 10:37:47.030622 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.033884 master-2 kubenswrapper[4776]: I1011 10:37:47.033847 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.034382 master-2 kubenswrapper[4776]: I1011 10:37:47.034351 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.038087 master-1 kubenswrapper[4771]: I1011 10:37:47.038037 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngk7c\" (UniqueName: \"kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c\") pod 
\"console-57bccbfdf6-l962w\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " pod="openshift-console/console-57bccbfdf6-l962w"
Oct 11 10:37:47.077629 master-2 kubenswrapper[4776]: I1011 10:37:47.077522 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn"
Oct 11 10:37:47.149730 master-1 kubenswrapper[4771]: I1011 10:37:47.149619 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-l962w"
Oct 11 10:37:47.182722 master-2 kubenswrapper[4776]: I1011 10:37:47.181235 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn"
Oct 11 10:37:47.583012 master-1 kubenswrapper[4771]: I1011 10:37:47.582961 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57bccbfdf6-l962w"]
Oct 11 10:37:47.643774 master-2 kubenswrapper[4776]: I1011 10:37:47.643728 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"]
Oct 11 10:37:47.644690 master-2 kubenswrapper[4776]: W1011 10:37:47.644637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5dca9f_dad3_4712_86f9_3a3e537b5c99.slice/crio-96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8 WatchSource:0}: Error finding container 96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8: Status 404 returned error can't find the container with id 96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8
Oct 11 10:37:47.845716 master-2 kubenswrapper[4776]: I1011 10:37:47.841887 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"]
Oct 11 10:37:47.845716 master-2 kubenswrapper[4776]: I1011 10:37:47.842093 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" containerID="cri-o://6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" gracePeriod=120
Oct 11 10:37:48.072201 master-2 kubenswrapper[4776]: I1011 10:37:48.072127 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerStarted","Data":"96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8"}
Oct 11 10:37:48.338242 master-1 kubenswrapper[4771]: I1011 10:37:48.338150 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-l962w" event={"ID":"a7c05954-4353-4cd1-9130-7fcb832a0493","Type":"ContainerStarted","Data":"8b80ae3136f1889aef88e388582b0fe2c7b18eb0654ca449f848a798f30b4031"}
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: I1011 10:37:48.459510 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: I1011 10:37:48.459565 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: I1011 10:37:49.196150 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:49.197503 master-2 kubenswrapper[4776]: I1011 10:37:49.196229 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:49.947455 master-1 kubenswrapper[4771]: I1011 10:37:49.947389 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-l962w"]
Oct 11 10:37:50.097381 master-2 kubenswrapper[4776]: I1011 10:37:50.097257 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9" exitCode=0
Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"}
Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097749 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"eeeb754b3aa286aa5e74205d303f35958d66321450cc7b407c8db19c823fb525"}
Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097769 4776 scope.go:117] "RemoveContainer" containerID="532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"
Oct 11 10:37:50.967141 master-2 kubenswrapper[4776]: I1011 10:37:50.967068 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:37:50.971385 master-2 kubenswrapper[4776]: I1011 10:37:50.971345 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:37:51.107013 master-2 kubenswrapper[4776]: I1011 10:37:51.106959 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:37:51.114718 master-2 kubenswrapper[4776]: I1011 10:37:51.114614 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5ddb89f76-57kcw"
Oct 11 10:37:53.141486 master-2 kubenswrapper[4776]: I1011 10:37:53.141396 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerStarted","Data":"60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f"}
Oct 11 10:37:53.369947 master-1 kubenswrapper[4771]: I1011 10:37:53.369833 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-l962w" event={"ID":"a7c05954-4353-4cd1-9130-7fcb832a0493","Type":"ContainerStarted","Data":"ac4cbca778491aecc2f71b3dc29feeec7e7dede29f2bc161a327a52e374f391b"}
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: I1011 10:37:53.459501 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:53.460119 master-2 kubenswrapper[4776]: I1011 10:37:53.459558 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:53.625584 master-2 kubenswrapper[4776]: I1011 10:37:53.625504 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57bccbfdf6-2s9dn" podStartSLOduration=2.3514108289999998 podStartE2EDuration="7.625488076s" podCreationTimestamp="2025-10-11 10:37:46 +0000 UTC" firstStartedPulling="2025-10-11 10:37:47.64685942 +0000 UTC m=+702.431286139" lastFinishedPulling="2025-10-11 10:37:52.920936677 +0000 UTC m=+707.705363386" observedRunningTime="2025-10-11 10:37:53.51064451 +0000 UTC m=+708.295071219" watchObservedRunningTime="2025-10-11 10:37:53.625488076 +0000 UTC m=+708.409914785"
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: I1011 10:37:54.197903 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:54.200460 master-2 kubenswrapper[4776]: I1011 10:37:54.198879 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:54.200460 master-2 kubenswrapper[4776]: I1011 10:37:54.199211 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:37:54.230324 master-1 kubenswrapper[4771]: I1011 10:37:54.230181 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57bccbfdf6-l962w" podStartSLOduration=3.5835673679999998 podStartE2EDuration="8.23014872s" podCreationTimestamp="2025-10-11 10:37:46 +0000 UTC" firstStartedPulling="2025-10-11 10:37:47.598769498 +0000 UTC m=+699.572995959" lastFinishedPulling="2025-10-11 10:37:52.24535083 +0000 UTC m=+704.219577311" observedRunningTime="2025-10-11 10:37:54.223285514 +0000 UTC m=+706.197511985" watchObservedRunningTime="2025-10-11 10:37:54.23014872 +0000 UTC m=+706.204375191"
Oct 11 10:37:57.150839 master-1 kubenswrapper[4771]: I1011 10:37:57.150776 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57bccbfdf6-l962w"
Oct 11 10:37:57.181809 master-2 kubenswrapper[4776]: I1011 10:37:57.181720 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57bccbfdf6-2s9dn"
Oct 11 10:37:57.183704 master-2 kubenswrapper[4776]: I1011 10:37:57.183588 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57bccbfdf6-2s9dn"
Oct 11 10:37:57.185152 master-2 kubenswrapper[4776]: I1011 10:37:57.185112 4776 patch_prober.go:28] interesting pod/console-57bccbfdf6-2s9dn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" start-of-body=
Oct 11 10:37:57.185238 master-2 kubenswrapper[4776]: I1011 10:37:57.185168 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" probeResult="failure" output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused"
Oct 11 10:37:58.062331 master-2 kubenswrapper[4776]: I1011 10:37:58.062271 4776 scope.go:117] "RemoveContainer" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8"
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: I1011 10:37:58.459404 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:58.461096 master-2 kubenswrapper[4776]: I1011 10:37:58.459462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:37:58.461096 master-2 kubenswrapper[4776]: I1011 10:37:58.459933 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"
Oct 11 10:37:59.176997 master-2 kubenswrapper[4776]: I1011 10:37:59.176936 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"2b9f0d27b6f21bda8ce6285e683e7a0f5ef61713f522d8ef354c5ed8789a85fa"}
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: I1011 10:37:59.195755 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:37:59.196349 master-2 kubenswrapper[4776]: I1011 10:37:59.195805 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:00.880301 master-1 kubenswrapper[4771]: I1011 10:38:00.880184 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-48crk"]
Oct 11 10:38:00.881709 master-1 kubenswrapper[4771]: I1011 10:38:00.881654 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:00.885425 master-1 kubenswrapper[4771]: I1011 10:38:00.885332 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Oct 11 10:38:00.885683 master-1 kubenswrapper[4771]: I1011 10:38:00.885344 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Oct 11 10:38:00.894208 master-1 kubenswrapper[4771]: I1011 10:38:00.894158 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-48crk"]
Oct 11 10:38:00.909132 master-2 kubenswrapper[4776]: I1011 10:38:00.908950 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"]
Oct 11 10:38:00.910340 master-2 kubenswrapper[4776]: I1011 10:38:00.910308 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:00.912637 master-2 kubenswrapper[4776]: I1011 10:38:00.912588 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Oct 11 10:38:00.912793 master-2 kubenswrapper[4776]: I1011 10:38:00.912764 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Oct 11 10:38:00.947847 master-1 kubenswrapper[4771]: I1011 10:38:00.947777 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:00.947847 master-1 kubenswrapper[4771]: I1011 10:38:00.947836 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45c6c687-55ce-4176-903e-5dadd7371470-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:00.959977 master-2 kubenswrapper[4776]: I1011 10:38:00.959925 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"]
Oct 11 10:38:00.969429 master-2 kubenswrapper[4776]: I1011 10:38:00.969383 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:00.969587 master-2 kubenswrapper[4776]: I1011 10:38:00.969431 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.049185 master-1 kubenswrapper[4771]: I1011 10:38:01.049112 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:01.049483 master-1 kubenswrapper[4771]: I1011 10:38:01.049236 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45c6c687-55ce-4176-903e-5dadd7371470-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:01.049483 master-1 kubenswrapper[4771]: E1011 10:38:01.049405 4771 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Oct 11 10:38:01.049613 master-1 kubenswrapper[4771]: E1011 10:38:01.049523 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert podName:45c6c687-55ce-4176-903e-5dadd7371470 nodeName:}" failed. No retries permitted until 2025-10-11 10:38:01.549494293 +0000 UTC m=+713.523720774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert") pod "networking-console-plugin-85df6bdd68-48crk" (UID: "45c6c687-55ce-4176-903e-5dadd7371470") : secret "networking-console-plugin-cert" not found
Oct 11 10:38:01.050951 master-1 kubenswrapper[4771]: I1011 10:38:01.050885 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/45c6c687-55ce-4176-903e-5dadd7371470-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:01.070218 master-2 kubenswrapper[4776]: I1011 10:38:01.070161 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.070218 master-2 kubenswrapper[4776]: I1011 10:38:01.070211 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.070477 master-2 kubenswrapper[4776]: E1011 10:38:01.070325 4776 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Oct 11 10:38:01.070477 master-2 kubenswrapper[4776]: E1011 10:38:01.070400 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert podName:db93c795-02b8-4e94-9fdc-bdc616f05e56 nodeName:}" failed. No retries permitted until 2025-10-11 10:38:01.570381177 +0000 UTC m=+716.354807896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert") pod "networking-console-plugin-85df6bdd68-qsxrj" (UID: "db93c795-02b8-4e94-9fdc-bdc616f05e56") : secret "networking-console-plugin-cert" not found
Oct 11 10:38:01.071364 master-2 kubenswrapper[4776]: I1011 10:38:01.071336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.555577 master-1 kubenswrapper[4771]: I1011 10:38:01.555501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:01.559991 master-1 kubenswrapper[4771]: I1011 10:38:01.559936 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/45c6c687-55ce-4176-903e-5dadd7371470-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-48crk\" (UID: \"45c6c687-55ce-4176-903e-5dadd7371470\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:01.580306 master-2 kubenswrapper[4776]: I1011 10:38:01.580231 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.584290 master-2 kubenswrapper[4776]: I1011 10:38:01.584220 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.824487 master-2 kubenswrapper[4776]: I1011 10:38:01.824401 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"
Oct 11 10:38:01.851103 master-1 kubenswrapper[4771]: I1011 10:38:01.850922 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk"
Oct 11 10:38:02.290172 master-2 kubenswrapper[4776]: I1011 10:38:02.289985 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"]
Oct 11 10:38:02.293415 master-2 kubenswrapper[4776]: W1011 10:38:02.293384 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb93c795_02b8_4e94_9fdc_bdc616f05e56.slice/crio-e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7 WatchSource:0}: Error finding container e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7: Status 404 returned error can't find the container with id e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7
Oct 11 10:38:02.330970 master-1 kubenswrapper[4771]: I1011 10:38:02.329432 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-48crk"]
Oct 11 10:38:03.211997 master-2 kubenswrapper[4776]: I1011 10:38:03.211942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" event={"ID":"db93c795-02b8-4e94-9fdc-bdc616f05e56","Type":"ContainerStarted","Data":"e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7"}
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: I1011 10:38:03.460465 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:38:03.461411 master-2 kubenswrapper[4776]: I1011 10:38:03.460596 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: I1011 10:38:04.196454 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:38:04.197302 master-2 kubenswrapper[4776]: I1011 10:38:04.196517 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:04.219303 master-2 kubenswrapper[4776]: I1011 10:38:04.219253 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" event={"ID":"db93c795-02b8-4e94-9fdc-bdc616f05e56","Type":"ContainerStarted","Data":"1e4f9b093cb4a2472154a1759b750613d1a4988c25914dfef5cf3d4ab591df57"}
Oct 11 10:38:04.240739 master-2 kubenswrapper[4776]: I1011 10:38:04.240288 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" podStartSLOduration=2.816745739 podStartE2EDuration="4.240267788s" podCreationTimestamp="2025-10-11 10:38:00 +0000 UTC" firstStartedPulling="2025-10-11 10:38:02.296456121 +0000 UTC m=+717.080882820" lastFinishedPulling="2025-10-11 10:38:03.71997816 +0000 UTC m=+718.504404869" observedRunningTime="2025-10-11 10:38:04.238037617 +0000 UTC m=+719.022464326" watchObservedRunningTime="2025-10-11 10:38:04.240267788 +0000 UTC m=+719.024694497"
Oct 11 10:38:06.509824 master-1 kubenswrapper[4771]: I1011 10:38:06.509767 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 11 10:38:07.182166 master-2 kubenswrapper[4776]: I1011 10:38:07.181772 4776 patch_prober.go:28] interesting pod/console-57bccbfdf6-2s9dn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" start-of-body=
Oct 11 10:38:07.182166 master-2 kubenswrapper[4776]: I1011 10:38:07.181825 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" probeResult="failure" output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused"
Oct 11 10:38:07.922167 master-2 kubenswrapper[4776]: I1011 10:38:07.921774 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"]
Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: I1011 10:38:08.458347
4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: I1011 10:38:08.458411 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: I1011 10:38:09.196545 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:09.197421 master-2 kubenswrapper[4776]: I1011 10:38:09.196610 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:09.316783 master-1 kubenswrapper[4771]: I1011 10:38:09.316668 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" podUID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" containerName="oauth-openshift" containerID="cri-o://33d94985a7a913ae973d8eae7753333b859741460e8780a5827eed5110a86a93" gracePeriod=15 Oct 11 10:38:09.506395 master-1 kubenswrapper[4771]: I1011 10:38:09.506264 4771 generic.go:334] "Generic (PLEG): container finished" podID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" containerID="33d94985a7a913ae973d8eae7753333b859741460e8780a5827eed5110a86a93" exitCode=0 Oct 11 10:38:09.506395 master-1 kubenswrapper[4771]: I1011 10:38:09.506333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" event={"ID":"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8","Type":"ContainerDied","Data":"33d94985a7a913ae973d8eae7753333b859741460e8780a5827eed5110a86a93"} Oct 11 10:38:11.347938 master-2 kubenswrapper[4776]: I1011 10:38:11.347866 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:11.348947 master-2 kubenswrapper[4776]: I1011 10:38:11.348906 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.352014 master-2 kubenswrapper[4776]: I1011 10:38:11.351556 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:38:11.359567 master-2 kubenswrapper[4776]: I1011 10:38:11.359525 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:11.535141 master-2 kubenswrapper[4776]: I1011 10:38:11.535041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.535141 master-2 kubenswrapper[4776]: I1011 10:38:11.535137 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.535431 master-2 kubenswrapper[4776]: I1011 10:38:11.535186 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636412 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod 
\"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636518 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636846 master-2 kubenswrapper[4776]: I1011 10:38:11.636587 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636846 master-2 kubenswrapper[4776]: I1011 10:38:11.636686 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.641253 master-1 kubenswrapper[4771]: I1011 10:38:11.641186 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:38:11.641768 master-1 kubenswrapper[4771]: I1011 10:38:11.641578 4771 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager" containerID="cri-o://bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" gracePeriod=30 Oct 11 10:38:11.641768 master-1 kubenswrapper[4771]: I1011 10:38:11.641599 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" gracePeriod=30 Oct 11 10:38:11.641768 master-1 kubenswrapper[4771]: I1011 10:38:11.641712 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="cluster-policy-controller" containerID="cri-o://c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" gracePeriod=30 Oct 11 10:38:11.641768 master-1 kubenswrapper[4771]: I1011 10:38:11.641693 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" gracePeriod=30 Oct 11 10:38:11.642856 master-1 kubenswrapper[4771]: I1011 10:38:11.642684 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:38:11.643048 master-1 kubenswrapper[4771]: E1011 10:38:11.643014 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="cluster-policy-controller" Oct 11 10:38:11.643048 master-1 
kubenswrapper[4771]: I1011 10:38:11.643041 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="cluster-policy-controller" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: E1011 10:38:11.643066 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: I1011 10:38:11.643076 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: E1011 10:38:11.643095 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-cert-syncer" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: I1011 10:38:11.643108 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-cert-syncer" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: E1011 10:38:11.643119 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-recovery-controller" Oct 11 10:38:11.643141 master-1 kubenswrapper[4771]: I1011 10:38:11.643128 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-recovery-controller" Oct 11 10:38:11.643330 master-1 kubenswrapper[4771]: I1011 10:38:11.643252 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-recovery-controller" Oct 11 10:38:11.643330 master-1 kubenswrapper[4771]: I1011 10:38:11.643269 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager" Oct 11 
10:38:11.643330 master-1 kubenswrapper[4771]: I1011 10:38:11.643281 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="cluster-policy-controller" Oct 11 10:38:11.643330 master-1 kubenswrapper[4771]: I1011 10:38:11.643296 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerName="kube-controller-manager-cert-syncer" Oct 11 10:38:11.673251 master-2 kubenswrapper[4776]: I1011 10:38:11.673179 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.683590 master-1 kubenswrapper[4771]: I1011 10:38:11.683530 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.683708 master-1 kubenswrapper[4771]: I1011 10:38:11.683598 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.784980 master-1 kubenswrapper[4771]: I1011 10:38:11.784527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir\") pod 
\"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.784980 master-1 kubenswrapper[4771]: I1011 10:38:11.784591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.784980 master-1 kubenswrapper[4771]: I1011 10:38:11.784655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.784980 master-1 kubenswrapper[4771]: I1011 10:38:11.784713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:11.963596 master-2 kubenswrapper[4776]: I1011 10:38:11.963525 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:12.024889 master-1 kubenswrapper[4771]: W1011 10:38:12.024828 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c6c687_55ce_4176_903e_5dadd7371470.slice/crio-295428d0c3674ff9225efcb779e01906679c427acbd4d700762b0d513f5169a8 WatchSource:0}: Error finding container 295428d0c3674ff9225efcb779e01906679c427acbd4d700762b0d513f5169a8: Status 404 returned error can't find the container with id 295428d0c3674ff9225efcb779e01906679c427acbd4d700762b0d513f5169a8 Oct 11 10:38:12.114925 master-1 kubenswrapper[4771]: I1011 10:38:12.114878 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_0c6dd9eb5bc384e5fbc388e7a2f95c28/kube-controller-manager-cert-syncer/0.log" Oct 11 10:38:12.116592 master-1 kubenswrapper[4771]: I1011 10:38:12.116547 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:12.122927 master-1 kubenswrapper[4771]: I1011 10:38:12.122859 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" Oct 11 10:38:12.190025 master-1 kubenswrapper[4771]: I1011 10:38:12.188434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir\") pod \"0c6dd9eb5bc384e5fbc388e7a2f95c28\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " Oct 11 10:38:12.190025 master-1 kubenswrapper[4771]: I1011 10:38:12.188519 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir\") pod \"0c6dd9eb5bc384e5fbc388e7a2f95c28\" (UID: \"0c6dd9eb5bc384e5fbc388e7a2f95c28\") " Oct 11 10:38:12.190025 master-1 kubenswrapper[4771]: I1011 10:38:12.188649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "0c6dd9eb5bc384e5fbc388e7a2f95c28" (UID: "0c6dd9eb5bc384e5fbc388e7a2f95c28"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:12.190025 master-1 kubenswrapper[4771]: I1011 10:38:12.188843 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.190025 master-1 kubenswrapper[4771]: I1011 10:38:12.188807 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "0c6dd9eb5bc384e5fbc388e7a2f95c28" (UID: "0c6dd9eb5bc384e5fbc388e7a2f95c28"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:12.290039 master-1 kubenswrapper[4771]: I1011 10:38:12.289953 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0c6dd9eb5bc384e5fbc388e7a2f95c28-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.294981 master-1 kubenswrapper[4771]: I1011 10:38:12.294936 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:38:12.295031 master-1 kubenswrapper[4771]: I1011 10:38:12.295004 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:38:12.404964 master-1 kubenswrapper[4771]: I1011 10:38:12.404909 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:38:12.445908 master-1 kubenswrapper[4771]: I1011 10:38:12.445816 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" path="/var/lib/kubelet/pods/0c6dd9eb5bc384e5fbc388e7a2f95c28/volumes" Oct 11 10:38:12.459568 master-1 kubenswrapper[4771]: I1011 10:38:12.459513 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-txx8d"] Oct 11 10:38:12.460217 master-1 kubenswrapper[4771]: E1011 10:38:12.460181 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" containerName="oauth-openshift" Oct 11 10:38:12.460217 master-1 kubenswrapper[4771]: I1011 10:38:12.460212 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" containerName="oauth-openshift" Oct 11 10:38:12.460697 master-1 kubenswrapper[4771]: I1011 10:38:12.460663 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" containerName="oauth-openshift" Oct 11 10:38:12.462142 master-1 kubenswrapper[4771]: I1011 10:38:12.462102 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.476703 master-1 kubenswrapper[4771]: I1011 10:38:12.476631 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-txx8d"] Oct 11 10:38:12.520033 master-1 kubenswrapper[4771]: I1011 10:38:12.519981 4771 generic.go:334] "Generic (PLEG): container finished" podID="d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" containerID="5a01f085e3b1c7fb42b9e1bcc547086d47f6a110bdf85f6e451a5f626e8ea9d3" exitCode=0 Oct 11 10:38:12.520213 master-1 kubenswrapper[4771]: I1011 10:38:12.520073 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4","Type":"ContainerDied","Data":"5a01f085e3b1c7fb42b9e1bcc547086d47f6a110bdf85f6e451a5f626e8ea9d3"} Oct 11 10:38:12.521198 master-1 kubenswrapper[4771]: I1011 10:38:12.521154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk" event={"ID":"45c6c687-55ce-4176-903e-5dadd7371470","Type":"ContainerStarted","Data":"295428d0c3674ff9225efcb779e01906679c427acbd4d700762b0d513f5169a8"} Oct 11 10:38:12.522639 master-1 kubenswrapper[4771]: I1011 10:38:12.522584 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" event={"ID":"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8","Type":"ContainerDied","Data":"8893531f4f4ebb704c0ef08b506e4add808eee05f66a87ee0d2eb8eddb5d49b0"} Oct 11 10:38:12.522713 master-1 kubenswrapper[4771]: I1011 10:38:12.522652 4771 scope.go:117] "RemoveContainer" containerID="33d94985a7a913ae973d8eae7753333b859741460e8780a5827eed5110a86a93" Oct 11 10:38:12.522939 master-1 kubenswrapper[4771]: I1011 10:38:12.522926 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-g7k57" Oct 11 10:38:12.524478 master-1 kubenswrapper[4771]: I1011 10:38:12.524454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-66jxg" event={"ID":"6958daf7-e9a7-4151-8e42-851feedec58e","Type":"ContainerStarted","Data":"a368ebd7e5449ec4254f55beacd6ae1c83830fe1507f669e7fc7b33b7cdb82c3"} Oct 11 10:38:12.525031 master-1 kubenswrapper[4771]: I1011 10:38:12.524963 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:38:12.527266 master-1 kubenswrapper[4771]: I1011 10:38:12.527249 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_0c6dd9eb5bc384e5fbc388e7a2f95c28/kube-controller-manager-cert-syncer/0.log" Oct 11 10:38:12.527343 master-1 kubenswrapper[4771]: I1011 10:38:12.527282 4771 patch_prober.go:28] interesting pod/downloads-65bb9777fc-66jxg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" start-of-body= Oct 11 10:38:12.527419 master-1 kubenswrapper[4771]: I1011 10:38:12.527340 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-66jxg" podUID="6958daf7-e9a7-4151-8e42-851feedec58e" containerName="download-server" probeResult="failure" output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" Oct 11 10:38:12.528045 master-1 kubenswrapper[4771]: I1011 10:38:12.527976 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" exitCode=0 Oct 11 10:38:12.528045 master-1 kubenswrapper[4771]: I1011 10:38:12.527998 4771 generic.go:334] "Generic (PLEG): 
container finished" podID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" exitCode=2 Oct 11 10:38:12.528143 master-1 kubenswrapper[4771]: I1011 10:38:12.528006 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" exitCode=0 Oct 11 10:38:12.528214 master-1 kubenswrapper[4771]: I1011 10:38:12.528153 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c6dd9eb5bc384e5fbc388e7a2f95c28" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" exitCode=0 Oct 11 10:38:12.528214 master-1 kubenswrapper[4771]: I1011 10:38:12.528076 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:12.546274 master-1 kubenswrapper[4771]: I1011 10:38:12.546154 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="0c6dd9eb5bc384e5fbc388e7a2f95c28" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" Oct 11 10:38:12.546961 master-1 kubenswrapper[4771]: I1011 10:38:12.546942 4771 scope.go:117] "RemoveContainer" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.565577 master-1 kubenswrapper[4771]: I1011 10:38:12.565489 4771 scope.go:117] "RemoveContainer" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" Oct 11 10:38:12.573862 master-1 kubenswrapper[4771]: I1011 10:38:12.573788 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-65bb9777fc-66jxg" podStartSLOduration=1.693749492 podStartE2EDuration="29.573762795s" podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="2025-10-11 10:37:44.384431851 +0000 UTC 
m=+696.358658292" lastFinishedPulling="2025-10-11 10:38:12.264445154 +0000 UTC m=+724.238671595" observedRunningTime="2025-10-11 10:38:12.570552622 +0000 UTC m=+724.544779073" watchObservedRunningTime="2025-10-11 10:38:12.573762795 +0000 UTC m=+724.547989246" Oct 11 10:38:12.579635 master-1 kubenswrapper[4771]: I1011 10:38:12.579406 4771 scope.go:117] "RemoveContainer" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.592588 master-1 kubenswrapper[4771]: I1011 10:38:12.592557 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.592700 master-1 kubenswrapper[4771]: I1011 10:38:12.592613 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593236 master-1 kubenswrapper[4771]: I1011 10:38:12.593183 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593312 master-1 kubenswrapper[4771]: I1011 10:38:12.593261 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error\") pod 
\"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593365 master-1 kubenswrapper[4771]: I1011 10:38:12.593315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593416 master-1 kubenswrapper[4771]: I1011 10:38:12.593347 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593416 master-1 kubenswrapper[4771]: I1011 10:38:12.593407 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593489 master-1 kubenswrapper[4771]: I1011 10:38:12.593438 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593489 master-1 kubenswrapper[4771]: I1011 10:38:12.593472 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " 
Oct 11 10:38:12.593562 master-1 kubenswrapper[4771]: I1011 10:38:12.593504 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593595 master-1 kubenswrapper[4771]: I1011 10:38:12.593561 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56hdc\" (UniqueName: \"kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593628 master-1 kubenswrapper[4771]: I1011 10:38:12.593605 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593661 master-1 kubenswrapper[4771]: I1011 10:38:12.593643 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle\") pod \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\" (UID: \"d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8\") " Oct 11 10:38:12.593817 master-1 kubenswrapper[4771]: I1011 10:38:12.593784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") 
" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.593878 master-1 kubenswrapper[4771]: I1011 10:38:12.593832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.593878 master-1 kubenswrapper[4771]: I1011 10:38:12.593871 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gcxh\" (UniqueName: \"kubernetes.io/projected/1ec66eef-540b-4e9a-b63a-02d662224040-kube-api-access-2gcxh\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594024 master-1 kubenswrapper[4771]: I1011 10:38:12.593920 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-audit-policies\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594024 master-1 kubenswrapper[4771]: I1011 10:38:12.593956 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594105 master-1 kubenswrapper[4771]: I1011 10:38:12.594043 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594105 master-1 kubenswrapper[4771]: I1011 10:38:12.594078 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ec66eef-540b-4e9a-b63a-02d662224040-audit-dir\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594417 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594435 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594451 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594437 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig" 
(OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594475 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594798 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-service-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.594867 master-1 kubenswrapper[4771]: I1011 10:38:12.594819 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-cliconfig\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.595629 master-1 kubenswrapper[4771]: I1011 10:38:12.594942 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:12.595629 master-1 kubenswrapper[4771]: I1011 10:38:12.594964 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:12.595845 master-1 kubenswrapper[4771]: I1011 10:38:12.593798 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:12.596396 master-1 kubenswrapper[4771]: I1011 10:38:12.596370 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.596544 master-1 kubenswrapper[4771]: I1011 10:38:12.596522 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.596817 master-1 kubenswrapper[4771]: I1011 10:38:12.596711 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.597720 master-1 kubenswrapper[4771]: I1011 10:38:12.597633 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.598060 master-1 kubenswrapper[4771]: I1011 10:38:12.598031 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.599510 master-1 kubenswrapper[4771]: I1011 10:38:12.599475 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.601434 master-1 kubenswrapper[4771]: I1011 10:38:12.601391 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:12.603413 master-1 kubenswrapper[4771]: I1011 10:38:12.603385 4771 scope.go:117] "RemoveContainer" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.605169 master-1 kubenswrapper[4771]: I1011 10:38:12.605130 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc" (OuterVolumeSpecName: "kube-api-access-56hdc") pod "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" (UID: "d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8"). InnerVolumeSpecName "kube-api-access-56hdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:12.619444 master-1 kubenswrapper[4771]: I1011 10:38:12.619326 4771 scope.go:117] "RemoveContainer" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.619944 master-1 kubenswrapper[4771]: E1011 10:38:12.619841 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": container with ID starting with 06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1 not found: ID does not exist" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.620010 master-1 kubenswrapper[4771]: I1011 10:38:12.619951 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1"} err="failed to get container status \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": rpc error: code = NotFound desc = could not find container \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": container with ID starting with 06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1 not found: ID does not exist" Oct 11 10:38:12.620010 master-1 kubenswrapper[4771]: I1011 10:38:12.620002 4771 scope.go:117] "RemoveContainer" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" Oct 11 10:38:12.620584 master-1 kubenswrapper[4771]: E1011 10:38:12.620464 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": container with ID starting with 38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5 not found: ID does not exist" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" 
Oct 11 10:38:12.620584 master-1 kubenswrapper[4771]: I1011 10:38:12.620506 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5"} err="failed to get container status \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": rpc error: code = NotFound desc = could not find container \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": container with ID starting with 38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5 not found: ID does not exist" Oct 11 10:38:12.620584 master-1 kubenswrapper[4771]: I1011 10:38:12.620534 4771 scope.go:117] "RemoveContainer" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.620828 master-1 kubenswrapper[4771]: E1011 10:38:12.620798 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": container with ID starting with c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb not found: ID does not exist" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.620913 master-1 kubenswrapper[4771]: I1011 10:38:12.620825 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb"} err="failed to get container status \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": rpc error: code = NotFound desc = could not find container \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": container with ID starting with c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb not found: ID does not exist" Oct 11 10:38:12.620913 master-1 kubenswrapper[4771]: I1011 10:38:12.620841 4771 scope.go:117] "RemoveContainer" 
containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.621131 master-1 kubenswrapper[4771]: E1011 10:38:12.621101 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": container with ID starting with bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db not found: ID does not exist" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.621184 master-1 kubenswrapper[4771]: I1011 10:38:12.621127 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db"} err="failed to get container status \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": rpc error: code = NotFound desc = could not find container \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": container with ID starting with bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db not found: ID does not exist" Oct 11 10:38:12.621184 master-1 kubenswrapper[4771]: I1011 10:38:12.621145 4771 scope.go:117] "RemoveContainer" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.621429 master-1 kubenswrapper[4771]: I1011 10:38:12.621406 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1"} err="failed to get container status \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": rpc error: code = NotFound desc = could not find container \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": container with ID starting with 06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1 not found: ID does not exist" Oct 11 10:38:12.621495 master-1 
kubenswrapper[4771]: I1011 10:38:12.621430 4771 scope.go:117] "RemoveContainer" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" Oct 11 10:38:12.621947 master-1 kubenswrapper[4771]: I1011 10:38:12.621913 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5"} err="failed to get container status \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": rpc error: code = NotFound desc = could not find container \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": container with ID starting with 38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5 not found: ID does not exist" Oct 11 10:38:12.621947 master-1 kubenswrapper[4771]: I1011 10:38:12.621937 4771 scope.go:117] "RemoveContainer" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.622267 master-1 kubenswrapper[4771]: I1011 10:38:12.622243 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb"} err="failed to get container status \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": rpc error: code = NotFound desc = could not find container \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": container with ID starting with c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb not found: ID does not exist" Oct 11 10:38:12.622267 master-1 kubenswrapper[4771]: I1011 10:38:12.622261 4771 scope.go:117] "RemoveContainer" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.622579 master-1 kubenswrapper[4771]: I1011 10:38:12.622560 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db"} 
err="failed to get container status \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": rpc error: code = NotFound desc = could not find container \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": container with ID starting with bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db not found: ID does not exist" Oct 11 10:38:12.622579 master-1 kubenswrapper[4771]: I1011 10:38:12.622578 4771 scope.go:117] "RemoveContainer" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.627053 master-1 kubenswrapper[4771]: I1011 10:38:12.627011 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1"} err="failed to get container status \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": rpc error: code = NotFound desc = could not find container \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": container with ID starting with 06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1 not found: ID does not exist" Oct 11 10:38:12.627053 master-1 kubenswrapper[4771]: I1011 10:38:12.627042 4771 scope.go:117] "RemoveContainer" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" Oct 11 10:38:12.627340 master-1 kubenswrapper[4771]: I1011 10:38:12.627310 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5"} err="failed to get container status \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": rpc error: code = NotFound desc = could not find container \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": container with ID starting with 38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5 not found: ID does not exist" Oct 11 10:38:12.627412 master-1 
kubenswrapper[4771]: I1011 10:38:12.627340 4771 scope.go:117] "RemoveContainer" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.627754 master-1 kubenswrapper[4771]: I1011 10:38:12.627682 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb"} err="failed to get container status \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": rpc error: code = NotFound desc = could not find container \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": container with ID starting with c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb not found: ID does not exist" Oct 11 10:38:12.627754 master-1 kubenswrapper[4771]: I1011 10:38:12.627734 4771 scope.go:117] "RemoveContainer" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.628268 master-1 kubenswrapper[4771]: I1011 10:38:12.628210 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db"} err="failed to get container status \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": rpc error: code = NotFound desc = could not find container \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": container with ID starting with bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db not found: ID does not exist" Oct 11 10:38:12.628349 master-1 kubenswrapper[4771]: I1011 10:38:12.628263 4771 scope.go:117] "RemoveContainer" containerID="06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1" Oct 11 10:38:12.628674 master-1 kubenswrapper[4771]: I1011 10:38:12.628646 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1"} 
err="failed to get container status \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": rpc error: code = NotFound desc = could not find container \"06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1\": container with ID starting with 06f70e119ed613e4288345adac3908921043a544ab2e25638aba6006871402c1 not found: ID does not exist" Oct 11 10:38:12.628764 master-1 kubenswrapper[4771]: I1011 10:38:12.628674 4771 scope.go:117] "RemoveContainer" containerID="38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5" Oct 11 10:38:12.629151 master-1 kubenswrapper[4771]: I1011 10:38:12.629056 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5"} err="failed to get container status \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": rpc error: code = NotFound desc = could not find container \"38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5\": container with ID starting with 38defd4315f781404a7b26e3ce886c78170b46fa258e5d8165a7bbf19f839fb5 not found: ID does not exist" Oct 11 10:38:12.629151 master-1 kubenswrapper[4771]: I1011 10:38:12.629082 4771 scope.go:117] "RemoveContainer" containerID="c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb" Oct 11 10:38:12.629404 master-1 kubenswrapper[4771]: I1011 10:38:12.629379 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb"} err="failed to get container status \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": rpc error: code = NotFound desc = could not find container \"c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb\": container with ID starting with c70e6cc8f9016d52eff2617265ed98972e2a524994971b81a6c9f6c45650cecb not found: ID does not exist" Oct 11 10:38:12.629475 master-1 
kubenswrapper[4771]: I1011 10:38:12.629406 4771 scope.go:117] "RemoveContainer" containerID="bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db" Oct 11 10:38:12.629721 master-1 kubenswrapper[4771]: I1011 10:38:12.629696 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db"} err="failed to get container status \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": rpc error: code = NotFound desc = could not find container \"bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db\": container with ID starting with bc03748902a6d44f293d452c1632075da1fc534b6acb87323d789e53beede2db not found: ID does not exist" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695495 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695520 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695544 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gcxh\" (UniqueName: \"kubernetes.io/projected/1ec66eef-540b-4e9a-b63a-02d662224040-kube-api-access-2gcxh\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-audit-policies\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695628 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ec66eef-540b-4e9a-b63a-02d662224040-audit-dir\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695813 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695890 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.695960 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.696014 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.696031 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-provider-selection\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.696761 master-1 kubenswrapper[4771]: I1011 10:38:12.696031 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ec66eef-540b-4e9a-b63a-02d662224040-audit-dir\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696047 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-router-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696149 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-login\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696166 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-user-template-error\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696180 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-policies\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696194 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696206 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696217 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-session\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: 
I1011 10:38:12.696229 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56hdc\" (UniqueName: \"kubernetes.io/projected/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-kube-api-access-56hdc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696281 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8-v4-0-config-system-ocp-branding-template\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696416 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.696551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-audit-policies\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.697402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.697982 master-1 kubenswrapper[4771]: I1011 10:38:12.697430 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.700529 master-1 kubenswrapper[4771]: I1011 10:38:12.700458 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.700701 master-1 kubenswrapper[4771]: I1011 10:38:12.700676 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.701024 master-1 kubenswrapper[4771]: I1011 10:38:12.700924 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.701024 master-1 kubenswrapper[4771]: I1011 10:38:12.700962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.701186 master-1 kubenswrapper[4771]: I1011 10:38:12.701042 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.701287 master-1 kubenswrapper[4771]: I1011 10:38:12.701267 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.702062 master-1 kubenswrapper[4771]: I1011 10:38:12.702007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ec66eef-540b-4e9a-b63a-02d662224040-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.714173 master-1 kubenswrapper[4771]: I1011 10:38:12.714139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gcxh\" (UniqueName: \"kubernetes.io/projected/1ec66eef-540b-4e9a-b63a-02d662224040-kube-api-access-2gcxh\") pod \"oauth-openshift-6fccd5ccc-txx8d\" (UID: \"1ec66eef-540b-4e9a-b63a-02d662224040\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.718389 master-1 kubenswrapper[4771]: I1011 
10:38:12.718250 4771 scope.go:117] "RemoveContainer" containerID="ed4ea2c827d3365e80a136d8fc9c70fdea44747628fc9e1b440208d196a14d73" Oct 11 10:38:12.742518 master-1 kubenswrapper[4771]: I1011 10:38:12.742422 4771 scope.go:117] "RemoveContainer" containerID="5314d6ef2281ac080baefb268e1b24e3959c52d75eecf8bba9e60d0238801c00" Oct 11 10:38:12.761538 master-1 kubenswrapper[4771]: I1011 10:38:12.761471 4771 scope.go:117] "RemoveContainer" containerID="9b7973318d321c4747b9166204be01b90470f6b7ff6c1031063eb5d24ec05b0e" Oct 11 10:38:12.796720 master-1 kubenswrapper[4771]: I1011 10:38:12.796276 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:12.856263 master-1 kubenswrapper[4771]: I1011 10:38:12.856205 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:38:12.867854 master-1 kubenswrapper[4771]: I1011 10:38:12.867809 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-g7k57"] Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: I1011 10:38:13.459374 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: 
[+]poststarthook/max-in-flight-filter ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:13.460293 master-2 kubenswrapper[4776]: I1011 10:38:13.459442 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:13.537201 master-1 kubenswrapper[4771]: I1011 10:38:13.537087 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk" event={"ID":"45c6c687-55ce-4176-903e-5dadd7371470","Type":"ContainerStarted","Data":"0ea4367e2967b4a12600d203935e4ed28870df606f1322b3a2afbe5f5edbccd3"} Oct 11 10:38:13.540901 master-1 kubenswrapper[4771]: I1011 10:38:13.540846 4771 patch_prober.go:28] interesting pod/downloads-65bb9777fc-66jxg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" start-of-body= Oct 11 10:38:13.541000 master-1 kubenswrapper[4771]: I1011 10:38:13.540929 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-66jxg" podUID="6958daf7-e9a7-4151-8e42-851feedec58e" containerName="download-server" probeResult="failure" output="Get \"http://10.129.0.66:8080/\": dial 
tcp 10.129.0.66:8080: connect: connection refused" Oct 11 10:38:13.564616 master-1 kubenswrapper[4771]: I1011 10:38:13.561577 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-85df6bdd68-48crk" podStartSLOduration=12.381453325 podStartE2EDuration="13.561555717s" podCreationTimestamp="2025-10-11 10:38:00 +0000 UTC" firstStartedPulling="2025-10-11 10:38:12.027620572 +0000 UTC m=+724.001847023" lastFinishedPulling="2025-10-11 10:38:13.207722954 +0000 UTC m=+725.181949415" observedRunningTime="2025-10-11 10:38:13.559694743 +0000 UTC m=+725.533921224" watchObservedRunningTime="2025-10-11 10:38:13.561555717 +0000 UTC m=+725.535782188" Oct 11 10:38:13.638633 master-1 kubenswrapper[4771]: I1011 10:38:13.638503 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-txx8d"] Oct 11 10:38:13.872776 master-1 kubenswrapper[4771]: I1011 10:38:13.872371 4771 patch_prober.go:28] interesting pod/downloads-65bb9777fc-66jxg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" start-of-body= Oct 11 10:38:13.872776 master-1 kubenswrapper[4771]: I1011 10:38:13.872430 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-66jxg" podUID="6958daf7-e9a7-4151-8e42-851feedec58e" containerName="download-server" probeResult="failure" output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" Oct 11 10:38:13.872776 master-1 kubenswrapper[4771]: I1011 10:38:13.872557 4771 patch_prober.go:28] interesting pod/downloads-65bb9777fc-66jxg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" start-of-body= Oct 11 10:38:13.872776 
master-1 kubenswrapper[4771]: I1011 10:38:13.872655 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65bb9777fc-66jxg" podUID="6958daf7-e9a7-4151-8e42-851feedec58e" containerName="download-server" probeResult="failure" output="Get \"http://10.129.0.66:8080/\": dial tcp 10.129.0.66:8080: connect: connection refused" Oct 11 10:38:13.913226 master-1 kubenswrapper[4771]: I1011 10:38:13.913179 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:38:14.024814 master-1 kubenswrapper[4771]: I1011 10:38:14.024640 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir\") pod \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " Oct 11 10:38:14.025159 master-1 kubenswrapper[4771]: I1011 10:38:14.024756 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" (UID: "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:14.025986 master-1 kubenswrapper[4771]: I1011 10:38:14.025348 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access\") pod \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " Oct 11 10:38:14.026298 master-1 kubenswrapper[4771]: I1011 10:38:14.026272 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock\") pod \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\" (UID: \"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4\") " Oct 11 10:38:14.026611 master-1 kubenswrapper[4771]: I1011 10:38:14.026434 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock" (OuterVolumeSpecName: "var-lock") pod "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" (UID: "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:14.027317 master-1 kubenswrapper[4771]: I1011 10:38:14.027278 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:14.027672 master-1 kubenswrapper[4771]: I1011 10:38:14.027639 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:14.029145 master-1 kubenswrapper[4771]: I1011 10:38:14.029070 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" (UID: "d61f55f6-6e03-40ca-aa96-cb6ba21c39b4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:14.129505 master-1 kubenswrapper[4771]: I1011 10:38:14.129309 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d61f55f6-6e03-40ca-aa96-cb6ba21c39b4-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: I1011 10:38:14.196284 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:14.197191 master-2 kubenswrapper[4776]: I1011 10:38:14.196364 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:14.447683 master-1 kubenswrapper[4771]: I1011 10:38:14.447516 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8" path="/var/lib/kubelet/pods/d2a2fcbb-6cfa-4d22-ba29-7f08edccdee8/volumes" Oct 11 10:38:14.549126 master-1 kubenswrapper[4771]: I1011 10:38:14.548609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" event={"ID":"1ec66eef-540b-4e9a-b63a-02d662224040","Type":"ContainerStarted","Data":"fcd7151178a5f5be90381dc50d2b0387c668e8ffa4d74ae87b04086b2bf41165"} Oct 11 10:38:14.549126 master-1 kubenswrapper[4771]: I1011 10:38:14.548686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" event={"ID":"1ec66eef-540b-4e9a-b63a-02d662224040","Type":"ContainerStarted","Data":"9444a4249ba1568aacf3e1abe26947d48b9cfe0ff588bf5939efb08250e848de"} Oct 11 10:38:14.549126 master-1 kubenswrapper[4771]: I1011 10:38:14.548877 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:14.552489 master-1 kubenswrapper[4771]: I1011 10:38:14.552420 
4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"d61f55f6-6e03-40ca-aa96-cb6ba21c39b4","Type":"ContainerDied","Data":"d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c"} Oct 11 10:38:14.552623 master-1 kubenswrapper[4771]: I1011 10:38:14.552499 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51941eb09c3e4a595d1102ccd7dab8966ed8157ff482e09bff14b3b01ba141c" Oct 11 10:38:14.552891 master-1 kubenswrapper[4771]: I1011 10:38:14.552850 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1" Oct 11 10:38:14.558018 master-1 kubenswrapper[4771]: I1011 10:38:14.557950 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" Oct 11 10:38:14.614730 master-1 kubenswrapper[4771]: I1011 10:38:14.614600 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fccd5ccc-txx8d" podStartSLOduration=14.614571878 podStartE2EDuration="14.614571878s" podCreationTimestamp="2025-10-11 10:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:14.584668881 +0000 UTC m=+726.558895362" watchObservedRunningTime="2025-10-11 10:38:14.614571878 +0000 UTC m=+726.588798349" Oct 11 10:38:14.690636 master-2 kubenswrapper[4776]: I1011 10:38:14.690590 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:15.649450 master-2 kubenswrapper[4776]: I1011 10:38:15.649362 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:15.651340 master-2 kubenswrapper[4776]: W1011 10:38:15.651296 4776 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8755f64d_7ff8_4df3_ae55_c1154ba02830.slice/crio-778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610 WatchSource:0}: Error finding container 778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610: Status 404 returned error can't find the container with id 778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610 Oct 11 10:38:16.286856 master-2 kubenswrapper[4776]: I1011 10:38:16.286646 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerStarted","Data":"9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc"} Oct 11 10:38:16.286856 master-2 kubenswrapper[4776]: I1011 10:38:16.286709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerStarted","Data":"778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610"} Oct 11 10:38:16.288740 master-2 kubenswrapper[4776]: I1011 10:38:16.288660 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bkmsm" event={"ID":"31d64616-a514-4ae3-bb6d-d6eb14d9147a","Type":"ContainerStarted","Data":"cb5441fccf13a8d6993a2d076504c5ed5dd8298eee96bf1cf619fbea6519355c"} Oct 11 10:38:16.289443 master-2 kubenswrapper[4776]: I1011 10:38:16.289421 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:38:16.291238 master-2 kubenswrapper[4776]: I1011 10:38:16.291136 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:16.291238 master-2 
kubenswrapper[4776]: I1011 10:38:16.291187 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" Oct 11 10:38:17.267538 master-1 kubenswrapper[4771]: I1011 10:38:17.267436 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd" Oct 11 10:38:17.293976 master-1 kubenswrapper[4771]: I1011 10:38:17.293878 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:38:17.293976 master-1 kubenswrapper[4771]: I1011 10:38:17.293959 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:38:17.295078 master-2 kubenswrapper[4776]: I1011 10:38:17.294998 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:17.295078 master-2 kubenswrapper[4776]: I1011 10:38:17.295074 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get 
\"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" Oct 11 10:38:17.449902 master-2 kubenswrapper[4776]: I1011 10:38:17.444698 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-65bb9777fc-bkmsm" podStartSLOduration=3.625507675 podStartE2EDuration="34.444656343s" podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="2025-10-11 10:37:44.401075091 +0000 UTC m=+699.185501820" lastFinishedPulling="2025-10-11 10:38:15.220223779 +0000 UTC m=+730.004650488" observedRunningTime="2025-10-11 10:38:17.442522854 +0000 UTC m=+732.226949583" watchObservedRunningTime="2025-10-11 10:38:17.444656343 +0000 UTC m=+732.229083052" Oct 11 10:38:17.849475 master-2 kubenswrapper[4776]: I1011 10:38:17.849391 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:38:18.299064 master-2 kubenswrapper[4776]: I1011 10:38:18.298982 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:18.299064 master-2 kubenswrapper[4776]: I1011 10:38:18.299041 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" Oct 11 10:38:18.406717 master-1 kubenswrapper[4771]: I1011 10:38:18.406539 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57bccbfdf6-l962w" podUID="a7c05954-4353-4cd1-9130-7fcb832a0493" containerName="console" 
containerID="cri-o://ac4cbca778491aecc2f71b3dc29feeec7e7dede29f2bc161a327a52e374f391b" gracePeriod=15 Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: I1011 10:38:18.458638 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:18.459458 master-2 kubenswrapper[4776]: I1011 10:38:18.458754 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:18.548961 master-2 kubenswrapper[4776]: 
I1011 10:38:18.548884 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-2" podStartSLOduration=7.54886682 podStartE2EDuration="7.54886682s" podCreationTimestamp="2025-10-11 10:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:18.545067376 +0000 UTC m=+733.329494125" watchObservedRunningTime="2025-10-11 10:38:18.54886682 +0000 UTC m=+733.333293529" Oct 11 10:38:18.584825 master-1 kubenswrapper[4771]: I1011 10:38:18.584736 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-l962w_a7c05954-4353-4cd1-9130-7fcb832a0493/console/0.log" Oct 11 10:38:18.585093 master-1 kubenswrapper[4771]: I1011 10:38:18.584852 4771 generic.go:334] "Generic (PLEG): container finished" podID="a7c05954-4353-4cd1-9130-7fcb832a0493" containerID="ac4cbca778491aecc2f71b3dc29feeec7e7dede29f2bc161a327a52e374f391b" exitCode=2 Oct 11 10:38:18.585093 master-1 kubenswrapper[4771]: I1011 10:38:18.584913 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-l962w" event={"ID":"a7c05954-4353-4cd1-9130-7fcb832a0493","Type":"ContainerDied","Data":"ac4cbca778491aecc2f71b3dc29feeec7e7dede29f2bc161a327a52e374f391b"} Oct 11 10:38:18.640905 master-1 kubenswrapper[4771]: I1011 10:38:18.640820 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 11 10:38:18.641196 master-1 kubenswrapper[4771]: E1011 10:38:18.641147 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" containerName="installer" Oct 11 10:38:18.641196 master-1 kubenswrapper[4771]: I1011 10:38:18.641179 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" containerName="installer" Oct 11 10:38:18.641452 master-1 
kubenswrapper[4771]: I1011 10:38:18.641405 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61f55f6-6e03-40ca-aa96-cb6ba21c39b4" containerName="installer" Oct 11 10:38:18.642407 master-1 kubenswrapper[4771]: I1011 10:38:18.642313 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.646436 master-1 kubenswrapper[4771]: I1011 10:38:18.646342 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:38:18.709780 master-1 kubenswrapper[4771]: I1011 10:38:18.709602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.709780 master-1 kubenswrapper[4771]: I1011 10:38:18.709674 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.709780 master-1 kubenswrapper[4771]: I1011 10:38:18.709708 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.811507 master-1 kubenswrapper[4771]: I1011 10:38:18.811348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.811507 master-1 kubenswrapper[4771]: I1011 10:38:18.811462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.811507 master-1 kubenswrapper[4771]: I1011 10:38:18.811519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.811969 master-1 kubenswrapper[4771]: I1011 10:38:18.811524 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.811969 master-1 kubenswrapper[4771]: I1011 10:38:18.811603 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:18.962163 master-1 kubenswrapper[4771]: I1011 10:38:18.962033 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-57bccbfdf6-l962w_a7c05954-4353-4cd1-9130-7fcb832a0493/console/0.log" Oct 11 10:38:18.962163 master-1 kubenswrapper[4771]: I1011 10:38:18.962141 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:38:18.975892 master-1 kubenswrapper[4771]: I1011 10:38:18.975830 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 11 10:38:18.984561 master-2 kubenswrapper[4776]: I1011 10:38:18.984438 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:38:18.985193 master-2 kubenswrapper[4776]: I1011 10:38:18.985108 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" containerID="cri-o://bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" gracePeriod=170 Oct 11 10:38:19.014385 master-1 kubenswrapper[4771]: I1011 10:38:19.014277 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.014385 master-1 kubenswrapper[4771]: I1011 10:38:19.014348 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.014610 master-1 kubenswrapper[4771]: I1011 10:38:19.014438 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ngk7c\" (UniqueName: \"kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.014610 master-1 kubenswrapper[4771]: I1011 10:38:19.014496 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.014610 master-1 kubenswrapper[4771]: I1011 10:38:19.014579 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.015969 master-1 kubenswrapper[4771]: I1011 10:38:19.014669 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca\") pod \"a7c05954-4353-4cd1-9130-7fcb832a0493\" (UID: \"a7c05954-4353-4cd1-9130-7fcb832a0493\") " Oct 11 10:38:19.015969 master-1 kubenswrapper[4771]: I1011 10:38:19.014761 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:19.015969 master-1 kubenswrapper[4771]: I1011 10:38:19.015072 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-oauth-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.015969 master-1 kubenswrapper[4771]: I1011 10:38:19.015545 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config" (OuterVolumeSpecName: "console-config") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:19.015969 master-1 kubenswrapper[4771]: I1011 10:38:19.015643 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:19.018735 master-1 kubenswrapper[4771]: I1011 10:38:19.018659 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:19.018984 master-1 kubenswrapper[4771]: I1011 10:38:19.018927 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c" (OuterVolumeSpecName: "kube-api-access-ngk7c") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "kube-api-access-ngk7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:19.019499 master-1 kubenswrapper[4771]: I1011 10:38:19.019438 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a7c05954-4353-4cd1-9130-7fcb832a0493" (UID: "a7c05954-4353-4cd1-9130-7fcb832a0493"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:19.116994 master-1 kubenswrapper[4771]: I1011 10:38:19.116891 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-service-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.116994 master-1 kubenswrapper[4771]: I1011 10:38:19.116952 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7c05954-4353-4cd1-9130-7fcb832a0493-console-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.116994 master-1 kubenswrapper[4771]: I1011 10:38:19.116975 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngk7c\" (UniqueName: \"kubernetes.io/projected/a7c05954-4353-4cd1-9130-7fcb832a0493-kube-api-access-ngk7c\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.116994 master-1 kubenswrapper[4771]: I1011 10:38:19.116997 4771 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.117465 master-1 kubenswrapper[4771]: I1011 10:38:19.117018 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7c05954-4353-4cd1-9130-7fcb832a0493-console-oauth-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: I1011 10:38:19.199591 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: I1011 10:38:19.199653 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:19.593441 master-1 kubenswrapper[4771]: I1011 10:38:19.593317 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-l962w_a7c05954-4353-4cd1-9130-7fcb832a0493/console/0.log" Oct 11 10:38:19.594530 master-1 kubenswrapper[4771]: I1011 10:38:19.593462 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-l962w" event={"ID":"a7c05954-4353-4cd1-9130-7fcb832a0493","Type":"ContainerDied","Data":"8b80ae3136f1889aef88e388582b0fe2c7b18eb0654ca449f848a798f30b4031"} Oct 11 10:38:19.594530 master-1 kubenswrapper[4771]: I1011 10:38:19.593513 4771 scope.go:117] "RemoveContainer" containerID="ac4cbca778491aecc2f71b3dc29feeec7e7dede29f2bc161a327a52e374f391b" Oct 11 10:38:19.594530 master-1 kubenswrapper[4771]: I1011 10:38:19.593572 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57bccbfdf6-l962w" Oct 11 10:38:20.033886 master-1 kubenswrapper[4771]: I1011 10:38:20.033652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:20.162179 master-1 kubenswrapper[4771]: I1011 10:38:20.162067 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:20.571518 master-1 kubenswrapper[4771]: I1011 10:38:20.571391 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-l962w"] Oct 11 10:38:20.751126 master-1 kubenswrapper[4771]: I1011 10:38:20.750932 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"] Oct 11 10:38:20.752228 master-1 kubenswrapper[4771]: E1011 10:38:20.752196 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c05954-4353-4cd1-9130-7fcb832a0493" containerName="console" Oct 11 10:38:20.752390 master-1 kubenswrapper[4771]: I1011 10:38:20.752337 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c05954-4353-4cd1-9130-7fcb832a0493" containerName="console" Oct 11 10:38:20.752656 master-1 kubenswrapper[4771]: I1011 10:38:20.752632 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c05954-4353-4cd1-9130-7fcb832a0493" containerName="console" Oct 11 10:38:20.753537 master-1 kubenswrapper[4771]: I1011 10:38:20.753504 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.757192 master-1 kubenswrapper[4771]: I1011 10:38:20.757113 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:38:20.762014 master-1 kubenswrapper[4771]: I1011 10:38:20.761968 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:38:20.762157 master-1 kubenswrapper[4771]: I1011 10:38:20.762028 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:38:20.762578 master-1 kubenswrapper[4771]: I1011 10:38:20.762536 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:38:20.762697 master-1 kubenswrapper[4771]: I1011 10:38:20.762649 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:38:20.763155 master-1 kubenswrapper[4771]: I1011 10:38:20.763106 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:38:20.769207 master-1 kubenswrapper[4771]: I1011 10:38:20.769169 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:38:20.849245 master-1 kubenswrapper[4771]: I1011 10:38:20.849151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.849574 master-1 kubenswrapper[4771]: I1011 10:38:20.849254 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.849574 master-1 kubenswrapper[4771]: I1011 10:38:20.849300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.849574 master-1 kubenswrapper[4771]: I1011 10:38:20.849339 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzlm\" (UniqueName: \"kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.849809 master-1 kubenswrapper[4771]: I1011 10:38:20.849546 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.849809 master-1 kubenswrapper[4771]: I1011 10:38:20.849628 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.850028 master-1 kubenswrapper[4771]: I1011 
10:38:20.849976 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.925573 master-1 kubenswrapper[4771]: I1011 10:38:20.925444 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"] Oct 11 10:38:20.926931 master-1 kubenswrapper[4771]: I1011 10:38:20.926870 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57bccbfdf6-l962w"] Oct 11 10:38:20.954019 master-1 kubenswrapper[4771]: I1011 10:38:20.953914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzlm\" (UniqueName: \"kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.954019 master-1 kubenswrapper[4771]: I1011 10:38:20.954026 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.954610 master-1 kubenswrapper[4771]: I1011 10:38:20.954106 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.954610 master-1 
kubenswrapper[4771]: I1011 10:38:20.954258 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.954610 master-1 kubenswrapper[4771]: I1011 10:38:20.954296 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.954610 master-1 kubenswrapper[4771]: I1011 10:38:20.954337 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.955957 master-1 kubenswrapper[4771]: I1011 10:38:20.955892 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.956108 master-1 kubenswrapper[4771]: I1011 10:38:20.955962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.956108 master-1 
kubenswrapper[4771]: I1011 10:38:20.956089 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.956296 master-1 kubenswrapper[4771]: I1011 10:38:20.956203 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.957725 master-1 kubenswrapper[4771]: I1011 10:38:20.957689 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.962438 master-1 kubenswrapper[4771]: I1011 10:38:20.962030 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:20.965055 master-1 kubenswrapper[4771]: I1011 10:38:20.964997 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:21.302943 
master-1 kubenswrapper[4771]: I1011 10:38:21.302704 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 11 10:38:21.309000 master-1 kubenswrapper[4771]: W1011 10:38:21.308910 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod66232733_dcfb_4320_a372_ce05d7d777d9.slice/crio-210c1e83cdd57074a16669f3f9ab89020bba5fd50626a163674b365ff40935aa WatchSource:0}: Error finding container 210c1e83cdd57074a16669f3f9ab89020bba5fd50626a163674b365ff40935aa: Status 404 returned error can't find the container with id 210c1e83cdd57074a16669f3f9ab89020bba5fd50626a163674b365ff40935aa Oct 11 10:38:21.613994 master-1 kubenswrapper[4771]: I1011 10:38:21.613877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"66232733-dcfb-4320-a372-ce05d7d777d9","Type":"ContainerStarted","Data":"210c1e83cdd57074a16669f3f9ab89020bba5fd50626a163674b365ff40935aa"} Oct 11 10:38:22.293665 master-1 kubenswrapper[4771]: I1011 10:38:22.293593 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:38:22.293665 master-1 kubenswrapper[4771]: I1011 10:38:22.293660 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:38:22.294406 master-1 kubenswrapper[4771]: I1011 10:38:22.293760 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:38:22.294517 master-1 kubenswrapper[4771]: I1011 10:38:22.294463 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:38:22.294622 master-1 kubenswrapper[4771]: I1011 10:38:22.294546 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:38:22.447976 master-1 kubenswrapper[4771]: I1011 10:38:22.447748 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c05954-4353-4cd1-9130-7fcb832a0493" path="/var/lib/kubelet/pods/a7c05954-4353-4cd1-9130-7fcb832a0493/volumes" Oct 11 10:38:22.452592 master-1 kubenswrapper[4771]: I1011 10:38:22.452429 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzlm\" (UniqueName: \"kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm\") pod \"console-775ff6c4fc-csp4z\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") " pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:22.579567 master-1 kubenswrapper[4771]: I1011 10:38:22.579456 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:22.623839 master-1 kubenswrapper[4771]: I1011 10:38:22.623753 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"66232733-dcfb-4320-a372-ce05d7d777d9","Type":"ContainerStarted","Data":"1a8de711412c9754f899398f555a2bb9a02c8065248c232ba0c054fb5b00ec21"} Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: I1011 10:38:23.460408 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: I1011 10:38:23.460505 4776 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:23.887668 master-1 kubenswrapper[4771]: I1011 10:38:23.887569 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65bb9777fc-66jxg" Oct 11 10:38:23.907219 master-2 kubenswrapper[4776]: I1011 10:38:23.907149 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:38:24.021620 master-1 kubenswrapper[4771]: W1011 10:38:24.021530 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7aa64b_afab_4b3a_b56d_81c324e7a8cb.slice/crio-dbb5133e318020821233bd4743645ca9f974f8d4348733f58f43c17203dfa102 WatchSource:0}: Error finding container dbb5133e318020821233bd4743645ca9f974f8d4348733f58f43c17203dfa102: Status 404 returned error can't find the container with id dbb5133e318020821233bd4743645ca9f974f8d4348733f58f43c17203dfa102 Oct 11 10:38:24.126422 master-1 kubenswrapper[4771]: I1011 10:38:24.125878 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"] Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: I1011 10:38:24.198447 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: I1011 10:38:24.198576 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:24.640607 master-1 kubenswrapper[4771]: I1011 10:38:24.640509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-775ff6c4fc-csp4z" event={"ID":"de7aa64b-afab-4b3a-b56d-81c324e7a8cb","Type":"ContainerStarted","Data":"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"} Oct 11 10:38:24.640888 master-1 kubenswrapper[4771]: I1011 10:38:24.640644 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-775ff6c4fc-csp4z" event={"ID":"de7aa64b-afab-4b3a-b56d-81c324e7a8cb","Type":"ContainerStarted","Data":"dbb5133e318020821233bd4743645ca9f974f8d4348733f58f43c17203dfa102"} Oct 11 10:38:25.436821 master-1 kubenswrapper[4771]: I1011 10:38:25.436724 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:25.456819 master-1 kubenswrapper[4771]: I1011 10:38:25.456755 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="8e33f2eb-84da-43b0-9aad-9b9ee4940b82" Oct 11 10:38:25.456819 master-1 kubenswrapper[4771]: I1011 10:38:25.456803 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="8e33f2eb-84da-43b0-9aad-9b9ee4940b82" Oct 11 10:38:25.753541 master-1 kubenswrapper[4771]: I1011 10:38:25.753332 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-1" podStartSLOduration=7.7532997 podStartE2EDuration="7.7532997s" podCreationTimestamp="2025-10-11 10:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:24.410873153 +0000 UTC m=+736.385099654" watchObservedRunningTime="2025-10-11 10:38:25.7532997 +0000 UTC m=+737.727526171" Oct 11 10:38:26.735630 master-1 kubenswrapper[4771]: I1011 10:38:26.735551 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:38:27.294651 master-1 kubenswrapper[4771]: I1011 10:38:27.294562 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:38:27.294944 master-1 kubenswrapper[4771]: I1011 10:38:27.294656 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:38:28.046218 master-1 kubenswrapper[4771]: I1011 10:38:28.045891 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:28.142590 master-1 kubenswrapper[4771]: I1011 10:38:28.142507 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: I1011 10:38:28.461201 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 
11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: I1011 10:38:28.461270 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:29.094589 master-1 kubenswrapper[4771]: I1011 10:38:29.093630 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-775ff6c4fc-csp4z" podStartSLOduration=40.093601824 podStartE2EDuration="40.093601824s" podCreationTimestamp="2025-10-11 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:29.092806081 +0000 UTC m=+741.067032592" watchObservedRunningTime="2025-10-11 10:38:29.093601824 +0000 UTC m=+741.067828305" Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: I1011 10:38:29.196776 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:29.198039 master-2 kubenswrapper[4776]: I1011 10:38:29.196840 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" 
podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: I1011 10:38:30.266828 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: I1011 10:38:30.266889 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:30.645293 master-1 kubenswrapper[4771]: I1011 10:38:30.645209 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:30.647671 master-1 kubenswrapper[4771]: I1011 10:38:30.647605 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:38:30.680243 master-1 kubenswrapper[4771]: I1011 10:38:30.680156 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0b5de9d609ee1e6c379f71934cb2c3c6","Type":"ContainerStarted","Data":"4247c914a32e821feeb321db49e7b5b061a40ecb112a752686b9ea07098f462f"} Oct 11 10:38:31.689604 master-1 kubenswrapper[4771]: I1011 10:38:31.689519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0b5de9d609ee1e6c379f71934cb2c3c6","Type":"ContainerStarted","Data":"9e6a4086932c3b4c0590b1992411e46984c974a11450de3378bede5ca3045d02"} Oct 11 10:38:32.298476 master-1 kubenswrapper[4771]: I1011 10:38:32.298426 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:38:32.580404 master-1 kubenswrapper[4771]: I1011 10:38:32.580298 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:32.580758 master-1 kubenswrapper[4771]: I1011 10:38:32.580431 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-775ff6c4fc-csp4z" Oct 11 10:38:32.582343 master-1 kubenswrapper[4771]: I1011 10:38:32.582287 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:38:32.582481 master-1 
kubenswrapper[4771]: I1011 10:38:32.582427 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:38:32.697344 master-1 kubenswrapper[4771]: I1011 10:38:32.697243 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0b5de9d609ee1e6c379f71934cb2c3c6","Type":"ContainerStarted","Data":"913e0c188082961ad93b5f6a07d9eda57e62160ccbff129947e77948c758035a"} Oct 11 10:38:32.956730 master-2 kubenswrapper[4776]: I1011 10:38:32.956603 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" containerID="cri-o://60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f" gracePeriod=15 Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425349 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log" Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425399 4776 generic.go:334] "Generic (PLEG): container finished" podID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerID="60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f" exitCode=2 Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425428 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerDied","Data":"60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f"} Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: I1011 10:38:33.459374 4776 patch_prober.go:28] 
interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:33.459999 master-2 kubenswrapper[4776]: I1011 10:38:33.459445 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:33.713722 master-1 kubenswrapper[4771]: I1011 10:38:33.713602 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" 
event={"ID":"0b5de9d609ee1e6c379f71934cb2c3c6","Type":"ContainerStarted","Data":"79e52bbf7393881dfbba04f7a9f71721266d98f1191a6c7be91f8bc0ce4e1139"} Oct 11 10:38:34.163048 master-2 kubenswrapper[4776]: I1011 10:38:34.162985 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log" Oct 11 10:38:34.163048 master-2 kubenswrapper[4776]: I1011 10:38:34.163056 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: I1011 10:38:34.200399 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:34.200515 master-2 
kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: I1011 10:38:34.200494 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:34.286297 master-2 kubenswrapper[4776]: I1011 10:38:34.286196 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286398 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286462 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286608 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286670 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") "
Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286772 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config" (OuterVolumeSpecName: "console-config") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:34.287479 master-2 kubenswrapper[4776]: I1011 10:38:34.287326 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.287479 master-2 kubenswrapper[4776]: I1011 10:38:34.287212 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca" (OuterVolumeSpecName: "service-ca") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:34.287879 master-2 kubenswrapper[4776]: I1011 10:38:34.287778 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:34.289759 master-2 kubenswrapper[4776]: I1011 10:38:34.289655 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:34.289887 master-2 kubenswrapper[4776]: I1011 10:38:34.289774 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:34.292207 master-2 kubenswrapper[4776]: I1011 10:38:34.292151 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c" (OuterVolumeSpecName: "kube-api-access-6rz6c") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "kube-api-access-6rz6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388514 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388557 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388570 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388578 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388586 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:34.431648 master-2 kubenswrapper[4776]: I1011 10:38:34.431581 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log"
Oct 11 10:38:34.431648 master-2 kubenswrapper[4776]: I1011 10:38:34.431634 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerDied","Data":"96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8"}
Oct 11 10:38:34.431966 master-2 kubenswrapper[4776]: I1011 10:38:34.431695 4776 scope.go:117] "RemoveContainer" containerID="60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f"
Oct 11 10:38:34.431966 master-2 kubenswrapper[4776]: I1011 10:38:34.431703 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn"
Oct 11 10:38:34.725504 master-1 kubenswrapper[4771]: I1011 10:38:34.725409 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"0b5de9d609ee1e6c379f71934cb2c3c6","Type":"ContainerStarted","Data":"068b46162b2804f4e661290cc4e58111faa3ee64a5ff733b8a30de9f4b7d070e"}
Oct 11 10:38:35.001960 master-2 kubenswrapper[4776]: I1011 10:38:35.001892 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"]
Oct 11 10:38:35.011075 master-1 kubenswrapper[4771]: I1011 10:38:35.010878 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"]
Oct 11 10:38:35.012957 master-1 kubenswrapper[4771]: I1011 10:38:35.011351 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-1" podUID="66232733-dcfb-4320-a372-ce05d7d777d9" containerName="installer" containerID="cri-o://1a8de711412c9754f899398f555a2bb9a02c8065248c232ba0c054fb5b00ec21" gracePeriod=30
Oct 11 10:38:35.020327 master-2 kubenswrapper[4776]: I1011 10:38:35.015695 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"]
Oct 11 10:38:35.033141 master-1 kubenswrapper[4771]: I1011 10:38:35.033036 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=6.033009036 podStartE2EDuration="6.033009036s" podCreationTimestamp="2025-10-11 10:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:35.031167853 +0000 UTC m=+747.005394314" watchObservedRunningTime="2025-10-11 10:38:35.033009036 +0000 UTC m=+747.007235477"
Oct 11 10:38:36.079483 master-2 kubenswrapper[4776]: I1011 10:38:36.079393 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" path="/var/lib/kubelet/pods/ae5dca9f-dad3-4712-86f9-3a3e537b5c99/volumes"
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: I1011 10:38:38.459150 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: I1011 10:38:38.459214 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:39.095713 master-2 kubenswrapper[4776]: I1011 10:38:39.095617 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"]
Oct 11 10:38:39.096023 master-2 kubenswrapper[4776]: E1011 10:38:39.095970 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console"
Oct 11 10:38:39.096023 master-2 kubenswrapper[4776]: I1011 10:38:39.095992 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console"
Oct 11 10:38:39.096237 master-2 kubenswrapper[4776]: I1011 10:38:39.096197 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console"
Oct 11 10:38:39.097023 master-2 kubenswrapper[4776]: I1011 10:38:39.096980 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.101950 master-2 kubenswrapper[4776]: I1011 10:38:39.101879 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp"
Oct 11 10:38:39.101950 master-2 kubenswrapper[4776]: I1011 10:38:39.101884 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 11 10:38:39.102720 master-2 kubenswrapper[4776]: I1011 10:38:39.102649 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 11 10:38:39.102931 master-2 kubenswrapper[4776]: I1011 10:38:39.102729 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 11 10:38:39.103023 master-2 kubenswrapper[4776]: I1011 10:38:39.102805 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 11 10:38:39.107720 master-2 kubenswrapper[4776]: I1011 10:38:39.107635 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 11 10:38:39.112758 master-2 kubenswrapper[4776]: I1011 10:38:39.112717 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 11 10:38:39.157434 master-2 kubenswrapper[4776]: I1011 10:38:39.157348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.157744 master-2 kubenswrapper[4776]: I1011 10:38:39.157489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.157744 master-2 kubenswrapper[4776]: I1011 10:38:39.157556 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 10:38:39.157811 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 10:38:39.157889 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 10:38:39.157944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.158160 master-2 kubenswrapper[4776]: I1011 10:38:39.158053 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.193280 master-2 kubenswrapper[4776]: I1011 10:38:39.193185 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:38:39.193280 master-2 kubenswrapper[4776]: I1011 10:38:39.193256 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:38:39.260153 master-2 kubenswrapper[4776]: I1011 10:38:39.260057 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260226 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260265 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260397 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260569 master-2 kubenswrapper[4776]: I1011 10:38:39.260459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.260569 master-2 kubenswrapper[4776]: I1011 10:38:39.260510 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.262399 master-2 kubenswrapper[4776]: I1011 10:38:39.262302 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.262753 master-2 kubenswrapper[4776]: I1011 10:38:39.262644 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.263041 master-2 kubenswrapper[4776]: I1011 10:38:39.262988 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.263413 master-2 kubenswrapper[4776]: I1011 10:38:39.263325 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.265793 master-2 kubenswrapper[4776]: I1011 10:38:39.265755 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.266783 master-2 kubenswrapper[4776]: I1011 10:38:39.266733 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:39.464900 master-2 kubenswrapper[4776]: I1011 10:38:39.464806 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"]
Oct 11 10:38:39.728107 master-2 kubenswrapper[4776]: I1011 10:38:39.727942 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" containerID="cri-o://4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" gracePeriod=15
Oct 11 10:38:39.883099 master-2 kubenswrapper[4776]: I1011 10:38:39.883005 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:40.022499 master-2 kubenswrapper[4776]: I1011 10:38:40.022375 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:40.646632 master-1 kubenswrapper[4771]: I1011 10:38:40.646451 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:40.647695 master-1 kubenswrapper[4771]: I1011 10:38:40.646668 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:40.647695 master-1 kubenswrapper[4771]: I1011 10:38:40.646700 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:40.647695 master-1 kubenswrapper[4771]: I1011 10:38:40.646723 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:40.653845 master-1 kubenswrapper[4771]: I1011 10:38:40.653783 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:40.654103 master-1 kubenswrapper[4771]: I1011 10:38:40.654047 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:38:41.482120 master-2 kubenswrapper[4776]: I1011 10:38:41.481973 4776 generic.go:334] "Generic (PLEG): container finished" podID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerID="4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" exitCode=0
Oct 11 10:38:41.482120 master-2 kubenswrapper[4776]: I1011 10:38:41.482025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerDied","Data":"4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a"}
Oct 11 10:38:41.535704 master-2 kubenswrapper[4776]: I1011 10:38:41.535627 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"
Oct 11 10:38:41.602631 master-2 kubenswrapper[4776]: I1011 10:38:41.602544 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.602631 master-2 kubenswrapper[4776]: I1011 10:38:41.602634 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602711 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602757 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602795 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602849 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602892 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602972 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603040 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603073 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603111 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603177 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") "
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603501 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603503 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603607 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603629 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603752 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603783 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603893 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:38:41.605807 master-2 kubenswrapper[4776]: I1011 10:38:41.605749 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.605946 master-2 kubenswrapper[4776]: I1011 10:38:41.605818 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.606131 master-2 kubenswrapper[4776]: I1011 10:38:41.606051 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.606568 master-2 kubenswrapper[4776]: I1011 10:38:41.606513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.606820 master-2 kubenswrapper[4776]: I1011 10:38:41.606768 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.606916 master-2 kubenswrapper[4776]: I1011 10:38:41.606862 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.607969 master-2 kubenswrapper[4776]: I1011 10:38:41.607909 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:38:41.608081 master-2 kubenswrapper[4776]: I1011 10:38:41.607963 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8" (OuterVolumeSpecName: "kube-api-access-jfkq8") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "kube-api-access-jfkq8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705223 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705293 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705314 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705334 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705354 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705372 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") on node \"master-2\" 
DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705391 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705410 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705427 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705448 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.765591 master-1 kubenswrapper[4771]: I1011 10:38:41.765528 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 11 10:38:41.766610 master-1 kubenswrapper[4771]: I1011 10:38:41.766407 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.792963 master-1 kubenswrapper[4771]: I1011 10:38:41.792885 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:41.828765 master-1 kubenswrapper[4771]: I1011 10:38:41.828628 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.828765 master-1 kubenswrapper[4771]: I1011 10:38:41.828769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.829334 master-1 kubenswrapper[4771]: I1011 10:38:41.828843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.929881 master-1 kubenswrapper[4771]: I1011 10:38:41.929787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.929881 master-1 kubenswrapper[4771]: I1011 
10:38:41.929882 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.930321 master-1 kubenswrapper[4771]: I1011 10:38:41.929924 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.930321 master-1 kubenswrapper[4771]: I1011 10:38:41.929980 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:41.930321 master-1 kubenswrapper[4771]: I1011 10:38:41.929999 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:42.324612 master-1 kubenswrapper[4771]: I1011 10:38:42.324540 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 11 10:38:42.492595 master-2 kubenswrapper[4776]: I1011 10:38:42.492486 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" 
event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerDied","Data":"fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e"} Oct 11 10:38:42.493493 master-2 kubenswrapper[4776]: I1011 10:38:42.492624 4776 scope.go:117] "RemoveContainer" containerID="4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" Oct 11 10:38:42.493493 master-2 kubenswrapper[4776]: I1011 10:38:42.493105 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:38:42.496741 master-2 kubenswrapper[4776]: I1011 10:38:42.496579 4776 generic.go:334] "Generic (PLEG): container finished" podID="cc095688-9188-4472-9c26-d4d286e5ef06" containerID="6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" exitCode=0 Oct 11 10:38:42.496741 master-2 kubenswrapper[4776]: I1011 10:38:42.496636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168"} Oct 11 10:38:42.581791 master-1 kubenswrapper[4771]: I1011 10:38:42.581584 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:38:42.581791 master-1 kubenswrapper[4771]: I1011 10:38:42.581687 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:38:42.695065 master-1 kubenswrapper[4771]: I1011 10:38:42.694814 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access\") pod \"installer-5-master-1\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:42.696469 master-1 kubenswrapper[4771]: I1011 10:38:42.695632 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:38:42.710059 master-2 kubenswrapper[4776]: I1011 10:38:42.710007 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:38:42.722050 master-2 kubenswrapper[4776]: I1011 10:38:42.721990 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:42.734016 master-2 kubenswrapper[4776]: I1011 10:38:42.733959 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:42.806976 master-1 kubenswrapper[4771]: I1011 10:38:42.806860 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:38:43.032062 master-2 kubenswrapper[4776]: I1011 10:38:43.032033 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:38:43.123368 master-2 kubenswrapper[4776]: I1011 10:38:43.123297 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123368 master-2 kubenswrapper[4776]: I1011 10:38:43.123371 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123448 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123519 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123602 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.124559 master-2 kubenswrapper[4776]: I1011 10:38:43.124516 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.124559 master-2 kubenswrapper[4776]: I1011 10:38:43.124524 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.124667 master-2 kubenswrapper[4776]: I1011 10:38:43.124606 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:43.124995 master-2 kubenswrapper[4776]: I1011 10:38:43.124957 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.127089 master-2 kubenswrapper[4776]: I1011 10:38:43.127054 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.127391 master-2 kubenswrapper[4776]: I1011 10:38:43.127337 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.128088 master-2 kubenswrapper[4776]: I1011 10:38:43.128034 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.128355 master-2 kubenswrapper[4776]: I1011 10:38:43.128315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv" (OuterVolumeSpecName: "kube-api-access-54fhv") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "kube-api-access-54fhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:43.172895 master-1 kubenswrapper[4771]: I1011 10:38:43.171549 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 11 10:38:43.178656 master-1 kubenswrapper[4771]: W1011 10:38:43.178578 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod04d0b40e_b6ae_4466_a0af_fcb5ce630a97.slice/crio-bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b WatchSource:0}: Error finding container bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b: Status 404 returned error can't find the container with id bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225153 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 
kubenswrapper[4776]: I1011 10:38:43.225192 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225202 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225212 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225222 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225231 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225239 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225247 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.305289 
master-0 systemd[1]: Starting Kubernetes Kubelet... Oct 11 10:38:43.503427 master-2 kubenswrapper[4776]: I1011 10:38:43.503339 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerStarted","Data":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} Oct 11 10:38:43.503427 master-2 kubenswrapper[4776]: I1011 10:38:43.503386 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerStarted","Data":"3717643475eebdbec50aa27932ca525c2e2f047c2a23862ba4394759fc5478d9"} Oct 11 10:38:43.507334 master-2 kubenswrapper[4776]: I1011 10:38:43.507242 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"38caa553aa3028fefa0c3bd77280e5deedf30358e11b27817863ca0e8b11f26f"} Oct 11 10:38:43.507334 master-2 kubenswrapper[4776]: I1011 10:38:43.507335 4776 scope.go:117] "RemoveContainer" containerID="6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" Oct 11 10:38:43.507334 master-2 kubenswrapper[4776]: I1011 10:38:43.507335 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:38:43.521656 master-2 kubenswrapper[4776]: I1011 10:38:43.521588 4776 scope.go:117] "RemoveContainer" containerID="890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e" Oct 11 10:38:43.565073 master-2 kubenswrapper[4776]: I1011 10:38:43.565014 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76f8bc4746-5jp5k" podStartSLOduration=36.564993715 podStartE2EDuration="36.564993715s" podCreationTimestamp="2025-10-11 10:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:43.535405749 +0000 UTC m=+758.319832458" watchObservedRunningTime="2025-10-11 10:38:43.564993715 +0000 UTC m=+758.349420424" Oct 11 10:38:43.574692 master-2 kubenswrapper[4776]: I1011 10:38:43.566922 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:38:43.580783 master-2 kubenswrapper[4776]: I1011 10:38:43.577548 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:38:43.710352 master-2 kubenswrapper[4776]: I1011 10:38:43.710257 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"] Oct 11 10:38:43.710609 master-2 kubenswrapper[4776]: E1011 10:38:43.710572 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="fix-audit-permissions" Oct 11 10:38:43.710609 master-2 kubenswrapper[4776]: I1011 10:38:43.710601 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="fix-audit-permissions" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: E1011 10:38:43.710625 4776 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: I1011 10:38:43.710635 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: E1011 10:38:43.710648 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: I1011 10:38:43.710657 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710865 master-2 kubenswrapper[4776]: I1011 10:38:43.710834 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710898 master-2 kubenswrapper[4776]: I1011 10:38:43.710866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.711472 master-2 kubenswrapper[4776]: I1011 10:38:43.711443 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.715309 master-2 kubenswrapper[4776]: I1011 10:38:43.715261 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:38:43.715423 master-2 kubenswrapper[4776]: I1011 10:38:43.715391 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 10:38:43.715542 master-2 kubenswrapper[4776]: I1011 10:38:43.715506 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:38:43.715755 master-2 kubenswrapper[4776]: I1011 10:38:43.715707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 10:38:43.715989 master-2 kubenswrapper[4776]: I1011 10:38:43.715959 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 10:38:43.716031 master-2 kubenswrapper[4776]: I1011 10:38:43.715957 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 10:38:43.716091 master-2 kubenswrapper[4776]: I1011 10:38:43.716068 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 10:38:43.716132 master-2 kubenswrapper[4776]: I1011 10:38:43.715707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:38:43.716568 master-2 kubenswrapper[4776]: I1011 10:38:43.716534 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 10:38:43.717553 master-2 kubenswrapper[4776]: I1011 10:38:43.717510 4776 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 10:38:43.718238 master-2 kubenswrapper[4776]: I1011 10:38:43.718212 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 10:38:43.718238 master-2 kubenswrapper[4776]: I1011 10:38:43.718227 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:38:43.728696 master-2 kubenswrapper[4776]: I1011 10:38:43.728535 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"] Oct 11 10:38:43.731155 master-2 kubenswrapper[4776]: I1011 10:38:43.731098 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:38:43.740059 master-2 kubenswrapper[4776]: I1011 10:38:43.739862 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:38:43.803014 master-1 kubenswrapper[4771]: I1011 10:38:43.802954 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"04d0b40e-b6ae-4466-a0af-fcb5ce630a97","Type":"ContainerStarted","Data":"3d3a7650ee6f21f1edc22785fe9fc463251f973399b34912c74a0d533d0b5e22"} Oct 11 10:38:43.803545 master-1 kubenswrapper[4771]: I1011 10:38:43.803506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"04d0b40e-b6ae-4466-a0af-fcb5ce630a97","Type":"ContainerStarted","Data":"bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b"} Oct 11 10:38:43.830700 master-1 kubenswrapper[4771]: I1011 10:38:43.830601 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-1" 
podStartSLOduration=2.830584633 podStartE2EDuration="2.830584633s" podCreationTimestamp="2025-10-11 10:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:43.824179597 +0000 UTC m=+755.798406048" watchObservedRunningTime="2025-10-11 10:38:43.830584633 +0000 UTC m=+755.804811074" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833451 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833502 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833717 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833873 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833905 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") 
pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833945 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833993 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.834027 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.834060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935690 master-2 kubenswrapper[4776]: I1011 10:38:43.935622 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935690 master-2 kubenswrapper[4776]: I1011 10:38:43.935688 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935720 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935754 4776 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935798 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935819 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935861 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935918 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935950 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936010 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936027 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936046 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936977 master-2 kubenswrapper[4776]: I1011 10:38:43.936843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.937103 master-2 kubenswrapper[4776]: I1011 10:38:43.937078 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.937945 master-2 kubenswrapper[4776]: I1011 10:38:43.937926 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" 
(UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.938712 master-2 kubenswrapper[4776]: I1011 10:38:43.938655 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939303 master-2 kubenswrapper[4776]: I1011 10:38:43.939263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939400 master-2 kubenswrapper[4776]: I1011 10:38:43.939344 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939621 master-2 kubenswrapper[4776]: I1011 10:38:43.939594 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940004 master-2 kubenswrapper[4776]: I1011 10:38:43.939990 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940127 master-2 kubenswrapper[4776]: I1011 10:38:43.940041 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940775 master-2 kubenswrapper[4776]: I1011 10:38:43.940743 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.941343 master-2 kubenswrapper[4776]: I1011 10:38:43.941297 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 11 10:38:43.943081 master-0 kubenswrapper[4790]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 11 10:38:43.945285 master-0 kubenswrapper[4790]: I1011 10:38:43.944067 4790 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 11 10:38:43.948423 master-0 kubenswrapper[4790]: W1011 10:38:43.948383 4790 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:38:43.948423 master-0 kubenswrapper[4790]: W1011 10:38:43.948411 4790 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:38:43.948423 master-0 kubenswrapper[4790]: W1011 10:38:43.948420 4790 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:38:43.948423 master-0 kubenswrapper[4790]: W1011 10:38:43.948427 4790 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948435 4790 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948443 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948450 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948456 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948462 4790 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948470 4790 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948476 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948489 4790 feature_gate.go:330] unrecognized 
feature gate: BuildCSIVolumes Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948496 4790 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948502 4790 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948508 4790 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948515 4790 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948521 4790 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948527 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948533 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948540 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948547 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948553 4790 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948560 4790 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:38:43.948556 master-0 kubenswrapper[4790]: W1011 10:38:43.948566 4790 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948573 4790 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 
10:38:43.948579 4790 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948586 4790 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948592 4790 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948598 4790 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948605 4790 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948612 4790 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948619 4790 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948625 4790 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948634 4790 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948643 4790 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948651 4790 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948659 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948666 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948674 4790 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948681 4790 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948689 4790 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948696 4790 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:38:43.949103 master-0 kubenswrapper[4790]: W1011 10:38:43.948728 4790 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948737 4790 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948744 4790 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948751 4790 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948758 4790 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948766 4790 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948774 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948779 4790 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948784 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948792 4790 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948797 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948803 4790 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948808 4790 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948813 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:38:43.949540 master-0 
kubenswrapper[4790]: W1011 10:38:43.948824 4790 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948830 4790 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948836 4790 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948843 4790 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948854 4790 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948866 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:38:43.949540 master-0 kubenswrapper[4790]: W1011 10:38:43.948873 4790 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948884 4790 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948899 4790 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948905 4790 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948913 4790 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948920 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948927 4790 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948934 4790 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948940 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: W1011 10:38:43.948946 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949782 4790 flags.go:64] FLAG: --address="0.0.0.0"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949804 4790 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949818 4790 flags.go:64] FLAG: --anonymous-auth="true"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949828 4790 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949837 4790 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949845 4790 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949854 4790 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949862 4790 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949869 4790 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949875 4790 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949882 4790 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 11 10:38:43.949979 master-0 kubenswrapper[4790]: I1011 10:38:43.949890 4790 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949896 4790 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949902 4790 flags.go:64] FLAG: --cgroup-root=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949908 4790 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949915 4790 flags.go:64] FLAG: --client-ca-file=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949922 4790 flags.go:64] FLAG: --cloud-config=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949927 4790 flags.go:64] FLAG: --cloud-provider=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949933 4790 flags.go:64] FLAG: --cluster-dns="[]"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949942 4790 flags.go:64] FLAG: --cluster-domain=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949948 4790 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949954 4790 flags.go:64] FLAG: --config-dir=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949962 4790 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949968 4790 flags.go:64] FLAG: --container-log-max-files="5"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949978 4790 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949986 4790 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.949993 4790 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950001 4790 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950009 4790 flags.go:64] FLAG: --contention-profiling="false"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950017 4790 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950025 4790 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950033 4790 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950041 4790 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950052 4790 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950060 4790 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950067 4790 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 11 10:38:43.950579 master-0 kubenswrapper[4790]: I1011 10:38:43.950075 4790 flags.go:64] FLAG: --enable-load-reader="false"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950083 4790 flags.go:64] FLAG: --enable-server="true"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950091 4790 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950103 4790 flags.go:64] FLAG: --event-burst="100"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950110 4790 flags.go:64] FLAG: --event-qps="50"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950116 4790 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950123 4790 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950131 4790 flags.go:64] FLAG: --eviction-hard=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950141 4790 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950149 4790 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950156 4790 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950166 4790 flags.go:64] FLAG: --eviction-soft=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950174 4790 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950182 4790 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950189 4790 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950197 4790 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950205 4790 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950213 4790 flags.go:64] FLAG: --fail-swap-on="true"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950220 4790 flags.go:64] FLAG: --feature-gates=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950230 4790 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950241 4790 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950251 4790 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950260 4790 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950271 4790 flags.go:64] FLAG: --healthz-port="10248"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950280 4790 flags.go:64] FLAG: --help="false"
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950288 4790 flags.go:64] FLAG: --hostname-override=""
Oct 11 10:38:43.951121 master-0 kubenswrapper[4790]: I1011 10:38:43.950295 4790 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950304 4790 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950312 4790 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950319 4790 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950327 4790 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950335 4790 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950343 4790 flags.go:64] FLAG: --image-service-endpoint=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950351 4790 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950360 4790 flags.go:64] FLAG: --kube-api-burst="100"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950367 4790 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950374 4790 flags.go:64] FLAG: --kube-api-qps="50"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950381 4790 flags.go:64] FLAG: --kube-reserved=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950387 4790 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950394 4790 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950400 4790 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950406 4790 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950412 4790 flags.go:64] FLAG: --lock-file=""
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950418 4790 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950425 4790 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950431 4790 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950442 4790 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950450 4790 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950456 4790 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950462 4790 flags.go:64] FLAG: --logging-format="text"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950468 4790 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 11 10:38:43.951678 master-0 kubenswrapper[4790]: I1011 10:38:43.950474 4790 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950483 4790 flags.go:64] FLAG: --manifest-url=""
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950489 4790 flags.go:64] FLAG: --manifest-url-header=""
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950497 4790 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950504 4790 flags.go:64] FLAG: --max-open-files="1000000"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950511 4790 flags.go:64] FLAG: --max-pods="110"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950518 4790 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950524 4790 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950530 4790 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950537 4790 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950543 4790 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950549 4790 flags.go:64] FLAG: --node-ip="192.168.34.10"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950556 4790 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950576 4790 flags.go:64] FLAG: --node-status-max-images="50"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950582 4790 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950589 4790 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950595 4790 flags.go:64] FLAG: --pod-cidr=""
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950601 4790 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d66b9dbe1d071d7372c477a78835fb65b48ea82db00d23e9086af5cfcb194ad"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950611 4790 flags.go:64] FLAG: --pod-manifest-path=""
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950617 4790 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950624 4790 flags.go:64] FLAG: --pods-per-core="0"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950630 4790 flags.go:64] FLAG: --port="10250"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950637 4790 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950643 4790 flags.go:64] FLAG: --provider-id=""
Oct 11 10:38:43.952283 master-0 kubenswrapper[4790]: I1011 10:38:43.950650 4790 flags.go:64] FLAG: --qos-reserved=""
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950656 4790 flags.go:64] FLAG: --read-only-port="10255"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950662 4790 flags.go:64] FLAG: --register-node="true"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950668 4790 flags.go:64] FLAG: --register-schedulable="true"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950675 4790 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950687 4790 flags.go:64] FLAG: --registry-burst="10"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950695 4790 flags.go:64] FLAG: --registry-qps="5"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950703 4790 flags.go:64] FLAG: --reserved-cpus=""
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950735 4790 flags.go:64] FLAG: --reserved-memory=""
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950745 4790 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950753 4790 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950760 4790 flags.go:64] FLAG: --rotate-certificates="false"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950768 4790 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950775 4790 flags.go:64] FLAG: --runonce="false"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950782 4790 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950789 4790 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950797 4790 flags.go:64] FLAG: --seccomp-default="false"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950805 4790 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950813 4790 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950820 4790 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950827 4790 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950834 4790 flags.go:64] FLAG: --storage-driver-password="root"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950843 4790 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950850 4790 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950856 4790 flags.go:64] FLAG: --storage-driver-user="root"
Oct 11 10:38:43.952807 master-0 kubenswrapper[4790]: I1011 10:38:43.950863 4790 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950869 4790 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950876 4790 flags.go:64] FLAG: --system-cgroups=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950882 4790 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950893 4790 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950898 4790 flags.go:64] FLAG: --tls-cert-file=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950904 4790 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950912 4790 flags.go:64] FLAG: --tls-min-version=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950918 4790 flags.go:64] FLAG: --tls-private-key-file=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950925 4790 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950931 4790 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950937 4790 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950943 4790 flags.go:64] FLAG: --v="2"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950958 4790 flags.go:64] FLAG: --version="false"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950967 4790 flags.go:64] FLAG: --vmodule=""
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950975 4790 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: I1011 10:38:43.950982 4790 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951147 4790 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951155 4790 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951162 4790 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951168 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951174 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951179 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951184 4790 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 11 10:38:43.953319 master-0 kubenswrapper[4790]: W1011 10:38:43.951189 4790 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951195 4790 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951200 4790 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951205 4790 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951211 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951219 4790 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951224 4790 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951229 4790 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951234 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951240 4790 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951245 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951250 4790 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951257 4790 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951263 4790 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951269 4790 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951274 4790 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951279 4790 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951285 4790 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951290 4790 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951295 4790 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 11 10:38:43.953871 master-0 kubenswrapper[4790]: W1011 10:38:43.951301 4790 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951306 4790 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951311 4790 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951316 4790 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951323 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951328 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951333 4790 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951339 4790 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951345 4790 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951351 4790 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951358 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951365 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951373 4790 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951379 4790 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951386 4790 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951395 4790 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951404 4790 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951416 4790 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951423 4790 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 11 10:38:43.954322 master-0 kubenswrapper[4790]: W1011 10:38:43.951428 4790 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951434 4790 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951439 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951446 4790 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951453 4790 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951458 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951464 4790 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951470 4790 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951475 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951480 4790 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951486 4790 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951491 4790 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951496 4790 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951501 4790 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951507 4790 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951512 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951516 4790 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951523 4790 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951529 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951534 4790 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 10:38:43.954762 master-0 kubenswrapper[4790]: W1011 10:38:43.951539 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: W1011 10:38:43.951544 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: W1011 10:38:43.951549 4790 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: W1011 10:38:43.951555 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: W1011 10:38:43.951561 4790 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: W1011 10:38:43.951566 4790 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 11 10:38:43.955212 master-0 kubenswrapper[4790]: I1011 10:38:43.952358 4790 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 11 10:38:43.956649 master-2 kubenswrapper[4776]: I1011 10:38:43.956601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"
Oct 11 10:38:43.965414 master-0 kubenswrapper[4790]: I1011 10:38:43.965331 4790 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Oct 11 10:38:43.965414 master-0 kubenswrapper[4790]: I1011 10:38:43.965397 4790 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 11 10:38:43.965622 master-0 kubenswrapper[4790]: W1011 10:38:43.965588 4790 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 11 10:38:43.965622 master-0 kubenswrapper[4790]: W1011 10:38:43.965612 4790 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 11 10:38:43.965622 master-0 kubenswrapper[4790]: W1011 10:38:43.965623 4790 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965634 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965643 4790 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965651 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965659 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965667 4790 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965675 4790 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965684 4790 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965691 4790 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965700 4790 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965732 4790 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965743 4790 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 11 10:38:43.965700 master-0 kubenswrapper[4790]: W1011 10:38:43.965757 4790 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965767 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965776 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965785 4790 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965796 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965804 4790 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965814 4790 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965822 4790 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965831 4790 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965839 4790 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965848 4790 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965856 4790 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965865 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965873 4790 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965884 4790 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965893 4790 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965901 4790 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965909 4790 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965917 4790 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:38:43.966046 master-0 kubenswrapper[4790]: W1011 10:38:43.965926 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965935 4790 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965943 4790 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965951 4790 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965959 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965967 4790 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965975 4790 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965983 4790 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.965993 4790 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966002 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966011 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966020 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966027 4790 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966035 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966042 4790 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966053 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966062 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966069 4790 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966077 4790 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966085 4790 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:38:43.966537 master-0 kubenswrapper[4790]: W1011 10:38:43.966093 4790 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966103 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:38:43.967197 master-0 
kubenswrapper[4790]: W1011 10:38:43.966110 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966119 4790 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966126 4790 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966134 4790 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966142 4790 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966150 4790 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966157 4790 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966165 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966172 4790 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966180 4790 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966188 4790 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966196 4790 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966205 4790 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966213 4790 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966221 4790 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966232 4790 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:38:43.967197 master-0 kubenswrapper[4790]: W1011 10:38:43.966241 4790 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:38:43.967628 master-0 kubenswrapper[4790]: I1011 10:38:43.966256 4790 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:38:43.970260 master-0 kubenswrapper[4790]: W1011 10:38:43.970186 4790 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:38:43.970260 master-0 kubenswrapper[4790]: W1011 10:38:43.970252 4790 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:38:43.970260 master-0 kubenswrapper[4790]: W1011 10:38:43.970262 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970271 4790 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970285 4790 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970301 4790 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970310 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970319 4790 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970327 4790 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970335 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970343 4790 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970351 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970359 4790 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970366 4790 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970376 4790 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970384 4790 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:38:43.970371 master-0 kubenswrapper[4790]: W1011 10:38:43.970392 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970400 4790 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 
10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970409 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970417 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970425 4790 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970432 4790 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970440 4790 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970451 4790 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970461 4790 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970471 4790 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970483 4790 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970492 4790 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970500 4790 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970509 4790 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970518 4790 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970525 4790 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970533 4790 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970541 4790 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970548 4790 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:38:43.970875 master-0 kubenswrapper[4790]: W1011 10:38:43.970558 4790 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970568 4790 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970578 4790 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970587 4790 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970595 4790 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970603 4790 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970611 4790 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970618 4790 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970626 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 
10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970634 4790 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970641 4790 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970650 4790 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970658 4790 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970666 4790 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970674 4790 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970682 4790 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970690 4790 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970698 4790 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970734 4790 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970743 4790 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:38:43.971462 master-0 kubenswrapper[4790]: W1011 10:38:43.970751 4790 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970760 4790 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970768 4790 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970775 4790 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970783 4790 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970794 4790 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970804 4790 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970813 4790 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970822 4790 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970830 4790 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970837 4790 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970846 4790 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970853 4790 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970861 4790 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970868 4790 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970876 4790 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:38:43.971935 master-0 kubenswrapper[4790]: W1011 10:38:43.970884 4790 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:38:43.972285 master-0 kubenswrapper[4790]: I1011 10:38:43.970898 4790 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:38:43.972285 master-0 kubenswrapper[4790]: I1011 10:38:43.971240 4790 server.go:940] "Client rotation is on, will bootstrap in background" Oct 11 10:38:43.975142 master-0 kubenswrapper[4790]: I1011 10:38:43.975104 4790 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Oct 11 10:38:43.976780 master-0 kubenswrapper[4790]: I1011 10:38:43.976748 4790 server.go:997] "Starting client certificate rotation" Oct 11 10:38:43.976814 master-0 kubenswrapper[4790]: I1011 10:38:43.976789 4790 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 11 10:38:43.977041 master-0 kubenswrapper[4790]: I1011 10:38:43.976992 4790 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Oct 11 10:38:44.004104 master-0 kubenswrapper[4790]: I1011 10:38:44.004001 4790 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:38:44.006921 master-0 kubenswrapper[4790]: I1011 10:38:44.006830 4790 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:38:44.024389 master-0 kubenswrapper[4790]: I1011 10:38:44.024161 4790 log.go:25] "Validated CRI v1 runtime API" Oct 11 10:38:44.031157 master-0 kubenswrapper[4790]: I1011 10:38:44.031103 4790 log.go:25] "Validated CRI v1 image API" Oct 11 10:38:44.034209 master-0 kubenswrapper[4790]: I1011 10:38:44.034149 4790 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 11 10:38:44.040884 master-0 kubenswrapper[4790]: I1011 10:38:44.040820 4790 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 7ed13c62-ecfa-44fd-93db-2cdfc620f24a:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Oct 11 10:38:44.040992 master-0 kubenswrapper[4790]: I1011 10:38:44.040874 4790 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Oct 11 10:38:44.064589 master-2 kubenswrapper[4776]: I1011 10:38:44.064133 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:44.068434 master-2 kubenswrapper[4776]: I1011 10:38:44.068369 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" path="/var/lib/kubelet/pods/cc095688-9188-4472-9c26-d4d286e5ef06/volumes" Oct 11 10:38:44.069747 master-2 kubenswrapper[4776]: I1011 10:38:44.069708 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" path="/var/lib/kubelet/pods/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369/volumes" Oct 11 10:38:44.072217 master-0 kubenswrapper[4790]: I1011 10:38:44.072069 4790 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 11 10:38:44.081315 master-0 kubenswrapper[4790]: I1011 10:38:44.080831 4790 manager.go:217] Machine: {Timestamp:2025-10-11 10:38:44.078879165 +0000 UTC m=+0.633339537 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ef9a84db9c494bb2987b8bfc3ab9d214 SystemUUID:ef9a84db-9c49-4bb2-987b-8bfc3ab9d214 BootID:e436c54c-2677-4e8c-8717-eb2ef57d6e68 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 
Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:05:d1:c8 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:3e:05:d1:c8 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:19:d3:60 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:cb:37:ae Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:72:f3:09:61:c2:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} 
{Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 
Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 11 10:38:44.081315 master-0 kubenswrapper[4790]: I1011 10:38:44.081282 4790 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Oct 11 10:38:44.081534 master-0 kubenswrapper[4790]: I1011 10:38:44.081486 4790 manager.go:233] Version: {KernelVersion:5.14.0-427.91.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202509241235-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 11 10:38:44.082009 master-0 kubenswrapper[4790]: I1011 10:38:44.081969 4790 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 11 10:38:44.082356 master-0 kubenswrapper[4790]: I1011 10:38:44.082288 4790 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 11 10:38:44.082680 master-0 kubenswrapper[4790]: I1011 10:38:44.082352 4790 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 11 10:38:44.083859 master-0 kubenswrapper[4790]: I1011 10:38:44.083823 4790 topology_manager.go:138] "Creating topology manager with none policy"
Oct 11 10:38:44.083859 master-0 kubenswrapper[4790]: I1011 10:38:44.083859 4790 container_manager_linux.go:303] "Creating device plugin manager"
Oct 11 10:38:44.083982 master-0 kubenswrapper[4790]: I1011 10:38:44.083885 4790 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 11 10:38:44.083982 master-0 kubenswrapper[4790]: I1011 10:38:44.083911 4790 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 11 10:38:44.085816 master-0 kubenswrapper[4790]: I1011 10:38:44.085777 4790 state_mem.go:36] "Initialized new in-memory state store"
Oct 11 10:38:44.085953 master-0 kubenswrapper[4790]: I1011 10:38:44.085921 4790 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 11 10:38:44.091314 master-0 kubenswrapper[4790]: I1011 10:38:44.091276 4790 kubelet.go:418] "Attempting to sync node with API server"
Oct 11 10:38:44.091314 master-0 kubenswrapper[4790]: I1011 10:38:44.091316 4790 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 11 10:38:44.091514 master-0 kubenswrapper[4790]: I1011 10:38:44.091482 4790 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 11 10:38:44.091514 master-0 kubenswrapper[4790]: I1011 10:38:44.091511 4790 kubelet.go:324] "Adding apiserver pod source"
Oct 11 10:38:44.091651 master-0 kubenswrapper[4790]: I1011 10:38:44.091533 4790 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 11 10:38:44.098532 master-0 kubenswrapper[4790]: I1011 10:38:44.098460 4790 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.12-3.rhaos4.18.gitdc59c78.el9" apiVersion="v1"
Oct 11 10:38:44.102667 master-0 kubenswrapper[4790]: I1011 10:38:44.102607 4790 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 11 10:38:44.103040 master-0 kubenswrapper[4790]: I1011 10:38:44.102967 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 11 10:38:44.103040 master-0 kubenswrapper[4790]: I1011 10:38:44.103017 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 11 10:38:44.103040 master-0 kubenswrapper[4790]: I1011 10:38:44.103030 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 11 10:38:44.103040 master-0 kubenswrapper[4790]: I1011 10:38:44.103040 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 11 10:38:44.103040 master-0 kubenswrapper[4790]: I1011 10:38:44.103055 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103067 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103077 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103094 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103105 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103115 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 11 10:38:44.103361 master-0 kubenswrapper[4790]: I1011 10:38:44.103132 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 11 10:38:44.103805 master-0 kubenswrapper[4790]: I1011 10:38:44.103765 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 11 10:38:44.105330 master-0 kubenswrapper[4790]: W1011 10:38:44.105260 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct 11 10:38:44.105486 master-0 kubenswrapper[4790]: W1011 10:38:44.105422 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct 11 10:38:44.105560 master-0 kubenswrapper[4790]: E1011 10:38:44.105465 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 11 10:38:44.105560 master-0 kubenswrapper[4790]: E1011 10:38:44.105501 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 11 10:38:44.105966 master-0 kubenswrapper[4790]: I1011 10:38:44.105922 4790 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 11 10:38:44.106617 master-0 kubenswrapper[4790]: I1011 10:38:44.106580 4790 server.go:1280] "Started kubelet"
Oct 11 10:38:44.106860 master-0 kubenswrapper[4790]: I1011 10:38:44.106777 4790 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 11 10:38:44.107028 master-0 kubenswrapper[4790]: I1011 10:38:44.106923 4790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 11 10:38:44.107104 master-0 kubenswrapper[4790]: I1011 10:38:44.107067 4790 server_v1.go:47] "podresources" method="list" useActivePods=true
Oct 11 10:38:44.107941 master-0 kubenswrapper[4790]: I1011 10:38:44.107882 4790 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 11 10:38:44.108449 master-0 systemd[1]: Started Kubernetes Kubelet.
Oct 11 10:38:44.112909 master-0 kubenswrapper[4790]: I1011 10:38:44.112855 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 11 10:38:44.112909 master-0 kubenswrapper[4790]: I1011 10:38:44.112904 4790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 11 10:38:44.114095 master-0 kubenswrapper[4790]: E1011 10:38:44.113986 4790 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Oct 11 10:38:44.115359 master-0 kubenswrapper[4790]: I1011 10:38:44.114592 4790 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 11 10:38:44.115359 master-0 kubenswrapper[4790]: I1011 10:38:44.114825 4790 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 11 10:38:44.115359 master-0 kubenswrapper[4790]: I1011 10:38:44.115368 4790 reconstruct.go:97] "Volume reconstruction finished"
Oct 11 10:38:44.115686 master-0 kubenswrapper[4790]: I1011 10:38:44.115411 4790 reconciler.go:26] "Reconciler: start to sync state"
Oct 11 10:38:44.116311 master-0 kubenswrapper[4790]: I1011 10:38:44.116121 4790 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Oct 11 10:38:44.117001 master-0 kubenswrapper[4790]: I1011 10:38:44.116927 4790 server.go:449] "Adding debug handlers to kubelet server"
Oct 11 10:38:44.119410 master-0 kubenswrapper[4790]: E1011 10:38:44.119299 4790 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Oct 11 10:38:44.119821 master-0 kubenswrapper[4790]: I1011 10:38:44.119751 4790 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:38:44.120057 master-0 kubenswrapper[4790]: W1011 10:38:44.119843 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct 11 10:38:44.120345 master-0 kubenswrapper[4790]: E1011 10:38:44.120279 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 11 10:38:44.121109 master-0 kubenswrapper[4790]: E1011 10:38:44.119946 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d6996696db8b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.106541239 +0000 UTC m=+0.661001541,LastTimestamp:2025-10-11 10:38:44.106541239 +0000 UTC m=+0.661001541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.126035 master-0 kubenswrapper[4790]: I1011 10:38:44.125986 4790 factory.go:55] Registering systemd factory
Oct 11 10:38:44.126035 master-0 kubenswrapper[4790]: I1011 10:38:44.126036 4790 factory.go:221] Registration of the systemd container factory successfully
Oct 11 10:38:44.127040 master-0 kubenswrapper[4790]: I1011 10:38:44.126995 4790 factory.go:153] Registering CRI-O factory
Oct 11 10:38:44.127040 master-0 kubenswrapper[4790]: I1011 10:38:44.127031 4790 factory.go:221] Registration of the crio container factory successfully
Oct 11 10:38:44.127213 master-0 kubenswrapper[4790]: I1011 10:38:44.127138 4790 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 11 10:38:44.127213 master-0 kubenswrapper[4790]: I1011 10:38:44.127187 4790 factory.go:103] Registering Raw factory
Oct 11 10:38:44.127213 master-0 kubenswrapper[4790]: I1011 10:38:44.127213 4790 manager.go:1196] Started watching for new ooms in manager
Oct 11 10:38:44.128338 master-0 kubenswrapper[4790]: I1011 10:38:44.128281 4790 manager.go:319] Starting recovery of all containers
Oct 11 10:38:44.136531 master-0 kubenswrapper[4790]: E1011 10:38:44.136478 4790 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Oct 11 10:38:44.154937 master-0 kubenswrapper[4790]: I1011 10:38:44.154890 4790 manager.go:324] Recovery completed
Oct 11 10:38:44.167532 master-0 kubenswrapper[4790]: I1011 10:38:44.167493 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:44.169143 master-0 kubenswrapper[4790]: I1011 10:38:44.169081 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:44.169143 master-0 kubenswrapper[4790]: I1011 10:38:44.169133 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:44.169143 master-0 kubenswrapper[4790]: I1011 10:38:44.169147 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:44.171274 master-0 kubenswrapper[4790]: I1011 10:38:44.171229 4790 cpu_manager.go:225] "Starting CPU manager" policy="none"
Oct 11 10:38:44.171274 master-0 kubenswrapper[4790]: I1011 10:38:44.171252 4790 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 11 10:38:44.171274 master-0 kubenswrapper[4790]: I1011 10:38:44.171278 4790 state_mem.go:36] "Initialized new in-memory state store"
Oct 11 10:38:44.172364 master-0 kubenswrapper[4790]: E1011 10:38:44.172081 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.175308 master-0 kubenswrapper[4790]: I1011 10:38:44.175272 4790 policy_none.go:49] "None policy: Start"
Oct 11 10:38:44.176929 master-0 kubenswrapper[4790]: I1011 10:38:44.176860 4790 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 11 10:38:44.177187 master-0 kubenswrapper[4790]: I1011 10:38:44.177014 4790 state_mem.go:35] "Initializing new in-memory state store"
Oct 11 10:38:44.180090 master-0 kubenswrapper[4790]: E1011 10:38:44.179915 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.188854 master-0 kubenswrapper[4790]: E1011 10:38:44.188755 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.192591 master-2 kubenswrapper[4776]: I1011 10:38:44.192508 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:38:44.192591 master-2 kubenswrapper[4776]: I1011 10:38:44.192574 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:38:44.214327 master-0 kubenswrapper[4790]: E1011 10:38:44.214232 4790 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Oct 11 10:38:44.251483 master-0 kubenswrapper[4790]: I1011 10:38:44.251320 4790 manager.go:334] "Starting Device Plugin manager"
Oct 11 10:38:44.251781 master-0 kubenswrapper[4790]: I1011 10:38:44.251493 4790 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 11 10:38:44.251781 master-0 kubenswrapper[4790]: I1011 10:38:44.251517 4790 server.go:79] "Starting device plugin registration server"
Oct 11 10:38:44.252197 master-0 kubenswrapper[4790]: I1011 10:38:44.252150 4790 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 11 10:38:44.252287 master-0 kubenswrapper[4790]: I1011 10:38:44.252180 4790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 11 10:38:44.253074 master-0 kubenswrapper[4790]: I1011 10:38:44.252995 4790 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 11 10:38:44.253387 master-0 kubenswrapper[4790]: I1011 10:38:44.253172 4790 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Oct 11 10:38:44.253387 master-0 kubenswrapper[4790]: I1011 10:38:44.253188 4790 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 11 10:38:44.255114 master-0 kubenswrapper[4790]: E1011 10:38:44.254955 4790 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Oct 11 10:38:44.267802 master-0 kubenswrapper[4790]: E1011 10:38:44.267555 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69967266ce36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.257082934 +0000 UTC m=+0.811543266,LastTimestamp:2025-10-11 10:38:44.257082934 +0000 UTC m=+0.811543266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.288145 master-0 kubenswrapper[4790]: I1011 10:38:44.287995 4790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 11 10:38:44.291128 master-0 kubenswrapper[4790]: I1011 10:38:44.291071 4790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 11 10:38:44.291240 master-0 kubenswrapper[4790]: I1011 10:38:44.291155 4790 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 11 10:38:44.291240 master-0 kubenswrapper[4790]: I1011 10:38:44.291198 4790 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 11 10:38:44.291362 master-0 kubenswrapper[4790]: E1011 10:38:44.291278 4790 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Oct 11 10:38:44.301419 master-0 kubenswrapper[4790]: W1011 10:38:44.301330 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct 11 10:38:44.301580 master-0 kubenswrapper[4790]: E1011 10:38:44.301429 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 11 10:38:44.328186 master-0 kubenswrapper[4790]: E1011 10:38:44.328052 4790 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Oct 11 10:38:44.353370 master-0 kubenswrapper[4790]: I1011 10:38:44.353279 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:44.355647 master-0 kubenswrapper[4790]: I1011 10:38:44.355595 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:44.355647 master-0 kubenswrapper[4790]: I1011 10:38:44.355651 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:44.355870 master-0 kubenswrapper[4790]: I1011 10:38:44.355663 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:44.355870 master-0 kubenswrapper[4790]: I1011 10:38:44.355723 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Oct 11 10:38:44.363626 master-0 kubenswrapper[4790]: E1011 10:38:44.363569 4790 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Oct 11 10:38:44.364027 master-0 kubenswrapper[4790]: E1011 10:38:44.363887 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.355632298 +0000 UTC m=+0.910092590,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.374550 master-0 kubenswrapper[4790]: E1011 10:38:44.374320 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.355658363 +0000 UTC m=+0.910118655,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.383372 master-0 kubenswrapper[4790]: E1011 10:38:44.383083 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.355668436 +0000 UTC m=+0.910128728,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.392214 master-0 kubenswrapper[4790]: I1011 10:38:44.392106 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Oct 11 10:38:44.392382 master-0 kubenswrapper[4790]: I1011 10:38:44.392249 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:44.394815 master-0 kubenswrapper[4790]: I1011 10:38:44.394765 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:44.394957 master-0 kubenswrapper[4790]: I1011 10:38:44.394830 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:44.394957 master-0 kubenswrapper[4790]: I1011 10:38:44.394849 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:44.395207 master-0 kubenswrapper[4790]: I1011 10:38:44.395162 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Oct 11 10:38:44.395302 master-0 kubenswrapper[4790]: I1011 10:38:44.395218 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:44.396911 master-0 kubenswrapper[4790]: I1011 10:38:44.396858 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:44.397030 master-0 kubenswrapper[4790]: I1011 10:38:44.396925 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:44.397030 master-0 kubenswrapper[4790]: I1011 10:38:44.396947 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:44.406115 master-0 kubenswrapper[4790]: E1011 10:38:44.405895 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.394806181 +0000 UTC m=+0.949266503,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.413640 master-0 kubenswrapper[4790]: E1011 10:38:44.413427 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.394842059 +0000 UTC m=+0.949302381,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.424564 master-0 kubenswrapper[4790]: E1011 10:38:44.424402 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.394858642 +0000 UTC m=+0.949318964,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.431455 master-0 kubenswrapper[4790]: E1011 10:38:44.431234 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.396899262 +0000 UTC m=+0.951359584,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.442489 master-0 kubenswrapper[4790]: E1011 10:38:44.442349 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.396938342 +0000 UTC m=+0.951398674,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.450291 master-0 kubenswrapper[4790]: E1011 10:38:44.450103 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.396957846 +0000 UTC m=+0.951418168,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.497446 master-2 kubenswrapper[4776]: I1011 10:38:44.497414 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"]
Oct 11 10:38:44.499401 master-2 kubenswrapper[4776]: W1011 10:38:44.499365 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d7c74c7_9652_4fe6_93c3_667ec676ce1c.slice/crio-b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2 WatchSource:0}: Error finding container b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2: Status 404 returned error can't find the container with id b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2
Oct 11 10:38:44.514170 master-2 kubenswrapper[4776]: I1011 10:38:44.514073 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" event={"ID":"6d7c74c7-9652-4fe6-93c3-667ec676ce1c","Type":"ContainerStarted","Data":"b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2"}
Oct 11 10:38:44.517529 master-0 kubenswrapper[4790]: I1011 10:38:44.517403 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Oct 11 10:38:44.517529 master-0 kubenswrapper[4790]: I1011 10:38:44.517489 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Oct 11 10:38:44.564797 master-0 kubenswrapper[4790]: I1011 10:38:44.564661 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:44.566686 master-0 kubenswrapper[4790]: I1011 10:38:44.566578 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:44.566847 master-0 kubenswrapper[4790]: I1011 10:38:44.566801 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:44.566847 master-0 kubenswrapper[4790]: I1011 10:38:44.566835 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:44.566980 master-0 kubenswrapper[4790]: I1011 10:38:44.566902 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Oct 11 10:38:44.575560 master-0 kubenswrapper[4790]: E1011 10:38:44.575487 4790 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Oct 11 10:38:44.575818 master-0 kubenswrapper[4790]: E1011 10:38:44.575615 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.566662264 +0000 UTC m=+1.121122636,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.586389 master-0 kubenswrapper[4790]: E1011 10:38:44.586171 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.566824158 +0000 UTC m=+1.121284490,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Oct 11 10:38:44.594748 master-0 kubenswrapper[4790]: E1011 10:38:44.594546 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\"
cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.566849374 +0000 UTC m=+1.121309706,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:44.618821 master-0 kubenswrapper[4790]: I1011 10:38:44.618682 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Oct 11 10:38:44.618821 master-0 kubenswrapper[4790]: I1011 10:38:44.618788 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Oct 11 10:38:44.619101 master-0 kubenswrapper[4790]: I1011 10:38:44.618886 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Oct 11 10:38:44.619101 
master-0 kubenswrapper[4790]: I1011 10:38:44.618964 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/117b8efe269c98124cf5022ab3c340a5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"117b8efe269c98124cf5022ab3c340a5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Oct 11 10:38:44.731055 master-0 kubenswrapper[4790]: I1011 10:38:44.730792 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Oct 11 10:38:44.738774 master-0 kubenswrapper[4790]: E1011 10:38:44.738651 4790 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 11 10:38:44.976239 master-0 kubenswrapper[4790]: I1011 10:38:44.976085 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:38:44.978993 master-0 kubenswrapper[4790]: I1011 10:38:44.978942 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Oct 11 10:38:44.979106 master-0 kubenswrapper[4790]: I1011 10:38:44.979089 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Oct 11 10:38:44.979221 master-0 kubenswrapper[4790]: I1011 10:38:44.979189 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Oct 11 10:38:44.979292 master-0 kubenswrapper[4790]: I1011 10:38:44.979257 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Oct 11 10:38:44.989581 master-0 kubenswrapper[4790]: E1011 10:38:44.989444 4790 kubelet_node_status.go:99] "Unable to register node with API 
server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Oct 11 10:38:44.990659 master-0 kubenswrapper[4790]: E1011 10:38:44.990478 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:44.979074842 +0000 UTC m=+1.533535154,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:44.999132 master-0 kubenswrapper[4790]: E1011 10:38:44.998996 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:44.979140695 +0000 UTC m=+1.533601007,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:45.007750 master-0 kubenswrapper[4790]: E1011 10:38:45.007542 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:44.97920904 +0000 UTC m=+1.533669342,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:45.128686 master-0 kubenswrapper[4790]: I1011 10:38:45.128538 4790 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:38:45.145394 master-0 kubenswrapper[4790]: W1011 10:38:45.145297 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:38:45.145394 master-0 kubenswrapper[4790]: E1011 10:38:45.145376 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster 
scope" logger="UnhandledError" Oct 11 10:38:45.420764 master-0 kubenswrapper[4790]: W1011 10:38:45.420356 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod117b8efe269c98124cf5022ab3c340a5.slice/crio-adfad26afad6055735ede8e6a1ce844b2d8bb91640a5c927f437898a49285a03 WatchSource:0}: Error finding container adfad26afad6055735ede8e6a1ce844b2d8bb91640a5c927f437898a49285a03: Status 404 returned error can't find the container with id adfad26afad6055735ede8e6a1ce844b2d8bb91640a5c927f437898a49285a03 Oct 11 10:38:45.431340 master-0 kubenswrapper[4790]: I1011 10:38:45.431290 4790 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:38:45.440112 master-0 kubenswrapper[4790]: E1011 10:38:45.439963 4790 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.186d6996b862ee69 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:117b8efe269c98124cf5022ab3c340a5,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:45.431234153 +0000 UTC m=+1.985694455,LastTimestamp:2025-10-11 10:38:45.431234153 +0000 UTC m=+1.985694455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:45.463213 master-0 kubenswrapper[4790]: W1011 
10:38:45.463125 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:38:45.463213 master-0 kubenswrapper[4790]: E1011 10:38:45.463209 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:38:45.522051 master-2 kubenswrapper[4776]: I1011 10:38:45.521993 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" event={"ID":"6d7c74c7-9652-4fe6-93c3-667ec676ce1c","Type":"ContainerStarted","Data":"9fc905dea7d19fa845edf55c43657fdffbe2e61a962e8c1c109ed88fc33e853c"} Oct 11 10:38:45.522967 master-2 kubenswrapper[4776]: I1011 10:38:45.522928 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:45.529265 master-2 kubenswrapper[4776]: I1011 10:38:45.529228 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:45.547473 master-0 kubenswrapper[4790]: E1011 10:38:45.547331 4790 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Oct 11 10:38:45.547473 master-0 kubenswrapper[4790]: W1011 10:38:45.547458 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group 
"storage.k8s.io" at the cluster scope Oct 11 10:38:45.547860 master-0 kubenswrapper[4790]: E1011 10:38:45.547516 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:38:45.553269 master-2 kubenswrapper[4776]: I1011 10:38:45.553172 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" podStartSLOduration=31.553147013 podStartE2EDuration="31.553147013s" podCreationTimestamp="2025-10-11 10:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:45.551521599 +0000 UTC m=+760.335948308" watchObservedRunningTime="2025-10-11 10:38:45.553147013 +0000 UTC m=+760.337573732" Oct 11 10:38:45.675247 master-0 kubenswrapper[4790]: W1011 10:38:45.675038 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:38:45.675247 master-0 kubenswrapper[4790]: E1011 10:38:45.675135 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:38:45.789988 master-0 kubenswrapper[4790]: I1011 10:38:45.789756 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:38:45.791875 master-0 
kubenswrapper[4790]: I1011 10:38:45.791814 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Oct 11 10:38:45.791875 master-0 kubenswrapper[4790]: I1011 10:38:45.791872 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Oct 11 10:38:45.791875 master-0 kubenswrapper[4790]: I1011 10:38:45.791886 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Oct 11 10:38:45.792205 master-0 kubenswrapper[4790]: I1011 10:38:45.791937 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Oct 11 10:38:45.799776 master-0 kubenswrapper[4790]: E1011 10:38:45.799654 4790 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Oct 11 10:38:45.799982 master-0 kubenswrapper[4790]: E1011 10:38:45.799789 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:45.791853047 +0000 UTC m=+2.346313349,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:45.804938 master-0 kubenswrapper[4790]: 
E1011 10:38:45.804365 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:45.791881578 +0000 UTC m=+2.346341880,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:45.809641 master-0 kubenswrapper[4790]: E1011 10:38:45.809450 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d291d68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d291d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169153896 +0000 UTC m=+0.723614198,LastTimestamp:2025-10-11 10:38:45.791894568 +0000 UTC m=+2.346354870,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:46.125306 master-0 kubenswrapper[4790]: I1011 10:38:46.125211 4790 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:38:46.300079 master-0 kubenswrapper[4790]: I1011 10:38:46.299880 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"117b8efe269c98124cf5022ab3c340a5","Type":"ContainerStarted","Data":"adfad26afad6055735ede8e6a1ce844b2d8bb91640a5c927f437898a49285a03"} Oct 11 10:38:47.131021 master-0 kubenswrapper[4790]: I1011 10:38:47.130849 4790 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:38:47.158044 master-0 kubenswrapper[4790]: E1011 10:38:47.157930 4790 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Oct 11 10:38:47.401034 master-0 kubenswrapper[4790]: I1011 10:38:47.400619 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:38:47.402439 master-0 kubenswrapper[4790]: I1011 10:38:47.402392 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Oct 11 10:38:47.402589 master-0 kubenswrapper[4790]: I1011 10:38:47.402459 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Oct 11 10:38:47.402589 master-0 kubenswrapper[4790]: I1011 10:38:47.402482 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Oct 11 10:38:47.402589 master-0 
kubenswrapper[4790]: I1011 10:38:47.402538 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Oct 11 10:38:47.412051 master-0 kubenswrapper[4790]: E1011 10:38:47.411903 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28a0cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28a0cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169121995 +0000 UTC m=+0.723582297,LastTimestamp:2025-10-11 10:38:47.402433609 +0000 UTC m=+3.956893931,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:47.412405 master-0 kubenswrapper[4790]: E1011 10:38:47.412342 4790 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Oct 11 10:38:47.421535 master-0 kubenswrapper[4790]: E1011 10:38:47.421455 4790 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.186d69966d28eb17\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.186d69966d28eb17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-10-11 10:38:44.169141015 +0000 UTC m=+0.723601317,LastTimestamp:2025-10-11 10:38:47.4024716 +0000 UTC m=+3.956931922,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Oct 11 10:38:47.611938 master-0 kubenswrapper[4790]: W1011 10:38:47.611822 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:38:47.611938 master-0 kubenswrapper[4790]: E1011 10:38:47.611909 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:38:47.765596 master-0 kubenswrapper[4790]: I1011 10:38:47.765350 4790 csr.go:261] certificate signing request csr-bl7b7 is approved, waiting to be issued Oct 11 10:38:47.809323 master-0 kubenswrapper[4790]: W1011 10:38:47.809236 4790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:38:47.809323 master-0 kubenswrapper[4790]: E1011 10:38:47.809311 4790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:38:47.834906 master-0 kubenswrapper[4790]: I1011 10:38:47.834793 4790 csr.go:257] 
certificate signing request csr-bl7b7 is issued Oct 11 10:38:47.978499 master-0 kubenswrapper[4790]: I1011 10:38:47.978321 4790 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 11 10:38:48.147455 master-0 kubenswrapper[4790]: I1011 10:38:48.147328 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Oct 11 10:38:48.167177 master-0 kubenswrapper[4790]: I1011 10:38:48.167070 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Oct 11 10:38:48.363562 master-0 kubenswrapper[4790]: I1011 10:38:48.363480 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Oct 11 10:38:48.366055 master-2 kubenswrapper[4776]: I1011 10:38:48.366012 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:48.366786 master-2 kubenswrapper[4776]: I1011 10:38:48.366759 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" containerID="cri-o://65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01" gracePeriod=30 Oct 11 10:38:48.366908 master-2 kubenswrapper[4776]: I1011 10:38:48.366849 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" containerID="cri-o://b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c" gracePeriod=30 Oct 11 10:38:48.367008 master-2 kubenswrapper[4776]: I1011 10:38:48.366776 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" 
containerName="kube-scheduler-recovery-controller" containerID="cri-o://3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107" gracePeriod=30 Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367465 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367733 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367751 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367765 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367773 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367793 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="wait-for-host-port" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367801 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="wait-for-host-port" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: E1011 10:38:48.367814 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367822 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer"
Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367948 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller"
Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367963 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler"
Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367977 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer"
Oct 11 10:38:48.386506 master-0 kubenswrapper[4790]: I1011 10:38:48.386446 4790 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 11 10:38:48.429863 master-0 kubenswrapper[4790]: I1011 10:38:48.429606 4790 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 11 10:38:48.497048 master-2 kubenswrapper[4776]: I1011 10:38:48.496945 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.497294 master-2 kubenswrapper[4776]: I1011 10:38:48.497081 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.544538 master-2 kubenswrapper[4776]: I1011 10:38:48.544467 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log"
Oct 11 10:38:48.545496 master-2 kubenswrapper[4776]: I1011 10:38:48.545447 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107" exitCode=0
Oct 11 10:38:48.545496 master-2 kubenswrapper[4776]: I1011 10:38:48.545487 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c" exitCode=2
Oct 11 10:38:48.598453 master-2 kubenswrapper[4776]: I1011 10:38:48.598385 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.598641 master-2 kubenswrapper[4776]: I1011 10:38:48.598530 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.598641 master-2 kubenswrapper[4776]: I1011 10:38:48.598540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.598754 master-2 kubenswrapper[4776]: I1011 10:38:48.598636 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.692285 master-0 kubenswrapper[4790]: I1011 10:38:48.692075 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:48.692285 master-0 kubenswrapper[4790]: E1011 10:38:48.692139 4790 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Oct 11 10:38:48.720533 master-0 kubenswrapper[4790]: I1011 10:38:48.720445 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:48.741284 master-0 kubenswrapper[4790]: I1011 10:38:48.741193 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:48.804095 master-2 kubenswrapper[4776]: I1011 10:38:48.804049 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body=
Oct 11 10:38:48.804195 master-2 kubenswrapper[4776]: I1011 10:38:48.804111 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused"
Oct 11 10:38:48.804757 master-0 kubenswrapper[4790]: I1011 10:38:48.804616 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:48.837642 master-0 kubenswrapper[4790]: I1011 10:38:48.837535 4790 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 07:14:08.069129423 +0000 UTC
Oct 11 10:38:48.837642 master-0 kubenswrapper[4790]: I1011 10:38:48.837604 4790 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h35m19.231532581s for next certificate rotation
Oct 11 10:38:48.884878 master-2 kubenswrapper[4776]: I1011 10:38:48.884782 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log"
Oct 11 10:38:48.886513 master-2 kubenswrapper[4776]: I1011 10:38:48.886462 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:48.891698 master-2 kubenswrapper[4776]: I1011 10:38:48.891635 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77"
Oct 11 10:38:49.003223 master-2 kubenswrapper[4776]: I1011 10:38:49.003063 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"f26cf13b1c8c4f1b57c0ac506ef256a4\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") "
Oct 11 10:38:49.003223 master-2 kubenswrapper[4776]: I1011 10:38:49.003152 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"f26cf13b1c8c4f1b57c0ac506ef256a4\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") "
Oct 11 10:38:49.003565 master-2 kubenswrapper[4776]: I1011 10:38:49.003261 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f26cf13b1c8c4f1b57c0ac506ef256a4" (UID: "f26cf13b1c8c4f1b57c0ac506ef256a4"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:38:49.003565 master-2 kubenswrapper[4776]: I1011 10:38:49.003528 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:49.003992 master-2 kubenswrapper[4776]: I1011 10:38:49.003931 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f26cf13b1c8c4f1b57c0ac506ef256a4" (UID: "f26cf13b1c8c4f1b57c0ac506ef256a4"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:38:49.066651 master-0 kubenswrapper[4790]: I1011 10:38:49.066582 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:49.066651 master-0 kubenswrapper[4790]: E1011 10:38:49.066624 4790 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Oct 11 10:38:49.094995 master-0 kubenswrapper[4790]: I1011 10:38:49.094925 4790 apiserver.go:52] "Watching apiserver"
Oct 11 10:38:49.097767 master-0 kubenswrapper[4790]: I1011 10:38:49.097576 4790 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 11 10:38:49.098050 master-0 kubenswrapper[4790]: I1011 10:38:49.097804 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Oct 11 10:38:49.104949 master-2 kubenswrapper[4776]: I1011 10:38:49.104863 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:49.116925 master-0 kubenswrapper[4790]: I1011 10:38:49.116839 4790 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Oct 11 10:38:49.173064 master-0 kubenswrapper[4790]: I1011 10:38:49.172999 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:49.193212 master-2 kubenswrapper[4776]: I1011 10:38:49.193166 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:38:49.193487 master-2 kubenswrapper[4776]: I1011 10:38:49.193462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:38:49.194382 master-0 kubenswrapper[4790]: I1011 10:38:49.194293 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:49.257359 master-0 kubenswrapper[4790]: I1011 10:38:49.257201 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:49.307842 master-0 kubenswrapper[4790]: I1011 10:38:49.307776 4790 generic.go:334] "Generic (PLEG): container finished" podID="117b8efe269c98124cf5022ab3c340a5" containerID="2773ca4b00e741b3dd67df737e99e7af029b77cdda7febc0ccb0b23ed8efcf99" exitCode=0
Oct 11 10:38:49.307842 master-0 kubenswrapper[4790]: I1011 10:38:49.307826 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"117b8efe269c98124cf5022ab3c340a5","Type":"ContainerDied","Data":"2773ca4b00e741b3dd67df737e99e7af029b77cdda7febc0ccb0b23ed8efcf99"}
Oct 11 10:38:49.308117 master-0 kubenswrapper[4790]: I1011 10:38:49.307999 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:49.309332 master-0 kubenswrapper[4790]: I1011 10:38:49.309280 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:49.309467 master-0 kubenswrapper[4790]: I1011 10:38:49.309351 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:49.309467 master-0 kubenswrapper[4790]: I1011 10:38:49.309377 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:49.537801 master-0 kubenswrapper[4790]: I1011 10:38:49.537748 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:49.537801 master-0 kubenswrapper[4790]: E1011 10:38:49.537802 4790 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Oct 11 10:38:49.556889 master-2 kubenswrapper[4776]: I1011 10:38:49.556742 4776 generic.go:334] "Generic (PLEG): container finished" podID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerID="9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc" exitCode=0
Oct 11 10:38:49.556889 master-2 kubenswrapper[4776]: I1011 10:38:49.556865 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerDied","Data":"9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc"}
Oct 11 10:38:49.560295 master-2 kubenswrapper[4776]: I1011 10:38:49.560236 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log"
Oct 11 10:38:49.561938 master-2 kubenswrapper[4776]: I1011 10:38:49.561860 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01" exitCode=0
Oct 11 10:38:49.562091 master-2 kubenswrapper[4776]: I1011 10:38:49.561946 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed01188c7283cfff70d6c5cb4504465f9e9f1843a1b8c89bb6c36df04a63ac6"
Oct 11 10:38:49.562091 master-2 kubenswrapper[4776]: I1011 10:38:49.562066 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2"
Oct 11 10:38:49.595403 master-2 kubenswrapper[4776]: I1011 10:38:49.594383 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77"
Oct 11 10:38:49.603504 master-2 kubenswrapper[4776]: I1011 10:38:49.603418 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77"
Oct 11 10:38:50.023155 master-2 kubenswrapper[4776]: I1011 10:38:50.023080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:50.023155 master-2 kubenswrapper[4776]: I1011 10:38:50.023151 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:38:50.025458 master-2 kubenswrapper[4776]: I1011 10:38:50.025409 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:38:50.025541 master-2 kubenswrapper[4776]: I1011 10:38:50.025476 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:38:50.066715 master-2 kubenswrapper[4776]: I1011 10:38:50.066643 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" path="/var/lib/kubelet/pods/f26cf13b1c8c4f1b57c0ac506ef256a4/volumes"
Oct 11 10:38:50.098186 master-0 kubenswrapper[4790]: I1011 10:38:50.098121 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:50.117732 master-0 kubenswrapper[4790]: I1011 10:38:50.117667 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:50.180030 master-0 kubenswrapper[4790]: I1011 10:38:50.179968 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: I1011 10:38:50.264761 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: I1011 10:38:50.264829 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:38:50.312755 master-0 kubenswrapper[4790]: I1011 10:38:50.312613 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_117b8efe269c98124cf5022ab3c340a5/kube-rbac-proxy-crio/0.log"
Oct 11 10:38:50.313641 master-0 kubenswrapper[4790]: I1011 10:38:50.313378 4790 generic.go:334] "Generic (PLEG): container finished" podID="117b8efe269c98124cf5022ab3c340a5" containerID="7c490e79b5a38a970413fe2dad2770b3c117205fbbd292831a37ed9dbecf228f" exitCode=1
Oct 11 10:38:50.313641 master-0 kubenswrapper[4790]: I1011 10:38:50.313476 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:50.313641 master-0 kubenswrapper[4790]: I1011 10:38:50.313471 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"117b8efe269c98124cf5022ab3c340a5","Type":"ContainerDied","Data":"7c490e79b5a38a970413fe2dad2770b3c117205fbbd292831a37ed9dbecf228f"}
Oct 11 10:38:50.315034 master-0 kubenswrapper[4790]: I1011 10:38:50.314889 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:50.315034 master-0 kubenswrapper[4790]: I1011 10:38:50.314961 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:50.315034 master-0 kubenswrapper[4790]: I1011 10:38:50.314975 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:50.329833 master-0 kubenswrapper[4790]: I1011 10:38:50.329656 4790 scope.go:117] "RemoveContainer" containerID="7c490e79b5a38a970413fe2dad2770b3c117205fbbd292831a37ed9dbecf228f"
Oct 11 10:38:50.369858 master-0 kubenswrapper[4790]: E1011 10:38:50.369784 4790 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Oct 11 10:38:50.461269 master-0 kubenswrapper[4790]: I1011 10:38:50.461195 4790 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Oct 11 10:38:50.461269 master-0 kubenswrapper[4790]: E1011 10:38:50.461237 4790 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Oct 11 10:38:50.613306 master-0 kubenswrapper[4790]: I1011 10:38:50.613079 4790 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 10:38:50.614897 master-0 kubenswrapper[4790]: I1011 10:38:50.614841 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Oct 11 10:38:50.614984 master-0 kubenswrapper[4790]: I1011 10:38:50.614910 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Oct 11 10:38:50.614984 master-0 kubenswrapper[4790]: I1011 10:38:50.614931 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Oct 11 10:38:50.615093 master-0 kubenswrapper[4790]: I1011 10:38:50.615000 4790 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Oct 11 10:38:50.705684 master-0 kubenswrapper[4790]: I1011 10:38:50.705581 4790 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Oct 11 10:38:50.939725 master-2 kubenswrapper[4776]: I1011 10:38:50.939666 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2"
Oct 11 10:38:51.033322 master-0 kubenswrapper[4790]: I1011 10:38:51.033113 4790 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 11 10:38:51.036017 master-2 kubenswrapper[4776]: I1011 10:38:51.035945 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:38:51.036925 master-2 kubenswrapper[4776]: I1011 10:38:51.036881 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") "
Oct 11 10:38:51.036998 master-2 kubenswrapper[4776]: I1011 10:38:51.036973 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") "
Oct 11 10:38:51.037047 master-2 kubenswrapper[4776]: I1011 10:38:51.037014 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") "
Oct 11 10:38:51.037235 master-2 kubenswrapper[4776]: I1011 10:38:51.037209 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock" (OuterVolumeSpecName: "var-lock") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:38:51.039383 master-2 kubenswrapper[4776]: I1011 10:38:51.037541 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:51.039383 master-2 kubenswrapper[4776]: I1011 10:38:51.037587 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:51.040127 master-2 kubenswrapper[4776]: I1011 10:38:51.039874 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:38:51.124014 master-0 kubenswrapper[4790]: I1011 10:38:51.123930 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Oct 11 10:38:51.138751 master-2 kubenswrapper[4776]: I1011 10:38:51.138694 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:38:51.155195 master-0 kubenswrapper[4790]: I1011 10:38:51.155150 4790 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Oct 11 10:38:51.177517 master-0 kubenswrapper[4790]: I1011 10:38:51.177444 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43270: no serving certificate available for the kubelet"
Oct 11 10:38:51.269817 master-0 kubenswrapper[4790]: I1011 10:38:51.269780 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43286: no serving certificate available for the kubelet"
Oct 11 10:38:51.318627 master-0 kubenswrapper[4790]: I1011 10:38:51.318561 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_117b8efe269c98124cf5022ab3c340a5/kube-rbac-proxy-crio/1.log"
Oct 11 10:38:51.319207 master-0 kubenswrapper[4790]: I1011 10:38:51.319169 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_117b8efe269c98124cf5022ab3c340a5/kube-rbac-proxy-crio/0.log"
Oct 11 10:38:51.319728 master-0 kubenswrapper[4790]: I1011 10:38:51.319653 4790 generic.go:334] "Generic (PLEG): container finished" podID="117b8efe269c98124cf5022ab3c340a5" containerID="0dce5d6b10f720448939fdc7754f0953d8f5becf80952889b528531812732245" exitCode=1
Oct 11 10:38:51.319792 master-0 kubenswrapper[4790]: I1011 10:38:51.319756 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"117b8efe269c98124cf5022ab3c340a5","Type":"ContainerDied","Data":"0dce5d6b10f720448939fdc7754f0953d8f5becf80952889b528531812732245"}
Oct 11 10:38:51.319890 master-0 kubenswrapper[4790]: I1011 10:38:51.319856 4790 scope.go:117] "RemoveContainer" containerID="7c490e79b5a38a970413fe2dad2770b3c117205fbbd292831a37ed9dbecf228f"
Oct 11 10:38:51.349524 master-0 kubenswrapper[4790]: I1011 10:38:51.349441 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Oct 11 10:38:51.349809 master-0 kubenswrapper[4790]: I1011 10:38:51.349746 4790 scope.go:117] "RemoveContainer" containerID="0dce5d6b10f720448939fdc7754f0953d8f5becf80952889b528531812732245"
Oct 11 10:38:51.350044 master-0 kubenswrapper[4790]: E1011 10:38:51.349992 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(117b8efe269c98124cf5022ab3c340a5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="117b8efe269c98124cf5022ab3c340a5"
Oct 11 10:38:51.371527 master-0 kubenswrapper[4790]: I1011 10:38:51.371411 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43288: no serving certificate available for the kubelet"
Oct 11 10:38:51.460458 master-0 kubenswrapper[4790]: I1011 10:38:51.460356 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43290: no serving certificate available for the kubelet"
Oct 11 10:38:51.529090 master-0 kubenswrapper[4790]: I1011 10:38:51.529023 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43296: no serving certificate available for the kubelet"
Oct 11 10:38:51.575910 master-2 kubenswrapper[4776]: I1011 10:38:51.575859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerDied","Data":"778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610"}
Oct 11 10:38:51.576215 master-2 kubenswrapper[4776]: I1011 10:38:51.576195 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610"
Oct 11 10:38:51.576304 master-2 kubenswrapper[4776]: I1011 10:38:51.575910 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2"
Oct 11 10:38:51.641655 master-0 kubenswrapper[4790]: I1011 10:38:51.641390 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43304: no serving certificate available for the kubelet"
Oct 11 10:38:51.642891 master-2 kubenswrapper[4776]: I1011 10:38:51.642846 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"]
Oct 11 10:38:51.643302 master-2 kubenswrapper[4776]: E1011 10:38:51.643288 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer"
Oct 11 10:38:51.643370 master-2 kubenswrapper[4776]: I1011 10:38:51.643361 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer"
Oct 11 10:38:51.643527 master-2 kubenswrapper[4776]: I1011 10:38:51.643515 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer"
Oct 11 10:38:51.644063 master-2 kubenswrapper[4776]: I1011 10:38:51.644047 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.646935 master-2 kubenswrapper[4776]: I1011 10:38:51.646917 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq"
Oct 11 10:38:51.657720 master-2 kubenswrapper[4776]: I1011 10:38:51.657658 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"]
Oct 11 10:38:51.746403 master-2 kubenswrapper[4776]: I1011 10:38:51.746349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.746609 master-2 kubenswrapper[4776]: I1011 10:38:51.746439 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.746609 master-2 kubenswrapper[4776]: I1011 10:38:51.746460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.837006 master-0 kubenswrapper[4790]: I1011 10:38:51.836903 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43318: no serving certificate available for the kubelet"
Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847238 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.847488 master-2 kubenswrapper[4776]: I1011 10:38:51.847299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.847488 master-2 kubenswrapper[4776]: I1011 10:38:51.847306 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.864595 master-2 kubenswrapper[4776]: I1011 10:38:51.864533 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:51.974597 master-2 kubenswrapper[4776]: I1011 10:38:51.974518 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2"
Oct 11 10:38:52.153661 master-0 kubenswrapper[4790]: I1011 10:38:52.153548 4790 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 11 10:38:52.184540 master-0 kubenswrapper[4790]: I1011 10:38:52.184451 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43322: no serving certificate available for the kubelet"
Oct 11 10:38:52.326032 master-0 kubenswrapper[4790]: I1011 10:38:52.325946 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_117b8efe269c98124cf5022ab3c340a5/kube-rbac-proxy-crio/1.log"
Oct 11 10:38:52.327639 master-0 kubenswrapper[4790]: I1011 10:38:52.327572 4790 scope.go:117] "RemoveContainer" containerID="0dce5d6b10f720448939fdc7754f0953d8f5becf80952889b528531812732245"
Oct 11 10:38:52.327983 master-0 kubenswrapper[4790]: E1011 10:38:52.327926 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(117b8efe269c98124cf5022ab3c340a5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="117b8efe269c98124cf5022ab3c340a5"
Oct 11 10:38:52.365563 master-2 kubenswrapper[4776]: I1011 10:38:52.365492 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"]
Oct 11 10:38:52.368864 master-2 kubenswrapper[4776]: W1011 10:38:52.368817 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5c59b4d8_fa7a_4c50_b130_8b4857359efa.slice/crio-966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b WatchSource:0}: Error finding container 966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b: Status 404 returned error can't find the container with id 966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b
Oct 11 10:38:52.581221 master-1 kubenswrapper[4771]: I1011 10:38:52.581105 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:38:52.581221 master-1 kubenswrapper[4771]: I1011 10:38:52.581213 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:38:52.582151 master-2 kubenswrapper[4776]: I1011 10:38:52.582088 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerStarted","Data":"966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b"}
Oct 11 10:38:52.863803 master-0 kubenswrapper[4790]: I1011 10:38:52.863667 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43330: no serving certificate available for the kubelet"
Oct 11 10:38:53.593881 master-2 kubenswrapper[4776]: I1011 10:38:53.593789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerStarted","Data":"b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914"}
Oct 11 10:38:53.622233 master-2 kubenswrapper[4776]: I1011 10:38:53.622098 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-2" podStartSLOduration=2.622071505 podStartE2EDuration="2.622071505s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:53.620601644 +0000 UTC m=+768.405028393" watchObservedRunningTime="2025-10-11 10:38:53.622071505 +0000 UTC m=+768.406498224"
Oct 11 10:38:53.804346 master-2 kubenswrapper[4776]: I1011 10:38:53.804258 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body=
Oct 11 10:38:53.804612 master-2 kubenswrapper[4776]: I1011 10:38:53.804348 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused"
Oct 11 10:38:54.166295 master-0 kubenswrapper[4790]: I1011 10:38:54.166187 4790 ???:1] "http: TLS handshake error from 192.168.34.11:44580: no serving certificate available for the kubelet"
Oct 11 10:38:54.182147 master-0 kubenswrapper[4790]: I1011 10:38:54.182079 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43336: no serving certificate available for the kubelet"
Oct 11 10:38:54.193574 master-2
kubenswrapper[4776]: I1011 10:38:54.193493 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:54.193903 master-2 kubenswrapper[4776]: I1011 10:38:54.193579 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:54.884729 master-1 kubenswrapper[4771]: I1011 10:38:54.884645 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-1_66232733-dcfb-4320-a372-ce05d7d777d9/installer/0.log" Oct 11 10:38:54.885549 master-1 kubenswrapper[4771]: I1011 10:38:54.884731 4771 generic.go:334] "Generic (PLEG): container finished" podID="66232733-dcfb-4320-a372-ce05d7d777d9" containerID="1a8de711412c9754f899398f555a2bb9a02c8065248c232ba0c054fb5b00ec21" exitCode=1 Oct 11 10:38:54.885549 master-1 kubenswrapper[4771]: I1011 10:38:54.884775 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"66232733-dcfb-4320-a372-ce05d7d777d9","Type":"ContainerDied","Data":"1a8de711412c9754f899398f555a2bb9a02c8065248c232ba0c054fb5b00ec21"} Oct 11 10:38:55.264117 master-1 kubenswrapper[4771]: I1011 10:38:55.264067 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-1_66232733-dcfb-4320-a372-ce05d7d777d9/installer/0.log" Oct 11 10:38:55.264490 master-1 kubenswrapper[4771]: I1011 10:38:55.264148 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:55.377803 master-1 kubenswrapper[4771]: I1011 10:38:55.377626 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock\") pod \"66232733-dcfb-4320-a372-ce05d7d777d9\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " Oct 11 10:38:55.378068 master-1 kubenswrapper[4771]: I1011 10:38:55.377789 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir\") pod \"66232733-dcfb-4320-a372-ce05d7d777d9\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " Oct 11 10:38:55.378463 master-1 kubenswrapper[4771]: I1011 10:38:55.377894 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access\") pod \"66232733-dcfb-4320-a372-ce05d7d777d9\" (UID: \"66232733-dcfb-4320-a372-ce05d7d777d9\") " Oct 11 10:38:55.378696 master-1 kubenswrapper[4771]: I1011 10:38:55.377727 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock" (OuterVolumeSpecName: "var-lock") pod "66232733-dcfb-4320-a372-ce05d7d777d9" (UID: "66232733-dcfb-4320-a372-ce05d7d777d9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:55.378848 master-1 kubenswrapper[4771]: I1011 10:38:55.377818 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "66232733-dcfb-4320-a372-ce05d7d777d9" (UID: "66232733-dcfb-4320-a372-ce05d7d777d9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:55.378969 master-1 kubenswrapper[4771]: I1011 10:38:55.378918 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:55.378969 master-1 kubenswrapper[4771]: I1011 10:38:55.378956 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66232733-dcfb-4320-a372-ce05d7d777d9-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:55.381448 master-1 kubenswrapper[4771]: I1011 10:38:55.381403 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "66232733-dcfb-4320-a372-ce05d7d777d9" (UID: "66232733-dcfb-4320-a372-ce05d7d777d9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:55.479832 master-1 kubenswrapper[4771]: I1011 10:38:55.479748 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66232733-dcfb-4320-a372-ce05d7d777d9-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:38:55.901024 master-1 kubenswrapper[4771]: I1011 10:38:55.900942 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-1_66232733-dcfb-4320-a372-ce05d7d777d9/installer/0.log" Oct 11 10:38:55.901695 master-1 kubenswrapper[4771]: I1011 10:38:55.901049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"66232733-dcfb-4320-a372-ce05d7d777d9","Type":"ContainerDied","Data":"210c1e83cdd57074a16669f3f9ab89020bba5fd50626a163674b365ff40935aa"} Oct 11 10:38:55.901695 master-1 kubenswrapper[4771]: I1011 10:38:55.901113 4771 scope.go:117] "RemoveContainer" containerID="1a8de711412c9754f899398f555a2bb9a02c8065248c232ba0c054fb5b00ec21" Oct 11 10:38:55.901695 master-1 kubenswrapper[4771]: I1011 10:38:55.901329 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 11 10:38:55.952080 master-1 kubenswrapper[4771]: I1011 10:38:55.951993 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 11 10:38:55.958227 master-1 kubenswrapper[4771]: I1011 10:38:55.958130 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 11 10:38:56.447294 master-1 kubenswrapper[4771]: I1011 10:38:56.447185 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66232733-dcfb-4320-a372-ce05d7d777d9" path="/var/lib/kubelet/pods/66232733-dcfb-4320-a372-ce05d7d777d9/volumes" Oct 11 10:38:56.468154 master-0 kubenswrapper[4790]: I1011 10:38:56.468048 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-5-master-0"] Oct 11 10:38:56.469015 master-0 kubenswrapper[4790]: I1011 10:38:56.468384 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.469015 master-0 kubenswrapper[4790]: E1011 10:38:56.468518 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-5-master-0" podUID="795a4c8d-2d06-412c-a788-7e8585d432f7" Oct 11 10:38:56.597612 master-0 kubenswrapper[4790]: I1011 10:38:56.597512 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.597612 master-0 kubenswrapper[4790]: I1011 10:38:56.597593 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.597612 master-0 kubenswrapper[4790]: I1011 10:38:56.597613 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.698004 master-0 kubenswrapper[4790]: I1011 10:38:56.697907 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.698004 master-0 kubenswrapper[4790]: I1011 10:38:56.697989 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock\") pod \"installer-5-master-0\" (UID: 
\"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.698004 master-0 kubenswrapper[4790]: I1011 10:38:56.698011 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.698325 master-0 kubenswrapper[4790]: I1011 10:38:56.698163 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.698325 master-0 kubenswrapper[4790]: I1011 10:38:56.698167 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:56.729323 master-0 kubenswrapper[4790]: E1011 10:38:56.729213 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:56.729323 master-0 kubenswrapper[4790]: E1011 10:38:56.729263 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-5-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:56.729415 master-0 kubenswrapper[4790]: E1011 10:38:56.729348 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access podName:795a4c8d-2d06-412c-a788-7e8585d432f7 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:38:57.229318496 +0000 UTC m=+13.783778788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access") pod "installer-5-master-0" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:56.783013 master-0 kubenswrapper[4790]: I1011 10:38:56.782952 4790 ???:1] "http: TLS handshake error from 192.168.34.12:43346: no serving certificate available for the kubelet" Oct 11 10:38:57.303540 master-0 kubenswrapper[4790]: I1011 10:38:57.303460 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:57.303839 master-0 kubenswrapper[4790]: E1011 10:38:57.303683 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:57.303839 master-0 kubenswrapper[4790]: E1011 10:38:57.303724 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-5-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:57.303839 master-0 kubenswrapper[4790]: E1011 10:38:57.303787 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access podName:795a4c8d-2d06-412c-a788-7e8585d432f7 nodeName:}" failed. No retries permitted until 2025-10-11 10:38:58.303767714 +0000 UTC m=+14.858228006 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access") pod "installer-5-master-0" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:58.292135 master-0 kubenswrapper[4790]: I1011 10:38:58.292075 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:58.292769 master-0 kubenswrapper[4790]: E1011 10:38:58.292208 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-5-master-0" podUID="795a4c8d-2d06-412c-a788-7e8585d432f7" Oct 11 10:38:58.310905 master-0 kubenswrapper[4790]: I1011 10:38:58.310835 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:38:58.311240 master-0 kubenswrapper[4790]: E1011 10:38:58.310985 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:58.311240 master-0 kubenswrapper[4790]: E1011 10:38:58.311007 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-5-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:58.311240 master-0 kubenswrapper[4790]: E1011 10:38:58.311051 4790 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access podName:795a4c8d-2d06-412c-a788-7e8585d432f7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:00.311032893 +0000 UTC m=+16.865493195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access") pod "installer-5-master-0" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:38:58.804040 master-2 kubenswrapper[4776]: I1011 10:38:58.803903 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:58.805143 master-2 kubenswrapper[4776]: I1011 10:38:58.804028 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" Oct 11 10:38:58.805143 master-2 kubenswrapper[4776]: I1011 10:38:58.804240 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:38:58.805329 master-2 kubenswrapper[4776]: I1011 10:38:58.805265 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:58.805456 master-2 kubenswrapper[4776]: I1011 10:38:58.805362 4776 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" Oct 11 10:38:59.063238 master-2 kubenswrapper[4776]: I1011 10:38:59.063115 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.095175 master-2 kubenswrapper[4776]: I1011 10:38:59.095131 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="ef6d4604-7cf7-4d1f-a697-4dc720b4a516" Oct 11 10:38:59.095175 master-2 kubenswrapper[4776]: I1011 10:38:59.095171 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="ef6d4604-7cf7-4d1f-a697-4dc720b4a516" Oct 11 10:38:59.127849 master-2 kubenswrapper[4776]: I1011 10:38:59.127802 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.137292 master-2 kubenswrapper[4776]: I1011 10:38:59.137246 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.140299 master-2 kubenswrapper[4776]: I1011 10:38:59.140244 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.152573 master-2 kubenswrapper[4776]: I1011 10:38:59.152537 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.156662 master-2 kubenswrapper[4776]: I1011 10:38:59.156627 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.192786 master-2 kubenswrapper[4776]: I1011 10:38:59.192709 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:59.192786 master-2 kubenswrapper[4776]: I1011 10:38:59.192761 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645863 4776 generic.go:334] "Generic (PLEG): container finished" podID="09a1584aa5985a5ff9600248bcf73e77" containerID="2d997c2d4c42e15e75dcfc064346afe164a2ba45f92c9b53915dda78c32c141c" exitCode=0 Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerDied","Data":"2d997c2d4c42e15e75dcfc064346afe164a2ba45f92c9b53915dda78c32c141c"} Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645935 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" 
event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"99f0da196ee9f8a5939f45e2bc1ee4e75e90de563aa9ac9e5f2697426085263c"} Oct 11 10:39:00.025504 master-2 kubenswrapper[4776]: I1011 10:39:00.023452 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:00.025504 master-2 kubenswrapper[4776]: I1011 10:39:00.023509 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:00.292680 master-0 kubenswrapper[4790]: I1011 10:39:00.292549 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:00.293747 master-0 kubenswrapper[4790]: E1011 10:39:00.292882 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-5-master-0" podUID="795a4c8d-2d06-412c-a788-7e8585d432f7" Oct 11 10:39:00.324948 master-0 kubenswrapper[4790]: I1011 10:39:00.324783 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:00.325297 master-0 kubenswrapper[4790]: E1011 10:39:00.325114 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:00.325297 master-0 kubenswrapper[4790]: E1011 10:39:00.325160 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-5-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:00.325297 master-0 kubenswrapper[4790]: E1011 10:39:00.325260 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access podName:795a4c8d-2d06-412c-a788-7e8585d432f7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:04.325229503 +0000 UTC m=+20.879689835 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access") pod "installer-5-master-0" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:00.542194 master-0 kubenswrapper[4790]: I1011 10:39:00.542070 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5kghv"] Oct 11 10:39:00.543990 master-0 kubenswrapper[4790]: I1011 10:39:00.542546 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-85bvx"] Oct 11 10:39:00.543990 master-0 kubenswrapper[4790]: I1011 10:39:00.542687 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8lkdg"] Oct 11 10:39:00.543990 master-0 kubenswrapper[4790]: I1011 10:39:00.542813 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.543990 master-0 kubenswrapper[4790]: I1011 10:39:00.543114 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.543990 master-0 kubenswrapper[4790]: I1011 10:39:00.543841 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.547202 master-0 kubenswrapper[4790]: I1011 10:39:00.547166 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Oct 11 10:39:00.547304 master-0 kubenswrapper[4790]: I1011 10:39:00.547245 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-c7nlq"
Oct 11 10:39:00.547375 master-0 kubenswrapper[4790]: I1011 10:39:00.547277 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Oct 11 10:39:00.547514 master-0 kubenswrapper[4790]: I1011 10:39:00.547465 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Oct 11 10:39:00.547631 master-0 kubenswrapper[4790]: I1011 10:39:00.547601 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Oct 11 10:39:00.547874 master-0 kubenswrapper[4790]: I1011 10:39:00.547837 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Oct 11 10:39:00.547950 master-0 kubenswrapper[4790]: I1011 10:39:00.547843 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Oct 11 10:39:00.548213 master-0 kubenswrapper[4790]: I1011 10:39:00.548177 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Oct 11 10:39:00.548386 master-0 kubenswrapper[4790]: I1011 10:39:00.548351 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Oct 11 10:39:00.548386 master-0 kubenswrapper[4790]: I1011 10:39:00.548362 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-r499q"]
Oct 11 10:39:00.548591 master-0 kubenswrapper[4790]: I1011 10:39:00.548551 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-hlrjh"
Oct 11 10:39:00.548756 master-0 kubenswrapper[4790]: I1011 10:39:00.548644 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.551401 master-0 kubenswrapper[4790]: I1011 10:39:00.551360 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ft6fv"]
Oct 11 10:39:00.551859 master-0 kubenswrapper[4790]: I1011 10:39:00.551788 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.552308 master-0 kubenswrapper[4790]: I1011 10:39:00.552269 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-5ddj4"
Oct 11 10:39:00.552366 master-0 kubenswrapper[4790]: I1011 10:39:00.552352 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Oct 11 10:39:00.552731 master-0 kubenswrapper[4790]: I1011 10:39:00.552664 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Oct 11 10:39:00.552821 master-0 kubenswrapper[4790]: I1011 10:39:00.552788 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-kh4ld"]
Oct 11 10:39:00.553203 master-0 kubenswrapper[4790]: I1011 10:39:00.553178 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kh4ld"
Oct 11 10:39:00.553768 master-0 kubenswrapper[4790]: I1011 10:39:00.553579 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Oct 11 10:39:00.554113 master-0 kubenswrapper[4790]: I1011 10:39:00.554026 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Oct 11 10:39:00.554588 master-0 kubenswrapper[4790]: I1011 10:39:00.554565 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Oct 11 10:39:00.555206 master-0 kubenswrapper[4790]: I1011 10:39:00.555051 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Oct 11 10:39:00.555546 master-0 kubenswrapper[4790]: I1011 10:39:00.555304 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-7mxth"
Oct 11 10:39:00.556404 master-2 kubenswrapper[4776]: I1011 10:39:00.556283 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"]
Oct 11 10:39:00.557323 master-2 kubenswrapper[4776]: I1011 10:39:00.557289 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"
Oct 11 10:39:00.559690 master-0 kubenswrapper[4790]: I1011 10:39:00.559650 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Oct 11 10:39:00.559860 master-0 kubenswrapper[4790]: I1011 10:39:00.559692 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Oct 11 10:39:00.559860 master-0 kubenswrapper[4790]: I1011 10:39:00.559752 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 11 10:39:00.559860 master-0 kubenswrapper[4790]: I1011 10:39:00.559853 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 11 10:39:00.560083 master-0 kubenswrapper[4790]: I1011 10:39:00.559853 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-dockercfg-7xwqj"
Oct 11 10:39:00.560083 master-0 kubenswrapper[4790]: I1011 10:39:00.559863 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"tuned-dockercfg-4b7xp"
Oct 11 10:39:00.560083 master-0 kubenswrapper[4790]: I1011 10:39:00.559958 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Oct 11 10:39:00.560449 master-2 kubenswrapper[4776]: I1011 10:39:00.560408 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr"
Oct 11 10:39:00.560449 master-2 kubenswrapper[4776]: I1011 10:39:00.560428 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 11 10:39:00.560613 master-2 kubenswrapper[4776]: I1011 10:39:00.560552 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 11 10:39:00.560720 master-2 kubenswrapper[4776]: I1011 10:39:00.560642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 11 10:39:00.560790 master-2 kubenswrapper[4776]: I1011 10:39:00.560735 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 11 10:39:00.560931 master-2 kubenswrapper[4776]: I1011 10:39:00.560887 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 11 10:39:00.563574 master-2 kubenswrapper[4776]: I1011 10:39:00.563541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 11 10:39:00.563654 master-2 kubenswrapper[4776]: I1011 10:39:00.563573 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 11 10:39:00.563976 master-2 kubenswrapper[4776]: I1011 10:39:00.563952 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 11 10:39:00.570419 master-0 kubenswrapper[4790]: I1011 10:39:00.570374 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96nq6"]
Oct 11 10:39:00.571097 master-0 kubenswrapper[4790]: I1011 10:39:00.571066 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.573455 master-0 kubenswrapper[4790]: I1011 10:39:00.573423 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Oct 11 10:39:00.573455 master-0 kubenswrapper[4790]: I1011 10:39:00.573434 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-dkksq"
Oct 11 10:39:00.574596 master-0 kubenswrapper[4790]: I1011 10:39:00.574540 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Oct 11 10:39:00.574596 master-0 kubenswrapper[4790]: I1011 10:39:00.574576 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Oct 11 10:39:00.574596 master-0 kubenswrapper[4790]: I1011 10:39:00.574591 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Oct 11 10:39:00.575735 master-0 kubenswrapper[4790]: I1011 10:39:00.575693 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Oct 11 10:39:00.575938 master-0 kubenswrapper[4790]: I1011 10:39:00.575899 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Oct 11 10:39:00.582798 master-2 kubenswrapper[4776]: I1011 10:39:00.582721 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"]
Oct 11 10:39:00.611685 master-0 kubenswrapper[4790]: I1011 10:39:00.611510 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g99cx"]
Oct 11 10:39:00.612237 master-0 kubenswrapper[4790]: I1011 10:39:00.612201 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-g99cx"
Oct 11 10:39:00.615250 master-1 kubenswrapper[4771]: I1011 10:39:00.615190 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kntdb"]
Oct 11 10:39:00.616166 master-0 kubenswrapper[4790]: I1011 10:39:00.616118 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Oct 11 10:39:00.616230 master-0 kubenswrapper[4790]: I1011 10:39:00.616171 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Oct 11 10:39:00.616230 master-0 kubenswrapper[4790]: I1011 10:39:00.616189 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cs6lc"
Oct 11 10:39:00.616297 master-1 kubenswrapper[4771]: E1011 10:39:00.615476 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66232733-dcfb-4320-a372-ce05d7d777d9" containerName="installer"
Oct 11 10:39:00.616297 master-1 kubenswrapper[4771]: I1011 10:39:00.615510 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="66232733-dcfb-4320-a372-ce05d7d777d9" containerName="installer"
Oct 11 10:39:00.616297 master-1 kubenswrapper[4771]: I1011 10:39:00.615652 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="66232733-dcfb-4320-a372-ce05d7d777d9" containerName="installer"
Oct 11 10:39:00.616297 master-1 kubenswrapper[4771]: I1011 10:39:00.616246 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kntdb"
Oct 11 10:39:00.616985 master-0 kubenswrapper[4790]: I1011 10:39:00.616947 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Oct 11 10:39:00.617612 master-2 kubenswrapper[4776]: I1011 10:39:00.617540 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jl6f8"]
Oct 11 10:39:00.618549 master-2 kubenswrapper[4776]: I1011 10:39:00.618515 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jl6f8"
Oct 11 10:39:00.619579 master-1 kubenswrapper[4771]: I1011 10:39:00.619524 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Oct 11 10:39:00.619660 master-1 kubenswrapper[4771]: I1011 10:39:00.619531 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cs6lc"
Oct 11 10:39:00.619948 master-1 kubenswrapper[4771]: I1011 10:39:00.619895 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Oct 11 10:39:00.621658 master-1 kubenswrapper[4771]: I1011 10:39:00.621585 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Oct 11 10:39:00.621787 master-2 kubenswrapper[4776]: I1011 10:39:00.621765 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cs6lc"
Oct 11 10:39:00.621971 master-2 kubenswrapper[4776]: I1011 10:39:00.621954 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Oct 11 10:39:00.626134 master-0 kubenswrapper[4790]: I1011 10:39:00.626062 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-modprobe-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.626196 master-0 kubenswrapper[4790]: I1011 10:39:00.626136 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cnibin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.626196 master-0 kubenswrapper[4790]: I1011 10:39:00.626172 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-k8s-cni-cncf-io\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.626196 master-0 kubenswrapper[4790]: I1011 10:39:00.626194 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-hostroot\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.626292 master-0 kubenswrapper[4790]: I1011 10:39:00.626239 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1076411d-ae28-46e4-97ca-9c78203e7aba-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg"
Oct 11 10:39:00.626292 master-0 kubenswrapper[4790]: I1011 10:39:00.626261 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-socket-dir-parent\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.626385 master-0 kubenswrapper[4790]: I1011 10:39:00.626334 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-kubernetes\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.626442 master-0 kubenswrapper[4790]: I1011 10:39:00.626386 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-conf\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.626442 master-0 kubenswrapper[4790]: I1011 10:39:00.626408 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cni-binary-copy\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.626442 master-0 kubenswrapper[4790]: I1011 10:39:00.626426 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-etc-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.626442 master-0 kubenswrapper[4790]: I1011 10:39:00.626443 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-config\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.626943 master-0 kubenswrapper[4790]: I1011 10:39:00.626464 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kd5\" (UniqueName: \"kubernetes.io/projected/bfe05233-94bf-4e16-8c7e-321435ba7f00-kube-api-access-d5kd5\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.626943 master-0 kubenswrapper[4790]: I1011 10:39:00.626483 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.627144 master-0 kubenswrapper[4790]: I1011 10:39:00.627045 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.627216 master-0 kubenswrapper[4790]: I1011 10:39:00.627188 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-ovnkube-identity-cm\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld"
Oct 11 10:39:00.627256 master-0 kubenswrapper[4790]: I1011 10:39:00.627239 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-daemon-config\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.627295 master-0 kubenswrapper[4790]: I1011 10:39:00.627281 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-kubelet\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627342 master-0 kubenswrapper[4790]: I1011 10:39:00.627318 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-log-socket\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627476 master-0 kubenswrapper[4790]: I1011 10:39:00.627359 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627476 master-0 kubenswrapper[4790]: I1011 10:39:00.627398 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-run\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.627476 master-0 kubenswrapper[4790]: I1011 10:39:00.627433 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-system-cni-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.627476 master-0 kubenswrapper[4790]: I1011 10:39:00.627464 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2f91f6-f87a-4b69-a47a-91ca827d8386-webhook-cert\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld"
Oct 11 10:39:00.627581 master-0 kubenswrapper[4790]: I1011 10:39:00.627498 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-netd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627581 master-0 kubenswrapper[4790]: I1011 10:39:00.627534 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-os-release\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.627637 master-0 kubenswrapper[4790]: I1011 10:39:00.627601 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bzq\" (UniqueName: \"kubernetes.io/projected/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-kube-api-access-p5bzq\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv"
Oct 11 10:39:00.627744 master-0 kubenswrapper[4790]: I1011 10:39:00.627695 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-bin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.627788 master-0 kubenswrapper[4790]: I1011 10:39:00.627754 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-conf-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.627788 master-0 kubenswrapper[4790]: I1011 10:39:00.627776 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-slash\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627845 master-0 kubenswrapper[4790]: I1011 10:39:00.627802 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-ovn\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.627845 master-0 kubenswrapper[4790]: I1011 10:39:00.627823 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-lib-modules\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.627845 master-0 kubenswrapper[4790]: I1011 10:39:00.627840 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcq86\" (UniqueName: \"kubernetes.io/projected/1076411d-ae28-46e4-97ca-9c78203e7aba-kube-api-access-kcq86\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg"
Oct 11 10:39:00.627928 master-0 kubenswrapper[4790]: I1011 10:39:00.627863 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.628065 master-0 kubenswrapper[4790]: I1011 10:39:00.627981 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-env-overrides\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld"
Oct 11 10:39:00.628152 master-0 kubenswrapper[4790]: I1011 10:39:00.628117 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-hosts-file\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv"
Oct 11 10:39:00.628226 master-0 kubenswrapper[4790]: I1011 10:39:00.628170 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.628264 master-0 kubenswrapper[4790]: I1011 10:39:00.628246 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99wx\" (UniqueName: \"kubernetes.io/projected/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-kube-api-access-l99wx\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.628346 master-0 kubenswrapper[4790]: I1011 10:39:00.628321 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-systemd\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.628417 master-0 kubenswrapper[4790]: I1011 10:39:00.628391 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-tmp\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.628495 master-0 kubenswrapper[4790]: I1011 10:39:00.628436 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cnibin\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.628536 master-0 kubenswrapper[4790]: I1011 10:39:00.628514 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptfd\" (UniqueName: \"kubernetes.io/projected/24d4b452-8f49-4e9e-98b6-3429afefc4c4-kube-api-access-mptfd\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv"
Oct 11 10:39:00.628613 master-0 kubenswrapper[4790]: I1011 10:39:00.628587 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-os-release\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.628680 master-0 kubenswrapper[4790]: I1011 10:39:00.628656 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-netns\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.628780 master-0 kubenswrapper[4790]: I1011 10:39:00.628750 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-multus\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.628864 master-0 kubenswrapper[4790]: I1011 10:39:00.628798 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zswxl\" (UniqueName: \"kubernetes.io/projected/8c1c727b-713a-4dff-ae8b-ad9b9851adae-kube-api-access-zswxl\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.628909 master-0 kubenswrapper[4790]: I1011 10:39:00.628881 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.628973 master-0 kubenswrapper[4790]: I1011 10:39:00.628949 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-bin\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629045 master-0 kubenswrapper[4790]: I1011 10:39:00.629020 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-var-lib-kubelet\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.629090 master-0 kubenswrapper[4790]: I1011 10:39:00.629062 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-netns\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629159 master-0 kubenswrapper[4790]: I1011 10:39:00.629135 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-systemd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629248 master-0 kubenswrapper[4790]: I1011 10:39:00.629224 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-env-overrides\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629309 master-0 kubenswrapper[4790]: I1011 10:39:00.629282 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovn-node-metrics-cert\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629369 master-0 kubenswrapper[4790]: I1011 10:39:00.629345 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-sys\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.629426 master-0 kubenswrapper[4790]: I1011 10:39:00.629382 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-host\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.629426 master-0 kubenswrapper[4790]: I1011 10:39:00.629413 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-tuned\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.629481 master-0 kubenswrapper[4790]: I1011 10:39:00.629446 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-kubelet\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.629510 master-0 kubenswrapper[4790]: I1011 10:39:00.629476 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-var-lib-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629537 master-0 kubenswrapper[4790]: I1011 10:39:00.629509 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-script-lib\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629663 master-0 kubenswrapper[4790]: I1011 10:39:00.629539 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-system-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q"
Oct 11 10:39:00.629663 master-0 kubenswrapper[4790]: I1011 10:39:00.629571 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-systemd-units\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629663 master-0 kubenswrapper[4790]: I1011 10:39:00.629601 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-node-log\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:00.629663 master-0 kubenswrapper[4790]: I1011 10:39:00.629629 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysconfig\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx"
Oct 11 10:39:00.629791 master-0 kubenswrapper[4790]: I1011 10:39:00.629680 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1076411d-ae28-46e4-97ca-9c78203e7aba-rootfs\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg"
Oct 11 10:39:00.629791 master-0 kubenswrapper[4790]: I1011 10:39:00.629735 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.629791 master-0 kubenswrapper[4790]: I1011 10:39:00.629781 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-multus-certs\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.629902 master-0 kubenswrapper[4790]: I1011 10:39:00.629812 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.629902 master-0 kubenswrapper[4790]: I1011 10:39:00.629856 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1076411d-ae28-46e4-97ca-9c78203e7aba-proxy-tls\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.629902 master-0 kubenswrapper[4790]: I1011 10:39:00.629888 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 
10:39:00.629998 master-0 kubenswrapper[4790]: I1011 10:39:00.629918 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5w5m\" (UniqueName: \"kubernetes.io/projected/0a2f91f6-f87a-4b69-a47a-91ca827d8386-kube-api-access-p5w5m\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.629998 master-0 kubenswrapper[4790]: I1011 10:39:00.629947 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-etc-kubernetes\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.644277 master-0 kubenswrapper[4790]: I1011 10:39:00.644227 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zcc4t"] Oct 11 10:39:00.644785 master-0 kubenswrapper[4790]: I1011 10:39:00.644669 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.644785 master-0 kubenswrapper[4790]: E1011 10:39:00.644764 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:00.645013 master-0 kubenswrapper[4790]: I1011 10:39:00.644974 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-bn2sv"] Oct 11 10:39:00.645653 master-0 kubenswrapper[4790]: I1011 10:39:00.645610 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:00.645825 master-0 kubenswrapper[4790]: E1011 10:39:00.645781 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:00.647375 master-0 kubenswrapper[4790]: I1011 10:39:00.647318 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-l66k2"] Oct 11 10:39:00.647977 master-0 kubenswrapper[4790]: I1011 10:39:00.647943 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.651823 master-0 kubenswrapper[4790]: I1011 10:39:00.651781 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 11 10:39:00.652106 master-0 kubenswrapper[4790]: I1011 10:39:00.652056 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:39:00.652402 master-0 kubenswrapper[4790]: I1011 10:39:00.652369 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:39:00.653880 master-0 kubenswrapper[4790]: I1011 10:39:00.653843 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 11 10:39:00.654013 master-0 kubenswrapper[4790]: I1011 10:39:00.653978 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-vhtwz" Oct 11 10:39:00.655527 master-0 kubenswrapper[4790]: I1011 10:39:00.655487 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657292 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"6b08c60868474c9e39e0d7cccbaaeebdd877d3e382a64aea2678c63dee8f27b9"} Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657338 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"442c453f081ca449416e62f4096c8ffc17314444a4aee0a5fb03fe752c9d03d5"} Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657350 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"ce1c9a1bb392147b36f6fb94d3eaa492b2c19737117d1cee7cef002c354e7d3f"} Oct 11 10:39:00.657584 master-2 kubenswrapper[4776]: I1011 10:39:00.657494 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:39:00.680544 master-2 kubenswrapper[4776]: I1011 10:39:00.680470 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podStartSLOduration=1.680450928 podStartE2EDuration="1.680450928s" podCreationTimestamp="2025-10-11 10:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:00.678028732 +0000 UTC m=+775.462455441" watchObservedRunningTime="2025-10-11 10:39:00.680450928 +0000 UTC m=+775.464877627" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.698934 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.698990 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 
10:39:00.699011 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699034 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699085 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: 
\"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699151 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699174 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.730982 master-0 kubenswrapper[4790]: I1011 10:39:00.730903 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-var-lib-kubelet\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.730982 master-0 kubenswrapper[4790]: I1011 10:39:00.730955 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-netns\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.730982 master-0 kubenswrapper[4790]: I1011 10:39:00.730973 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-systemd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.730982 master-0 kubenswrapper[4790]: I1011 10:39:00.730991 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-env-overrides\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731014 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovn-node-metrics-cert\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731039 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: 
\"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-root\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731057 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-textfile\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731065 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-var-lib-kubelet\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731088 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-sys\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731106 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-host\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731104 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-netns\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731184 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-systemd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731124 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-tuned\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731235 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-kubelet\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731233 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-sys\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731272 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-var-lib-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731272 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-kubelet\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731296 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-script-lib\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731317 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-system-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731323 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-var-lib-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731338 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-systemd-units\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.731343 master-0 kubenswrapper[4790]: I1011 10:39:00.731365 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-node-log\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731374 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-systemd-units\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731386 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysconfig\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731397 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-system-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731415 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-node-log\") pod 
\"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731407 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1076411d-ae28-46e4-97ca-9c78203e7aba-rootfs\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731439 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1076411d-ae28-46e4-97ca-9c78203e7aba-rootfs\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731443 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysconfig\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731461 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731510 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qvq\" (UniqueName: 
\"kubernetes.io/projected/4e2d32e6-3363-4389-ad6a-cfd917e568d2-kube-api-access-n5qvq\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731536 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-tls\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731563 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731590 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-multus-certs\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731615 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhq6\" (UniqueName: \"kubernetes.io/projected/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-kube-api-access-gnhq6\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731645 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731647 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-multus-certs\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731645 4790 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 11 10:39:00.732363 master-0 kubenswrapper[4790]: I1011 10:39:00.731771 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.731766 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1076411d-ae28-46e4-97ca-9c78203e7aba-proxy-tls\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.731815 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.731289 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-host\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.731844 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5w5m\" (UniqueName: \"kubernetes.io/projected/0a2f91f6-f87a-4b69-a47a-91ca827d8386-kube-api-access-p5w5m\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732094 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732139 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-etc-kubernetes\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732189 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e2d32e6-3363-4389-ad6a-cfd917e568d2-serviceca\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732216 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732245 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4xsk\" (UniqueName: \"kubernetes.io/projected/7d9f4c3d-57bd-49f6-94f2-47670b385318-kube-api-access-s4xsk\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732240 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-etc-kubernetes\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732268 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-modprobe-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732298 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cnibin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732314 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-env-overrides\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732369 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cnibin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732322 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-k8s-cni-cncf-io\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732447 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-k8s-cni-cncf-io\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.733377 master-0 kubenswrapper[4790]: I1011 10:39:00.732458 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-modprobe-d\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732499 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-hostroot\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732533 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-hostroot\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732535 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-wtmp\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732577 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1076411d-ae28-46e4-97ca-9c78203e7aba-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732597 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-socket-dir-parent\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732621 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-sys\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.732649 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-kubernetes\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733571 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-conf\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733632 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cni-binary-copy\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733669 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-etc-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733698 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-config\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733699 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-script-lib\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733737 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733777 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kd5\" (UniqueName: \"kubernetes.io/projected/bfe05233-94bf-4e16-8c7e-321435ba7f00-kube-api-access-d5kd5\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733806 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733841 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.734437 master-0 kubenswrapper[4790]: I1011 10:39:00.733875 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-ovnkube-identity-cm\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.733880 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-etc-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734096 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.735510 master-0 
kubenswrapper[4790]: I1011 10:39:00.733896 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-daemon-config\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734425 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1076411d-ae28-46e4-97ca-9c78203e7aba-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734465 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-sysctl-conf\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734496 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-kubelet\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734569 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-socket-dir-parent\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.735510 master-0 
kubenswrapper[4790]: I1011 10:39:00.734587 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-log-socket\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734636 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-kubelet\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734638 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734698 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-run\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734701 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-kubernetes\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.735510 master-0 
kubenswrapper[4790]: I1011 10:39:00.734758 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-system-cni-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734774 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734802 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-system-cni-dir\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734854 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-run\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.735510 master-0 kubenswrapper[4790]: I1011 10:39:00.734869 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2f91f6-f87a-4b69-a47a-91ca827d8386-webhook-cert\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " 
pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.734942 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-netd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.734976 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-daemon-config\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.734991 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f4c3d-57bd-49f6-94f2-47670b385318-metrics-client-ca\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.734888 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-log-socket\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735055 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-os-release\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " 
pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735069 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-netd\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735110 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bzq\" (UniqueName: \"kubernetes.io/projected/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-kube-api-access-p5bzq\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735165 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-bin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735196 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-os-release\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735205 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-conf-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " 
pod="openshift-multus/multus-r499q" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735262 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-slash\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735299 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-bin\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735319 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-conf-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735337 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-ovn\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735384 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-slash\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 
10:39:00.735263 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735408 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-lib-modules\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.736448 master-0 kubenswrapper[4790]: I1011 10:39:00.735451 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-ovn\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735466 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcq86\" (UniqueName: \"kubernetes.io/projected/1076411d-ae28-46e4-97ca-9c78203e7aba-kube-api-access-kcq86\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735506 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: 
I1011 10:39:00.735574 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-lib-modules\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735614 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-env-overrides\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735619 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovnkube-config\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735690 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-multus-cni-dir\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735694 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-hosts-file\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735623 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-ovnkube-identity-cm\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735809 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735621 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c1c727b-713a-4dff-ae8b-ad9b9851adae-cni-binary-copy\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735867 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99wx\" (UniqueName: \"kubernetes.io/projected/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-kube-api-access-l99wx\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735882 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-hosts-file\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735918 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-systemd\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.735936 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-run-openvswitch\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.736076 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-tmp\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.736091 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-systemd\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.737422 master-0 kubenswrapper[4790]: I1011 10:39:00.736104 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a2f91f6-f87a-4b69-a47a-91ca827d8386-env-overrides\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736265 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cnibin\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736295 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mptfd\" (UniqueName: \"kubernetes.io/projected/24d4b452-8f49-4e9e-98b6-3429afefc4c4-kube-api-access-mptfd\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736459 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-os-release\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736532 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-netns\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736606 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-multus\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736796 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cnibin\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736876 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zswxl\" (UniqueName: \"kubernetes.io/projected/8c1c727b-713a-4dff-ae8b-ad9b9851adae-kube-api-access-zswxl\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736921 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736973 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-bin\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.736977 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-var-lib-cni-multus\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737092 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-os-release\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737112 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-cni-bin\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737124 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/24d4b452-8f49-4e9e-98b6-3429afefc4c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737236 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e2d32e6-3363-4389-ad6a-cfd917e568d2-host\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737164 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-host-run-ovn-kubernetes\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.738581 master-0 kubenswrapper[4790]: I1011 10:39:00.737168 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8c1c727b-713a-4dff-ae8b-ad9b9851adae-host-run-netns\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.739601 master-0 kubenswrapper[4790]: I1011 10:39:00.739008 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-ovn-node-metrics-cert\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.739730 master-0 kubenswrapper[4790]: I1011 10:39:00.739019 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-etc-tuned\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.741144 master-0 kubenswrapper[4790]: I1011 10:39:00.741089 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1076411d-ae28-46e4-97ca-9c78203e7aba-proxy-tls\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.741886 master-0 kubenswrapper[4790]: I1011 10:39:00.741809 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2f91f6-f87a-4b69-a47a-91ca827d8386-webhook-cert\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.742080 master-0 kubenswrapper[4790]: I1011 10:39:00.742022 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/bfe05233-94bf-4e16-8c7e-321435ba7f00-tmp\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.761041 master-1 kubenswrapper[4771]: I1011 10:39:00.760963 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6f2m\" (UniqueName: \"kubernetes.io/projected/f621f971-6560-4be2-b36c-307a440c0769-kube-api-access-z6f2m\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.761465 master-1 kubenswrapper[4771]: I1011 10:39:00.761395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f621f971-6560-4be2-b36c-307a440c0769-host\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.761554 master-1 kubenswrapper[4771]: I1011 10:39:00.761514 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f621f971-6560-4be2-b36c-307a440c0769-serviceca\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.762434 master-0 kubenswrapper[4790]: I1011 10:39:00.762375 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5w5m\" (UniqueName: \"kubernetes.io/projected/0a2f91f6-f87a-4b69-a47a-91ca827d8386-kube-api-access-p5w5m\") pod \"network-node-identity-kh4ld\" (UID: \"0a2f91f6-f87a-4b69-a47a-91ca827d8386\") " pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.764985 master-0 kubenswrapper[4790]: I1011 10:39:00.764933 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcq86\" 
(UniqueName: \"kubernetes.io/projected/1076411d-ae28-46e4-97ca-9c78203e7aba-kube-api-access-kcq86\") pod \"machine-config-daemon-8lkdg\" (UID: \"1076411d-ae28-46e4-97ca-9c78203e7aba\") " pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.766899 master-0 kubenswrapper[4790]: I1011 10:39:00.766840 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bzq\" (UniqueName: \"kubernetes.io/projected/00e9cb61-65c4-4e6a-bb0c-2428529c63bf-kube-api-access-p5bzq\") pod \"node-resolver-5kghv\" (UID: \"00e9cb61-65c4-4e6a-bb0c-2428529c63bf\") " pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.767235 master-0 kubenswrapper[4790]: I1011 10:39:00.767185 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mptfd\" (UniqueName: \"kubernetes.io/projected/24d4b452-8f49-4e9e-98b6-3429afefc4c4-kube-api-access-mptfd\") pod \"multus-additional-cni-plugins-ft6fv\" (UID: \"24d4b452-8f49-4e9e-98b6-3429afefc4c4\") " pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.767636 master-0 kubenswrapper[4790]: I1011 10:39:00.767597 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99wx\" (UniqueName: \"kubernetes.io/projected/417d5cfd-0cf3-4d96-b901-fcfe4f742ca5-kube-api-access-l99wx\") pod \"ovnkube-node-96nq6\" (UID: \"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5\") " pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.770866 master-0 kubenswrapper[4790]: I1011 10:39:00.770825 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zswxl\" (UniqueName: \"kubernetes.io/projected/8c1c727b-713a-4dff-ae8b-ad9b9851adae-kube-api-access-zswxl\") pod \"multus-r499q\" (UID: \"8c1c727b-713a-4dff-ae8b-ad9b9851adae\") " pod="openshift-multus/multus-r499q" Oct 11 10:39:00.772259 master-0 kubenswrapper[4790]: I1011 10:39:00.772202 4790 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-d5kd5\" (UniqueName: \"kubernetes.io/projected/bfe05233-94bf-4e16-8c7e-321435ba7f00-kube-api-access-d5kd5\") pod \"tuned-85bvx\" (UID: \"bfe05233-94bf-4e16-8c7e-321435ba7f00\") " pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.800855 master-2 kubenswrapper[4776]: I1011 10:39:00.800773 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.800855 master-2 kubenswrapper[4776]: I1011 10:39:00.800852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800891 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800995 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801018 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801048 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801095 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 kubenswrapper[4776]: I1011 10:39:00.801122 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 kubenswrapper[4776]: I1011 
10:39:00.801153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 kubenswrapper[4776]: I1011 10:39:00.801185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801456 master-2 kubenswrapper[4776]: I1011 10:39:00.801220 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801560 master-2 kubenswrapper[4776]: I1011 10:39:00.801224 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.802054 master-2 kubenswrapper[4776]: I1011 10:39:00.802018 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.802148 master-2 kubenswrapper[4776]: I1011 10:39:00.802116 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.802754 master-2 kubenswrapper[4776]: I1011 10:39:00.802649 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.803157 master-2 kubenswrapper[4776]: I1011 10:39:00.803129 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804585 master-2 kubenswrapper[4776]: I1011 10:39:00.804538 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804653 master-2 kubenswrapper[4776]: I1011 10:39:00.804625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804716 master-2 kubenswrapper[4776]: I1011 10:39:00.804641 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.821703 master-2 kubenswrapper[4776]: I1011 10:39:00.821580 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.824170 master-2 kubenswrapper[4776]: I1011 10:39:00.824122 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.837843 master-0 kubenswrapper[4790]: I1011 10:39:00.837770 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.837843 master-0 kubenswrapper[4790]: I1011 10:39:00.837847 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qvq\" (UniqueName: \"kubernetes.io/projected/4e2d32e6-3363-4389-ad6a-cfd917e568d2-kube-api-access-n5qvq\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: 
I1011 10:39:00.837871 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-tls\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.837889 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhq6\" (UniqueName: \"kubernetes.io/projected/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-kube-api-access-gnhq6\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.837907 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e2d32e6-3363-4389-ad6a-cfd917e568d2-serviceca\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.837926 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.837944 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4xsk\" (UniqueName: \"kubernetes.io/projected/7d9f4c3d-57bd-49f6-94f2-47670b385318-kube-api-access-s4xsk\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 
10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.837991 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-wtmp\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.838021 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-sys\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.838076 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.838100 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f4c3d-57bd-49f6-94f2-47670b385318-metrics-client-ca\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 10:39:00.838166 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e2d32e6-3363-4389-ad6a-cfd917e568d2-host\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.838174 master-0 kubenswrapper[4790]: I1011 
10:39:00.838191 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-root\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838796 master-0 kubenswrapper[4790]: I1011 10:39:00.838211 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-textfile\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838796 master-0 kubenswrapper[4790]: I1011 10:39:00.838452 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-wtmp\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.838796 master-0 kubenswrapper[4790]: I1011 10:39:00.838638 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e2d32e6-3363-4389-ad6a-cfd917e568d2-host\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.838900 master-0 kubenswrapper[4790]: E1011 10:39:00.838872 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:00.839000 master-0 kubenswrapper[4790]: I1011 10:39:00.838931 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-sys\") pod \"node-exporter-l66k2\" (UID: 
\"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.839000 master-0 kubenswrapper[4790]: E1011 10:39:00.838950 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:01.338926142 +0000 UTC m=+17.893386664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:00.839124 master-0 kubenswrapper[4790]: I1011 10:39:00.839012 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e2d32e6-3363-4389-ad6a-cfd917e568d2-serviceca\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.839124 master-0 kubenswrapper[4790]: I1011 10:39:00.839031 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d9f4c3d-57bd-49f6-94f2-47670b385318-root\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.839124 master-0 kubenswrapper[4790]: I1011 10:39:00.839100 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-textfile\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.840167 master-0 kubenswrapper[4790]: I1011 10:39:00.840121 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f4c3d-57bd-49f6-94f2-47670b385318-metrics-client-ca\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.841774 master-0 kubenswrapper[4790]: I1011 10:39:00.841725 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.841994 master-0 kubenswrapper[4790]: I1011 10:39:00.841838 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d9f4c3d-57bd-49f6-94f2-47670b385318-node-exporter-tls\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.858262 master-0 kubenswrapper[4790]: I1011 10:39:00.858225 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qvq\" (UniqueName: \"kubernetes.io/projected/4e2d32e6-3363-4389-ad6a-cfd917e568d2-kube-api-access-n5qvq\") pod \"node-ca-g99cx\" (UID: \"4e2d32e6-3363-4389-ad6a-cfd917e568d2\") " pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:00.863265 master-1 kubenswrapper[4771]: I1011 10:39:00.863187 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f621f971-6560-4be2-b36c-307a440c0769-host\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.863500 master-1 kubenswrapper[4771]: I1011 10:39:00.863311 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f621f971-6560-4be2-b36c-307a440c0769-serviceca\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.863500 master-1 kubenswrapper[4771]: I1011 10:39:00.863383 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f621f971-6560-4be2-b36c-307a440c0769-host\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.864447 master-1 kubenswrapper[4771]: I1011 10:39:00.864403 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f621f971-6560-4be2-b36c-307a440c0769-serviceca\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.864717 master-1 kubenswrapper[4771]: I1011 10:39:00.864682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6f2m\" (UniqueName: \"kubernetes.io/projected/f621f971-6560-4be2-b36c-307a440c0769-kube-api-access-z6f2m\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.870561 master-0 kubenswrapper[4790]: I1011 10:39:00.870512 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhq6\" (UniqueName: \"kubernetes.io/projected/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-kube-api-access-gnhq6\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:00.878053 master-0 kubenswrapper[4790]: E1011 10:39:00.877990 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:39:00.878053 master-0 kubenswrapper[4790]: E1011 10:39:00.878025 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:39:00.878053 master-0 kubenswrapper[4790]: E1011 10:39:00.878042 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:00.878270 master-0 kubenswrapper[4790]: E1011 10:39:00.878103 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:01.378083387 +0000 UTC m=+17.932543689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:00.882207 master-0 kubenswrapper[4790]: I1011 10:39:00.882159 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5kghv" Oct 11 10:39:00.885464 master-2 kubenswrapper[4776]: I1011 10:39:00.885357 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.885545 master-0 kubenswrapper[4790]: I1011 10:39:00.885491 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4xsk\" (UniqueName: \"kubernetes.io/projected/7d9f4c3d-57bd-49f6-94f2-47670b385318-kube-api-access-s4xsk\") pod \"node-exporter-l66k2\" (UID: \"7d9f4c3d-57bd-49f6-94f2-47670b385318\") " pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:00.889685 master-0 kubenswrapper[4790]: I1011 10:39:00.889625 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" Oct 11 10:39:00.890656 master-1 kubenswrapper[4771]: I1011 10:39:00.890560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6f2m\" (UniqueName: \"kubernetes.io/projected/f621f971-6560-4be2-b36c-307a440c0769-kube-api-access-z6f2m\") pod \"node-ca-kntdb\" (UID: \"f621f971-6560-4be2-b36c-307a440c0769\") " pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.906416 master-0 kubenswrapper[4790]: I1011 10:39:00.906356 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-85bvx" Oct 11 10:39:00.907238 master-0 kubenswrapper[4790]: W1011 10:39:00.906925 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1076411d_ae28_46e4_97ca_9c78203e7aba.slice/crio-3ce18fa8f5bbc0577ffd95d727f9818328c95a8e3e9dee18bd6c5f018241a8f5 WatchSource:0}: Error finding container 3ce18fa8f5bbc0577ffd95d727f9818328c95a8e3e9dee18bd6c5f018241a8f5: Status 404 returned error can't find the container with id 3ce18fa8f5bbc0577ffd95d727f9818328c95a8e3e9dee18bd6c5f018241a8f5 Oct 11 10:39:00.918304 master-0 kubenswrapper[4790]: W1011 10:39:00.918238 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfe05233_94bf_4e16_8c7e_321435ba7f00.slice/crio-6ec1ae3f7ba3b832a6f74bed193851690928b4c9fb1f1e5b2c927a922be00ea8 WatchSource:0}: Error finding container 6ec1ae3f7ba3b832a6f74bed193851690928b4c9fb1f1e5b2c927a922be00ea8: Status 404 returned error can't find the container with id 6ec1ae3f7ba3b832a6f74bed193851690928b4c9fb1f1e5b2c927a922be00ea8 Oct 11 10:39:00.934388 master-0 kubenswrapper[4790]: I1011 10:39:00.934350 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r499q" Oct 11 10:39:00.935236 master-1 kubenswrapper[4771]: I1011 10:39:00.935165 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kntdb" Oct 11 10:39:00.938414 master-2 kubenswrapper[4776]: I1011 10:39:00.934274 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.950457 master-1 kubenswrapper[4771]: W1011 10:39:00.950410 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf621f971_6560_4be2_b36c_307a440c0769.slice/crio-180bb5a314b1530c0f87a385216eb06130a4145266776e64ae7491dd6e872065 WatchSource:0}: Error finding container 180bb5a314b1530c0f87a385216eb06130a4145266776e64ae7491dd6e872065: Status 404 returned error can't find the container with id 180bb5a314b1530c0f87a385216eb06130a4145266776e64ae7491dd6e872065 Oct 11 10:39:00.956892 master-0 kubenswrapper[4790]: I1011 10:39:00.956826 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" Oct 11 10:39:00.958824 master-0 kubenswrapper[4790]: W1011 10:39:00.958747 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c1c727b_713a_4dff_ae8b_ad9b9851adae.slice/crio-8b87d6351e36c46ff6fb7661e0cca8b56ff438053d98cd170a47fe2544b6bf78 WatchSource:0}: Error finding container 8b87d6351e36c46ff6fb7661e0cca8b56ff438053d98cd170a47fe2544b6bf78: Status 404 returned error can't find the container with id 8b87d6351e36c46ff6fb7661e0cca8b56ff438053d98cd170a47fe2544b6bf78 Oct 11 10:39:00.983778 master-0 kubenswrapper[4790]: I1011 10:39:00.983701 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kh4ld" Oct 11 10:39:00.991835 master-0 kubenswrapper[4790]: I1011 10:39:00.991666 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:00.996604 master-0 kubenswrapper[4790]: W1011 10:39:00.996143 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2f91f6_f87a_4b69_a47a_91ca827d8386.slice/crio-99c07ccfcdb0c1c10e6734b3cff3a1985e39f219858ea25caacb4c0621d10a80 WatchSource:0}: Error finding container 99c07ccfcdb0c1c10e6734b3cff3a1985e39f219858ea25caacb4c0621d10a80: Status 404 returned error can't find the container with id 99c07ccfcdb0c1c10e6734b3cff3a1985e39f219858ea25caacb4c0621d10a80 Oct 11 10:39:01.008497 master-0 kubenswrapper[4790]: W1011 10:39:01.008429 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod417d5cfd_0cf3_4d96_b901_fcfe4f742ca5.slice/crio-3d8ef793e5a5ca476c1f291d9d684fa9ca870e0fff4eff886589819a876c33d6 WatchSource:0}: Error finding container 3d8ef793e5a5ca476c1f291d9d684fa9ca870e0fff4eff886589819a876c33d6: Status 404 returned error can't find the container with id 3d8ef793e5a5ca476c1f291d9d684fa9ca870e0fff4eff886589819a876c33d6 Oct 11 10:39:01.015683 master-0 kubenswrapper[4790]: I1011 10:39:01.015576 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-g99cx" Oct 11 10:39:01.020947 master-0 kubenswrapper[4790]: I1011 10:39:01.020871 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-l66k2" Oct 11 10:39:01.035402 master-0 kubenswrapper[4790]: W1011 10:39:01.035329 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e2d32e6_3363_4389_ad6a_cfd917e568d2.slice/crio-5e845fde7720879c7db8961ea19d160365f000111cc54971bf7e8c7757b7e0a9 WatchSource:0}: Error finding container 5e845fde7720879c7db8961ea19d160365f000111cc54971bf7e8c7757b7e0a9: Status 404 returned error can't find the container with id 5e845fde7720879c7db8961ea19d160365f000111cc54971bf7e8c7757b7e0a9 Oct 11 10:39:01.036779 master-0 kubenswrapper[4790]: W1011 10:39:01.036418 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d9f4c3d_57bd_49f6_94f2_47670b385318.slice/crio-bf65362b1aa13d897033e36adeefa4f842093b6cac0a70f775f250b0d47f7b3e WatchSource:0}: Error finding container bf65362b1aa13d897033e36adeefa4f842093b6cac0a70f775f250b0d47f7b3e: Status 404 returned error can't find the container with id bf65362b1aa13d897033e36adeefa4f842093b6cac0a70f775f250b0d47f7b3e Oct 11 10:39:01.328096 master-2 kubenswrapper[4776]: I1011 10:39:01.328025 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:39:01.343428 master-0 kubenswrapper[4790]: I1011 10:39:01.343344 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:01.344365 master-0 kubenswrapper[4790]: E1011 10:39:01.343596 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 
10:39:01.344365 master-0 kubenswrapper[4790]: E1011 10:39:01.343686 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:02.343656449 +0000 UTC m=+18.898116741 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:01.348069 master-0 kubenswrapper[4790]: I1011 10:39:01.348001 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5kghv" event={"ID":"00e9cb61-65c4-4e6a-bb0c-2428529c63bf","Type":"ContainerStarted","Data":"6b840ff8900b85e6283f49aa581413b60cfc22d11df8f95161f47cca0d1657d7"} Oct 11 10:39:01.350732 master-0 kubenswrapper[4790]: I1011 10:39:01.350627 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r499q" event={"ID":"8c1c727b-713a-4dff-ae8b-ad9b9851adae","Type":"ContainerStarted","Data":"8b87d6351e36c46ff6fb7661e0cca8b56ff438053d98cd170a47fe2544b6bf78"} Oct 11 10:39:01.352754 master-0 kubenswrapper[4790]: I1011 10:39:01.352669 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l66k2" event={"ID":"7d9f4c3d-57bd-49f6-94f2-47670b385318","Type":"ContainerStarted","Data":"bf65362b1aa13d897033e36adeefa4f842093b6cac0a70f775f250b0d47f7b3e"} Oct 11 10:39:01.354427 master-0 kubenswrapper[4790]: I1011 10:39:01.354365 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g99cx" event={"ID":"4e2d32e6-3363-4389-ad6a-cfd917e568d2","Type":"ContainerStarted","Data":"5e845fde7720879c7db8961ea19d160365f000111cc54971bf7e8c7757b7e0a9"} Oct 11 10:39:01.355736 
master-0 kubenswrapper[4790]: I1011 10:39:01.355630 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"3d8ef793e5a5ca476c1f291d9d684fa9ca870e0fff4eff886589819a876c33d6"} Oct 11 10:39:01.356818 master-0 kubenswrapper[4790]: I1011 10:39:01.356771 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kh4ld" event={"ID":"0a2f91f6-f87a-4b69-a47a-91ca827d8386","Type":"ContainerStarted","Data":"99c07ccfcdb0c1c10e6734b3cff3a1985e39f219858ea25caacb4c0621d10a80"} Oct 11 10:39:01.358084 master-0 kubenswrapper[4790]: I1011 10:39:01.357929 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-85bvx" event={"ID":"bfe05233-94bf-4e16-8c7e-321435ba7f00","Type":"ContainerStarted","Data":"6ec1ae3f7ba3b832a6f74bed193851690928b4c9fb1f1e5b2c927a922be00ea8"} Oct 11 10:39:01.360400 master-0 kubenswrapper[4790]: I1011 10:39:01.360330 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" event={"ID":"1076411d-ae28-46e4-97ca-9c78203e7aba","Type":"ContainerStarted","Data":"29d9bf6586931cd43550ae895256ff3093100c55fe6b5e2843d696b112b149af"} Oct 11 10:39:01.360400 master-0 kubenswrapper[4790]: I1011 10:39:01.360358 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" event={"ID":"1076411d-ae28-46e4-97ca-9c78203e7aba","Type":"ContainerStarted","Data":"0620cf16da36928a13e95e840388b3e5cfd335c5d5355474658f54b96c75c15f"} Oct 11 10:39:01.360400 master-0 kubenswrapper[4790]: I1011 10:39:01.360369 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" 
event={"ID":"1076411d-ae28-46e4-97ca-9c78203e7aba","Type":"ContainerStarted","Data":"3ce18fa8f5bbc0577ffd95d727f9818328c95a8e3e9dee18bd6c5f018241a8f5"} Oct 11 10:39:01.362141 master-0 kubenswrapper[4790]: I1011 10:39:01.362085 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerStarted","Data":"b95a93cafe5555685cb6e03ef19e23795847d7899f10c93f94dfce6df82aba47"} Oct 11 10:39:01.391622 master-0 kubenswrapper[4790]: I1011 10:39:01.391494 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8lkdg" podStartSLOduration=11.391470627 podStartE2EDuration="11.391470627s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:01.391239111 +0000 UTC m=+17.945699433" watchObservedRunningTime="2025-10-11 10:39:01.391470627 +0000 UTC m=+17.945930919" Oct 11 10:39:01.444597 master-0 kubenswrapper[4790]: I1011 10:39:01.444378 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:01.445076 master-0 kubenswrapper[4790]: E1011 10:39:01.444653 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:39:01.445076 master-0 kubenswrapper[4790]: E1011 10:39:01.444736 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:39:01.445076 master-0 kubenswrapper[4790]: E1011 10:39:01.444761 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:01.445076 master-0 kubenswrapper[4790]: E1011 10:39:01.444866 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:02.444835827 +0000 UTC m=+18.999296159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:01.475053 master-1 kubenswrapper[4771]: I1011 10:39:01.474961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:39:01.475307 master-1 kubenswrapper[4771]: E1011 10:39:01.475169 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker podName:d7647696-42d9-4dd9-bc3b-a4d52a42cf9a nodeName:}" 
failed. No retries permitted until 2025-10-11 10:41:03.475142578 +0000 UTC m=+895.449369029 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-bqdlc" (UID: "d7647696-42d9-4dd9-bc3b-a4d52a42cf9a") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:39:01.576818 master-1 kubenswrapper[4771]: I1011 10:39:01.576714 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:39:01.577081 master-1 kubenswrapper[4771]: E1011 10:39:01.576929 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker podName:6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b nodeName:}" failed. No retries permitted until 2025-10-11 10:41:03.576905376 +0000 UTC m=+895.551131817 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-tpzsm" (UID: "6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b") : hostPath type check failed: /etc/docker is not a directory Oct 11 10:39:01.665800 master-2 kubenswrapper[4776]: I1011 10:39:01.665741 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl6f8" event={"ID":"b1a4fd85-5da5-4697-b524-a68be3d018cf","Type":"ContainerStarted","Data":"ce1be7e853a84ffbeb127e872f1d29dc22c8f25bb1faf113830334a76d8ee276"} Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668097 4776 generic.go:334] "Generic (PLEG): container finished" podID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" exitCode=0 Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668163 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960"} Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668188 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerStarted","Data":"4696d703bfc528a3bf9bd99fc217e6dc2e1faa3cb905d36cd446e1df3ecf761e"} Oct 11 10:39:01.934323 master-0 kubenswrapper[4790]: I1011 10:39:01.934255 4790 ???:1] "http: TLS handshake error from 192.168.34.12:42632: no serving certificate available for the kubelet" Oct 11 10:39:01.947485 master-1 kubenswrapper[4771]: I1011 10:39:01.947415 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kntdb" 
event={"ID":"f621f971-6560-4be2-b36c-307a440c0769","Type":"ContainerStarted","Data":"180bb5a314b1530c0f87a385216eb06130a4145266776e64ae7491dd6e872065"} Oct 11 10:39:02.292050 master-0 kubenswrapper[4790]: I1011 10:39:02.292003 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:02.292181 master-0 kubenswrapper[4790]: E1011 10:39:02.292149 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:02.293152 master-0 kubenswrapper[4790]: I1011 10:39:02.292424 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:02.293152 master-0 kubenswrapper[4790]: I1011 10:39:02.292601 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:02.293152 master-0 kubenswrapper[4790]: E1011 10:39:02.293064 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:02.293348 master-0 kubenswrapper[4790]: E1011 10:39:02.293285 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-5-master-0" podUID="795a4c8d-2d06-412c-a788-7e8585d432f7" Oct 11 10:39:02.353105 master-0 kubenswrapper[4790]: I1011 10:39:02.353046 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:02.353980 master-0 kubenswrapper[4790]: E1011 10:39:02.353258 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:02.353980 master-0 kubenswrapper[4790]: E1011 10:39:02.353350 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:04.35332329 +0000 UTC m=+20.907783582 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:02.453761 master-0 kubenswrapper[4790]: I1011 10:39:02.453411 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:02.453761 master-0 kubenswrapper[4790]: E1011 10:39:02.453656 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:39:02.453761 master-0 kubenswrapper[4790]: E1011 10:39:02.453697 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:39:02.453761 master-0 kubenswrapper[4790]: E1011 10:39:02.453726 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:02.453761 master-0 kubenswrapper[4790]: E1011 10:39:02.453795 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:39:04.453774319 +0000 UTC m=+21.008234611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:02.580133 master-1 kubenswrapper[4771]: I1011 10:39:02.580067 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:39:02.580466 master-1 kubenswrapper[4771]: I1011 10:39:02.580139 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:39:02.675075 master-2 kubenswrapper[4776]: I1011 10:39:02.675017 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerStarted","Data":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} Oct 11 10:39:02.705469 master-2 kubenswrapper[4776]: I1011 10:39:02.705373 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podStartSLOduration=75.705351278 podStartE2EDuration="1m15.705351278s" podCreationTimestamp="2025-10-11 10:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-11 10:39:02.700917988 +0000 UTC m=+777.485344727" watchObservedRunningTime="2025-10-11 10:39:02.705351278 +0000 UTC m=+777.489777987" Oct 11 10:39:03.399898 master-0 kubenswrapper[4790]: I1011 10:39:03.399826 4790 generic.go:334] "Generic (PLEG): container finished" podID="7d9f4c3d-57bd-49f6-94f2-47670b385318" containerID="1dcd42b41b2999d5139ed007f6535fac65ea9418fe9313b04e66f09cfb1775ec" exitCode=0 Oct 11 10:39:03.399898 master-0 kubenswrapper[4790]: I1011 10:39:03.399900 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l66k2" event={"ID":"7d9f4c3d-57bd-49f6-94f2-47670b385318","Type":"ContainerDied","Data":"1dcd42b41b2999d5139ed007f6535fac65ea9418fe9313b04e66f09cfb1775ec"} Oct 11 10:39:03.681177 master-2 kubenswrapper[4776]: I1011 10:39:03.681118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl6f8" event={"ID":"b1a4fd85-5da5-4697-b524-a68be3d018cf","Type":"ContainerStarted","Data":"4665e07708c514b802cdfed7903bdfb649781ab12f1d4e332f7c69e8568925eb"} Oct 11 10:39:03.716196 master-2 kubenswrapper[4776]: I1011 10:39:03.716128 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jl6f8" podStartSLOduration=12.830241438 podStartE2EDuration="14.716112191s" podCreationTimestamp="2025-10-11 10:38:49 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.987960502 +0000 UTC m=+775.772387211" lastFinishedPulling="2025-10-11 10:39:02.873831245 +0000 UTC m=+777.658257964" observedRunningTime="2025-10-11 10:39:03.713627394 +0000 UTC m=+778.498054103" watchObservedRunningTime="2025-10-11 10:39:03.716112191 +0000 UTC m=+778.500538900" Oct 11 10:39:03.808213 master-2 kubenswrapper[4776]: I1011 10:39:03.808157 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:39:03.894939 master-1 kubenswrapper[4771]: 
E1011 10:39:03.894851 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podUID="d7647696-42d9-4dd9-bc3b-a4d52a42cf9a" Oct 11 10:39:03.895803 master-1 kubenswrapper[4771]: E1011 10:39:03.895181 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podUID="6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b" Oct 11 10:39:03.963642 master-1 kubenswrapper[4771]: I1011 10:39:03.963546 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kntdb" event={"ID":"f621f971-6560-4be2-b36c-307a440c0769","Type":"ContainerStarted","Data":"d6296cb88992a02f56ed761cb9e4574f8959587dafa90daed9fe2b15ced3e3a0"} Oct 11 10:39:03.963642 master-1 kubenswrapper[4771]: I1011 10:39:03.963590 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:39:03.963918 master-1 kubenswrapper[4771]: I1011 10:39:03.963715 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:39:04.193219 master-2 kubenswrapper[4776]: I1011 10:39:04.193163 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:04.193584 master-2 kubenswrapper[4776]: I1011 10:39:04.193244 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:04.291969 master-0 kubenswrapper[4790]: I1011 10:39:04.291838 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:04.292966 master-0 kubenswrapper[4790]: I1011 10:39:04.292919 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:04.293107 master-0 kubenswrapper[4790]: I1011 10:39:04.292973 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:04.293107 master-0 kubenswrapper[4790]: E1011 10:39:04.293055 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-5-master-0" podUID="795a4c8d-2d06-412c-a788-7e8585d432f7" Oct 11 10:39:04.293270 master-0 kubenswrapper[4790]: E1011 10:39:04.293150 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:04.293270 master-0 kubenswrapper[4790]: I1011 10:39:04.293238 4790 scope.go:117] "RemoveContainer" containerID="0dce5d6b10f720448939fdc7754f0953d8f5becf80952889b528531812732245" Oct 11 10:39:04.293431 master-0 kubenswrapper[4790]: E1011 10:39:04.293279 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:04.369558 master-0 kubenswrapper[4790]: I1011 10:39:04.369415 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") pod \"installer-5-master-0\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:04.369558 master-0 kubenswrapper[4790]: I1011 10:39:04.369472 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:04.370073 master-0 kubenswrapper[4790]: E1011 10:39:04.369612 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:04.370073 master-0 kubenswrapper[4790]: E1011 10:39:04.369679 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:04.370073 master-0 kubenswrapper[4790]: E1011 10:39:04.369744 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-5-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:04.370073 master-0 kubenswrapper[4790]: E1011 10:39:04.369689 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:08.369667806 +0000 UTC m=+24.924128098 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:04.370073 master-0 kubenswrapper[4790]: E1011 10:39:04.369843 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access podName:795a4c8d-2d06-412c-a788-7e8585d432f7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:12.36981668 +0000 UTC m=+28.924277032 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access") pod "installer-5-master-0" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:04.470422 master-0 kubenswrapper[4790]: I1011 10:39:04.470307 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:04.471531 master-0 kubenswrapper[4790]: E1011 10:39:04.470545 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:39:04.471531 master-0 kubenswrapper[4790]: E1011 10:39:04.470583 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:39:04.471531 master-0 kubenswrapper[4790]: E1011 10:39:04.470597 4790 projected.go:194] 
Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:04.471531 master-0 kubenswrapper[4790]: E1011 10:39:04.470675 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:08.470653169 +0000 UTC m=+25.025113451 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:05.278434 master-0 kubenswrapper[4790]: I1011 10:39:05.278375 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-0"] Oct 11 10:39:05.278918 master-0 kubenswrapper[4790]: I1011 10:39:05.278698 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:05.288019 master-0 kubenswrapper[4790]: I1011 10:39:05.287989 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:05.378135 master-0 kubenswrapper[4790]: I1011 10:39:05.378079 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir\") pod \"795a4c8d-2d06-412c-a788-7e8585d432f7\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " Oct 11 10:39:05.378135 master-0 kubenswrapper[4790]: I1011 10:39:05.378161 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock\") pod \"795a4c8d-2d06-412c-a788-7e8585d432f7\" (UID: \"795a4c8d-2d06-412c-a788-7e8585d432f7\") " Oct 11 10:39:05.378456 master-0 kubenswrapper[4790]: I1011 10:39:05.378229 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "795a4c8d-2d06-412c-a788-7e8585d432f7" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:05.378456 master-0 kubenswrapper[4790]: I1011 10:39:05.378299 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock" (OuterVolumeSpecName: "var-lock") pod "795a4c8d-2d06-412c-a788-7e8585d432f7" (UID: "795a4c8d-2d06-412c-a788-7e8585d432f7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:05.378521 master-0 kubenswrapper[4790]: I1011 10:39:05.378486 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:05.378521 master-0 kubenswrapper[4790]: I1011 10:39:05.378515 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/795a4c8d-2d06-412c-a788-7e8585d432f7-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:05.405098 master-0 kubenswrapper[4790]: I1011 10:39:05.405051 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-0" Oct 11 10:39:05.494187 master-0 kubenswrapper[4790]: I1011 10:39:05.494139 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-0"] Oct 11 10:39:05.503602 master-0 kubenswrapper[4790]: I1011 10:39:05.503557 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-5-master-0"] Oct 11 10:39:05.580802 master-0 kubenswrapper[4790]: I1011 10:39:05.580743 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/795a4c8d-2d06-412c-a788-7e8585d432f7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:05.885937 master-2 kubenswrapper[4776]: I1011 10:39:05.885862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:05.886530 master-2 kubenswrapper[4776]: I1011 10:39:05.886188 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:05.894124 master-2 kubenswrapper[4776]: I1011 10:39:05.894090 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:06.291917 master-0 kubenswrapper[4790]: I1011 10:39:06.291834 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:06.292155 master-0 kubenswrapper[4790]: I1011 10:39:06.291837 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:06.292155 master-0 kubenswrapper[4790]: E1011 10:39:06.292056 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:06.292252 master-0 kubenswrapper[4790]: E1011 10:39:06.292163 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:06.701170 master-2 kubenswrapper[4776]: I1011 10:39:06.701085 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:08.292174 master-0 kubenswrapper[4790]: I1011 10:39:08.292098 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:08.292803 master-0 kubenswrapper[4790]: I1011 10:39:08.292094 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:08.292803 master-0 kubenswrapper[4790]: E1011 10:39:08.292251 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:08.292803 master-0 kubenswrapper[4790]: E1011 10:39:08.292423 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:08.408264 master-0 kubenswrapper[4790]: I1011 10:39:08.408215 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:08.408631 master-0 kubenswrapper[4790]: E1011 10:39:08.408480 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:08.408631 master-0 kubenswrapper[4790]: E1011 10:39:08.408592 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:39:16.408564357 +0000 UTC m=+32.963024819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:39:08.509421 master-0 kubenswrapper[4790]: I1011 10:39:08.509359 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:08.509672 master-0 kubenswrapper[4790]: E1011 10:39:08.509625 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:39:08.509672 master-0 kubenswrapper[4790]: E1011 10:39:08.509650 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:39:08.509672 master-0 kubenswrapper[4790]: E1011 10:39:08.509667 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:08.509787 master-0 kubenswrapper[4790]: E1011 10:39:08.509754 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" 
failed. No retries permitted until 2025-10-11 10:39:16.509734084 +0000 UTC m=+33.064194376 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:39:08.667204 master-0 kubenswrapper[4790]: I1011 10:39:08.667153 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-6-master-0"] Oct 11 10:39:08.667564 master-0 kubenswrapper[4790]: I1011 10:39:08.667545 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.667680 master-0 kubenswrapper[4790]: E1011 10:39:08.667645 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57" Oct 11 10:39:08.812696 master-0 kubenswrapper[4790]: I1011 10:39:08.812262 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.812696 master-0 kubenswrapper[4790]: I1011 10:39:08.812699 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.813075 master-0 kubenswrapper[4790]: I1011 10:39:08.812763 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.914012 master-0 kubenswrapper[4790]: I1011 10:39:08.913894 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.914012 master-0 kubenswrapper[4790]: I1011 10:39:08.914017 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: 
\"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.914349 master-0 kubenswrapper[4790]: I1011 10:39:08.914071 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.914349 master-0 kubenswrapper[4790]: I1011 10:39:08.914105 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.914349 master-0 kubenswrapper[4790]: I1011 10:39:08.914065 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:08.940359 master-0 kubenswrapper[4790]: E1011 10:39:08.940153 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:08.940359 master-0 kubenswrapper[4790]: E1011 10:39:08.940207 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:08.940359 master-0 kubenswrapper[4790]: E1011 10:39:08.940283 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:39:09.440257098 +0000 UTC m=+25.994717390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:09.161211 master-0 kubenswrapper[4790]: I1011 10:39:09.161148 4790 ???:1] "http: TLS handshake error from 192.168.34.11:38028: no serving certificate available for the kubelet" Oct 11 10:39:09.193309 master-2 kubenswrapper[4776]: I1011 10:39:09.193234 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:09.193309 master-2 kubenswrapper[4776]: I1011 10:39:09.193294 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:09.520328 master-0 kubenswrapper[4790]: I1011 10:39:09.520266 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:09.521081 master-0 kubenswrapper[4790]: E1011 10:39:09.520459 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" 
not registered Oct 11 10:39:09.521081 master-0 kubenswrapper[4790]: E1011 10:39:09.520480 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:09.521081 master-0 kubenswrapper[4790]: E1011 10:39:09.520533 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:10.520516784 +0000 UTC m=+27.074977076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:10.024073 master-2 kubenswrapper[4776]: I1011 10:39:10.023987 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:10.024073 master-2 kubenswrapper[4776]: I1011 10:39:10.024053 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: I1011 10:39:10.266657 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:10.266803 
master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: I1011 10:39:10.266779 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:10.292566 master-0 kubenswrapper[4790]: I1011 10:39:10.292435 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:10.292566 master-0 kubenswrapper[4790]: I1011 10:39:10.292435 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:10.293042 master-0 kubenswrapper[4790]: E1011 10:39:10.292601 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:10.293042 master-0 kubenswrapper[4790]: E1011 10:39:10.292748 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:10.293042 master-0 kubenswrapper[4790]: I1011 10:39:10.292458 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:10.293239 master-0 kubenswrapper[4790]: E1011 10:39:10.293139 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57" Oct 11 10:39:10.529760 master-0 kubenswrapper[4790]: I1011 10:39:10.529588 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:10.529760 master-0 kubenswrapper[4790]: E1011 10:39:10.529782 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:10.529760 master-0 kubenswrapper[4790]: E1011 10:39:10.529802 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:10.530998 master-0 kubenswrapper[4790]: E1011 10:39:10.529847 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:12.529829316 +0000 UTC m=+29.084289608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:12.292157 master-0 kubenswrapper[4790]: I1011 10:39:12.292009 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:12.292157 master-0 kubenswrapper[4790]: I1011 10:39:12.292082 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:12.293365 master-0 kubenswrapper[4790]: I1011 10:39:12.292008 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:12.293365 master-0 kubenswrapper[4790]: E1011 10:39:12.292284 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:12.293365 master-0 kubenswrapper[4790]: E1011 10:39:12.292391 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:12.293365 master-0 kubenswrapper[4790]: E1011 10:39:12.292508 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:12.302319 master-0 kubenswrapper[4790]: I1011 10:39:12.302253 4790 ???:1] "http: TLS handshake error from 192.168.34.12:45952: no serving certificate available for the kubelet"
Oct 11 10:39:12.550024 master-0 kubenswrapper[4790]: I1011 10:39:12.549938 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:12.550414 master-0 kubenswrapper[4790]: E1011 10:39:12.550305 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:12.550414 master-0 kubenswrapper[4790]: E1011 10:39:12.550396 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:12.550604 master-0 kubenswrapper[4790]: E1011 10:39:12.550547 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:16.550487973 +0000 UTC m=+33.104948305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:12.582753 master-1 kubenswrapper[4771]: I1011 10:39:12.582656 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:39:12.583970 master-1 kubenswrapper[4771]: I1011 10:39:12.582805 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:39:14.192662 master-2 kubenswrapper[4776]: I1011 10:39:14.192592 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:39:14.192662 master-2 kubenswrapper[4776]: I1011 10:39:14.192640 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:39:14.292367 master-0 kubenswrapper[4790]: I1011 10:39:14.292258 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:14.293217 master-0 kubenswrapper[4790]: I1011 10:39:14.292437 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:14.296070 master-0 kubenswrapper[4790]: E1011 10:39:14.295649 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:14.298185 master-0 kubenswrapper[4790]: I1011 10:39:14.296605 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:14.298185 master-0 kubenswrapper[4790]: E1011 10:39:14.296741 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:14.298185 master-0 kubenswrapper[4790]: E1011 10:39:14.296888 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:14.976983 master-2 kubenswrapper[4776]: I1011 10:39:14.976907 4776 scope.go:117] "RemoveContainer" containerID="e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3"
Oct 11 10:39:15.000861 master-2 kubenswrapper[4776]: I1011 10:39:15.000801 4776 scope.go:117] "RemoveContainer" containerID="90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf"
Oct 11 10:39:15.016745 master-2 kubenswrapper[4776]: I1011 10:39:15.016703 4776 scope.go:117] "RemoveContainer" containerID="e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0"
Oct 11 10:39:16.291956 master-0 kubenswrapper[4790]: I1011 10:39:16.291825 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:16.291956 master-0 kubenswrapper[4790]: I1011 10:39:16.291905 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:16.291956 master-0 kubenswrapper[4790]: I1011 10:39:16.291934 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:16.292869 master-0 kubenswrapper[4790]: E1011 10:39:16.291978 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:16.292869 master-0 kubenswrapper[4790]: E1011 10:39:16.292076 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:16.292869 master-0 kubenswrapper[4790]: E1011 10:39:16.292171 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:16.492122 master-0 kubenswrapper[4790]: I1011 10:39:16.492031 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:16.492406 master-0 kubenswrapper[4790]: E1011 10:39:16.492213 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:39:16.492406 master-0 kubenswrapper[4790]: E1011 10:39:16.492301 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:32.492284221 +0000 UTC m=+49.046744513 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:39:16.593487 master-0 kubenswrapper[4790]: I1011 10:39:16.593382 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:16.593487 master-0 kubenswrapper[4790]: I1011 10:39:16.593476 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593639 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593641 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593685 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593661 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593743 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593786 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:24.593758607 +0000 UTC m=+41.148218919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:16.593823 master-0 kubenswrapper[4790]: E1011 10:39:16.593812 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:32.593801088 +0000 UTC m=+49.148261400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:39:17.758123 master-0 kubenswrapper[4790]: I1011 10:39:17.757991 4790 csr.go:261] certificate signing request csr-j98l9 is approved, waiting to be issued
Oct 11 10:39:17.769985 master-0 kubenswrapper[4790]: I1011 10:39:17.769906 4790 csr.go:257] certificate signing request csr-j98l9 is issued
Oct 11 10:39:18.303252 master-0 kubenswrapper[4790]: I1011 10:39:18.302685 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:18.303252 master-0 kubenswrapper[4790]: I1011 10:39:18.303256 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:18.303616 master-0 kubenswrapper[4790]: I1011 10:39:18.303357 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:18.303616 master-0 kubenswrapper[4790]: E1011 10:39:18.303527 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:18.303818 master-0 kubenswrapper[4790]: E1011 10:39:18.303728 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:18.304031 master-0 kubenswrapper[4790]: E1011 10:39:18.303951 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:18.772161 master-0 kubenswrapper[4790]: I1011 10:39:18.772041 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 05:34:38.01693959 +0000 UTC
Oct 11 10:39:18.772161 master-0 kubenswrapper[4790]: I1011 10:39:18.772097 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h55m19.244846149s for next certificate rotation
Oct 11 10:39:19.193101 master-2 kubenswrapper[4776]: I1011 10:39:19.193027 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:39:19.193998 master-2 kubenswrapper[4776]: I1011 10:39:19.193105 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:39:19.772806 master-0 kubenswrapper[4790]: I1011 10:39:19.772748 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 07:30:56.064650789 +0000 UTC
Oct 11 10:39:19.772806 master-0 kubenswrapper[4790]: I1011 10:39:19.772791 4790 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h51m36.291862288s for next certificate rotation
Oct 11 10:39:20.024622 master-2 kubenswrapper[4776]: I1011 10:39:20.024553 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:39:20.024622 master-2 kubenswrapper[4776]: I1011 10:39:20.024609 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:39:20.292580 master-0 kubenswrapper[4790]: I1011 10:39:20.292463 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:20.292915 master-0 kubenswrapper[4790]: E1011 10:39:20.292759 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:20.293579 master-0 kubenswrapper[4790]: I1011 10:39:20.293533 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:20.293786 master-0 kubenswrapper[4790]: E1011 10:39:20.293700 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:20.293956 master-0 kubenswrapper[4790]: I1011 10:39:20.293910 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:20.294103 master-0 kubenswrapper[4790]: E1011 10:39:20.294054 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:22.314529 master-0 kubenswrapper[4790]: I1011 10:39:22.314471 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:22.315266 master-0 kubenswrapper[4790]: I1011 10:39:22.314638 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:22.315266 master-0 kubenswrapper[4790]: I1011 10:39:22.314836 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:22.315266 master-0 kubenswrapper[4790]: E1011 10:39:22.314856 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:22.315266 master-0 kubenswrapper[4790]: E1011 10:39:22.314948 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:22.315266 master-0 kubenswrapper[4790]: E1011 10:39:22.315036 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:22.580769 master-1 kubenswrapper[4771]: I1011 10:39:22.580695 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:39:22.581798 master-1 kubenswrapper[4771]: I1011 10:39:22.580806 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:39:24.193393 master-2 kubenswrapper[4776]: I1011 10:39:24.193299 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:39:24.193991 master-2 kubenswrapper[4776]: I1011 10:39:24.193399 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:39:24.293372 master-0 kubenswrapper[4790]: I1011 10:39:24.292660 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:24.293372 master-0 kubenswrapper[4790]: I1011 10:39:24.293314 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:24.294363 master-0 kubenswrapper[4790]: E1011 10:39:24.293520 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:24.294363 master-0 kubenswrapper[4790]: I1011 10:39:24.293878 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:24.294447 master-0 kubenswrapper[4790]: E1011 10:39:24.294402 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:24.294615 master-0 kubenswrapper[4790]: E1011 10:39:24.294579 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57"
Oct 11 10:39:24.461415 master-0 kubenswrapper[4790]: I1011 10:39:24.460311 4790 generic.go:334] "Generic (PLEG): container finished" podID="417d5cfd-0cf3-4d96-b901-fcfe4f742ca5" containerID="8d6c1b823de6d3bbb1ce290ecfdd81097a24f1a6b64ac3d1baa8dbfab78727e3" exitCode=0
Oct 11 10:39:24.461415 master-0 kubenswrapper[4790]: I1011 10:39:24.460972 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerDied","Data":"8d6c1b823de6d3bbb1ce290ecfdd81097a24f1a6b64ac3d1baa8dbfab78727e3"}
Oct 11 10:39:24.463183 master-0 kubenswrapper[4790]: I1011 10:39:24.463167 4790 generic.go:334] "Generic (PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="cb1b25e322d42bbe15cffe1d3217152fa186cf0eca7d74f0dcb251ca7411c341" exitCode=0
Oct 11 10:39:24.463657 master-0 kubenswrapper[4790]: I1011 10:39:24.463316 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"cb1b25e322d42bbe15cffe1d3217152fa186cf0eca7d74f0dcb251ca7411c341"}
Oct 11 10:39:24.465224 master-0 kubenswrapper[4790]: I1011 10:39:24.465199 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r499q" event={"ID":"8c1c727b-713a-4dff-ae8b-ad9b9851adae","Type":"ContainerStarted","Data":"7954596edbe6a6aeecda34dd5fce3bda1928053feaf02e194e6f5c3aedc1471a"}
Oct 11 10:39:24.466987 master-0 kubenswrapper[4790]: I1011 10:39:24.466643 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-85bvx" event={"ID":"bfe05233-94bf-4e16-8c7e-321435ba7f00","Type":"ContainerStarted","Data":"c7885915bf1943aca7a37762abd568286448906e5423ad01a0c6735e8a9ffab6"}
Oct 11 10:39:24.469363 master-0 kubenswrapper[4790]: I1011 10:39:24.468573 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_117b8efe269c98124cf5022ab3c340a5/kube-rbac-proxy-crio/1.log"
Oct 11 10:39:24.482477 master-0 kubenswrapper[4790]: I1011 10:39:24.482426 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"117b8efe269c98124cf5022ab3c340a5","Type":"ContainerStarted","Data":"27b1d02a9c060f9f7b751a24b5b4858a6e202d522b4e5837cb0be6cbd788c231"}
Oct 11 10:39:24.488525 master-0 kubenswrapper[4790]: I1011 10:39:24.488492 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l66k2" event={"ID":"7d9f4c3d-57bd-49f6-94f2-47670b385318","Type":"ContainerStarted","Data":"6b183fb3917d41ad1a8552e7885aa4b5b49499993b0af87d458e1c7ff3f4620c"}
Oct 11 10:39:24.488615 master-0 kubenswrapper[4790]: I1011 10:39:24.488601 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l66k2" event={"ID":"7d9f4c3d-57bd-49f6-94f2-47670b385318","Type":"ContainerStarted","Data":"d9b098d87397c6534971baf6cd9a23d22ce280cdf7aa79fbcfcf04a94fdb3c37"}
Oct 11 10:39:24.490026 master-0 kubenswrapper[4790]: I1011 10:39:24.489975 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5kghv" event={"ID":"00e9cb61-65c4-4e6a-bb0c-2428529c63bf","Type":"ContainerStarted","Data":"ba7a48c8c170f0539b9626753f16469e71298d5b1ce649847a842c1bd11e5612"}
Oct 11 10:39:24.492670 master-0 kubenswrapper[4790]: I1011 10:39:24.492631 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g99cx" event={"ID":"4e2d32e6-3363-4389-ad6a-cfd917e568d2","Type":"ContainerStarted","Data":"d71e6c5741d1252fb04a794dd00c47f3f9910b893f74e8a5143da2763dcedf64"}
Oct 11 10:39:24.495430 master-0 kubenswrapper[4790]: I1011 10:39:24.495389 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kh4ld" event={"ID":"0a2f91f6-f87a-4b69-a47a-91ca827d8386","Type":"ContainerStarted","Data":"b4b76686cfa1337380eb37f9edf14704fb60d927e8f7fdb2c130cf4fe2f40ff0"}
Oct 11 10:39:24.495430 master-0 kubenswrapper[4790]: I1011 10:39:24.495426 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kh4ld" event={"ID":"0a2f91f6-f87a-4b69-a47a-91ca827d8386","Type":"ContainerStarted","Data":"311126a080ca6dd36b989aad9f05139f8f993501c6941fce3a509ded5c7edd89"}
Oct 11 10:39:24.535256 master-0 kubenswrapper[4790]: I1011 10:39:24.535140 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r499q" podStartSLOduration=11.418270635 podStartE2EDuration="34.535116555s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.961636471 +0000 UTC m=+17.516096763" lastFinishedPulling="2025-10-11 10:39:24.078482391 +0000 UTC m=+40.632942683" observedRunningTime="2025-10-11 10:39:24.533910784 +0000 UTC m=+41.088371096" watchObservedRunningTime="2025-10-11 10:39:24.535116555 +0000 UTC m=+41.089576847"
Oct 11 10:39:24.555544 master-0 kubenswrapper[4790]: I1011 10:39:24.555435 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-kh4ld" podStartSLOduration=10.436692239 podStartE2EDuration="33.555410896s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.998629782 +0000 UTC m=+17.553090074" lastFinishedPulling="2025-10-11 10:39:24.117348439 +0000 UTC m=+40.671808731" observedRunningTime="2025-10-11 10:39:24.554946954 +0000 UTC m=+41.109407266" watchObservedRunningTime="2025-10-11 10:39:24.555410896 +0000 UTC m=+41.109871188"
Oct 11 10:39:24.576647 master-0 kubenswrapper[4790]: I1011 10:39:24.576575 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=33.576551178 podStartE2EDuration="33.576551178s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:24.576349933 +0000 UTC m=+41.130810235" watchObservedRunningTime="2025-10-11 10:39:24.576551178 +0000 UTC m=+41.131011470"
Oct 11 10:39:24.651821 master-0 kubenswrapper[4790]: I1011 10:39:24.650803 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g99cx" podStartSLOduration=11.703467748 podStartE2EDuration="34.650780335s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:01.038252109 +0000 UTC m=+17.592712431" lastFinishedPulling="2025-10-11 10:39:23.985564696 +0000 UTC m=+40.540025018" observedRunningTime="2025-10-11 10:39:24.65060592 +0000 UTC m=+41.205066222" watchObservedRunningTime="2025-10-11 10:39:24.650780335 +0000 UTC m=+41.205240707"
Oct 11 10:39:24.687981 master-0 kubenswrapper[4790]: I1011 10:39:24.687863 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") pod \"installer-6-master-0\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " pod="openshift-etcd/installer-6-master-0"
Oct 11 10:39:24.688249 master-0 kubenswrapper[4790]: E1011 10:39:24.688199 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:24.688303 master-0 kubenswrapper[4790]: E1011 10:39:24.688271 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-6-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:24.688434 master-0 kubenswrapper[4790]: E1011 10:39:24.688399 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access podName:50029f27-4009-4075-b148-02f232416a57 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:40.688358459 +0000 UTC m=+57.242818791 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access") pod "installer-6-master-0" (UID: "50029f27-4009-4075-b148-02f232416a57") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:24.715906 master-0 kubenswrapper[4790]: I1011 10:39:24.715675 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-l66k2" podStartSLOduration=33.48788785 podStartE2EDuration="34.71563506s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:01.038249859 +0000 UTC m=+17.592710191" lastFinishedPulling="2025-10-11 10:39:02.265997109 +0000 UTC m=+18.820457401" observedRunningTime="2025-10-11 10:39:24.713486774 +0000 UTC m=+41.267947116" watchObservedRunningTime="2025-10-11 10:39:24.71563506 +0000 UTC m=+41.270095392"
Oct 11 10:39:24.748021 master-0 kubenswrapper[4790]: I1011 10:39:24.747655 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5kghv" podStartSLOduration=11.705252622 podStartE2EDuration="34.74761501s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.902475853 +0000 UTC m=+17.456936185" lastFinishedPulling="2025-10-11 10:39:23.944838241 +0000 UTC m=+40.499298573" observedRunningTime="2025-10-11 10:39:24.746557573 +0000 UTC m=+41.301017915" watchObservedRunningTime="2025-10-11 10:39:24.74761501 +0000 UTC m=+41.302075342"
Oct 11 10:39:24.782914 master-0 kubenswrapper[4790]: I1011 10:39:24.782810 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-85bvx" podStartSLOduration=11.715509817000001 podStartE2EDuration="34.782784664s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.92146286 +0000 UTC m=+17.475923162" lastFinishedPulling="2025-10-11 10:39:23.988737687 +0000 UTC m=+40.543198009" observedRunningTime="2025-10-11 10:39:24.782616119 +0000 UTC m=+41.337076411" watchObservedRunningTime="2025-10-11 10:39:24.782784664 +0000 UTC m=+41.337244976"
Oct 11 10:39:25.284870 master-2 kubenswrapper[4776]: I1011 10:39:25.284786 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"]
Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285041 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" containerID="cri-o://10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910" gracePeriod=30
Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285117 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd" gracePeriod=30
Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285175 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece" gracePeriod=30
Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285180 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" containerID="cri-o://f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962" gracePeriod=30
Oct 11 10:39:25.287515 master-2 kubenswrapper[4776]: I1011 10:39:25.287356 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"]
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287563 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager"
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287576 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager"
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287592 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller"
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287630 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller"
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287647 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer"
Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287655 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9"
containerName="kube-controller-manager-cert-syncer" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287693 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287703 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287829 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287844 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287875 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" Oct 11 10:39:25.377483 master-2 kubenswrapper[4776]: I1011 10:39:25.377395 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.377712 master-2 kubenswrapper[4776]: I1011 10:39:25.377503 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.455924 master-2 kubenswrapper[4776]: I1011 10:39:25.455350 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_4f88b73b0d121e855641834122063be9/kube-controller-manager-cert-syncer/0.log" Oct 11 10:39:25.456479 master-2 kubenswrapper[4776]: I1011 10:39:25.456444 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.462277 master-2 kubenswrapper[4776]: I1011 10:39:25.462228 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:25.478096 master-2 kubenswrapper[4776]: I1011 10:39:25.477960 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"4f88b73b0d121e855641834122063be9\" (UID: \"4f88b73b0d121e855641834122063be9\") " Oct 11 10:39:25.478292 master-2 kubenswrapper[4776]: I1011 10:39:25.478118 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"4f88b73b0d121e855641834122063be9\" (UID: \"4f88b73b0d121e855641834122063be9\") " Oct 11 10:39:25.478292 master-2 kubenswrapper[4776]: I1011 10:39:25.478195 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4f88b73b0d121e855641834122063be9" (UID: "4f88b73b0d121e855641834122063be9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:25.478376 master-2 kubenswrapper[4776]: I1011 10:39:25.478309 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4f88b73b0d121e855641834122063be9" (UID: "4f88b73b0d121e855641834122063be9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:25.478486 master-2 kubenswrapper[4776]: I1011 10:39:25.478438 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478573 master-2 kubenswrapper[4776]: I1011 10:39:25.478525 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478611 master-2 kubenswrapper[4776]: I1011 10:39:25.478572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478639 master-2 
kubenswrapper[4776]: I1011 10:39:25.478607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478832 master-2 kubenswrapper[4776]: I1011 10:39:25.478793 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:25.478832 master-2 kubenswrapper[4776]: I1011 10:39:25.478816 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:25.508121 master-0 kubenswrapper[4790]: I1011 10:39:25.508033 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"b5faae4cb3ce806047cd66c065a54f6c8cc6b120d3d6c1a930b8eb04fb788f18"} Oct 11 10:39:25.508121 master-0 kubenswrapper[4790]: I1011 10:39:25.508117 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"7355ba655a327634066827f4e80f5fe8032e43bdabdd01970c30815cb9d86537"} Oct 11 10:39:25.510144 master-0 kubenswrapper[4790]: I1011 10:39:25.508157 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"ab19d5c0142bc874df4b98658c457d1cdc054f9b46eef50595af10649131145b"} Oct 11 10:39:25.510144 master-0 kubenswrapper[4790]: I1011 10:39:25.508178 4790 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"0953b5b0c5d6edfebd7e041d85e453f7d46a7e288f2dbe6db61c650e49aa3ec0"} Oct 11 10:39:25.510144 master-0 kubenswrapper[4790]: I1011 10:39:25.508197 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"a66d1fdbb33d748a1a06a36bf1348949b781c536249b656184762e926e180206"} Oct 11 10:39:25.510144 master-0 kubenswrapper[4790]: I1011 10:39:25.508215 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"1c19f9bbf921ae3af539ff0dff6e8cc4553b77a82249a509f0b4aa7f76a3e97f"} Oct 11 10:39:25.809760 master-2 kubenswrapper[4776]: I1011 10:39:25.809702 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_4f88b73b0d121e855641834122063be9/kube-controller-manager-cert-syncer/0.log" Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810877 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece" exitCode=0 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810909 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd" exitCode=2 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810921 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962" exitCode=0 Oct 11 
10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810955 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910" exitCode=0 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810958 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.811064 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155" Oct 11 10:39:25.813055 master-2 kubenswrapper[4776]: I1011 10:39:25.813019 4776 generic.go:334] "Generic (PLEG): container finished" podID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerID="b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914" exitCode=0 Oct 11 10:39:25.813055 master-2 kubenswrapper[4776]: I1011 10:39:25.813045 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerDied","Data":"b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914"} Oct 11 10:39:25.816892 master-2 kubenswrapper[4776]: I1011 10:39:25.816835 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:25.840897 master-2 kubenswrapper[4776]: I1011 10:39:25.840814 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:26.066986 
master-2 kubenswrapper[4776]: I1011 10:39:26.066804 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f88b73b0d121e855641834122063be9" path="/var/lib/kubelet/pods/4f88b73b0d121e855641834122063be9/volumes" Oct 11 10:39:26.292841 master-0 kubenswrapper[4790]: I1011 10:39:26.292614 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:26.292841 master-0 kubenswrapper[4790]: I1011 10:39:26.292668 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:26.292841 master-0 kubenswrapper[4790]: I1011 10:39:26.292676 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:26.293216 master-0 kubenswrapper[4790]: E1011 10:39:26.292905 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-6-master-0" podUID="50029f27-4009-4075-b148-02f232416a57" Oct 11 10:39:26.293216 master-0 kubenswrapper[4790]: E1011 10:39:26.292811 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:26.293216 master-0 kubenswrapper[4790]: E1011 10:39:26.292982 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:26.465194 master-2 kubenswrapper[4776]: I1011 10:39:26.465127 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:26.465194 master-2 kubenswrapper[4776]: I1011 10:39:26.465188 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:27.247765 master-2 kubenswrapper[4776]: I1011 10:39:27.247711 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:39:27.405582 master-2 kubenswrapper[4776]: I1011 10:39:27.405523 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406024 master-2 kubenswrapper[4776]: I1011 10:39:27.405758 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:27.406138 master-2 kubenswrapper[4776]: I1011 10:39:27.405997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406268 master-2 kubenswrapper[4776]: I1011 10:39:27.406250 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406560 master-2 kubenswrapper[4776]: I1011 10:39:27.406521 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:27.406904 master-2 kubenswrapper[4776]: I1011 10:39:27.406881 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.407016 master-2 kubenswrapper[4776]: I1011 10:39:27.407002 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.410560 master-2 kubenswrapper[4776]: I1011 10:39:27.410509 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:39:27.508391 master-2 kubenswrapper[4776]: I1011 10:39:27.508241 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.521963 master-0 kubenswrapper[4790]: I1011 10:39:27.521662 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"6a176c78e46a2ac85d5511e5328a902be27c1e6cbfc1e616a7087d989017fbb7"} Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.827915 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerDied","Data":"966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b"} Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.827985 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b" Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.828011 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:39:28.265178 master-0 kubenswrapper[4790]: I1011 10:39:28.265049 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-6-master-0"] Oct 11 10:39:28.265495 master-0 kubenswrapper[4790]: I1011 10:39:28.265248 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:28.275214 master-0 kubenswrapper[4790]: I1011 10:39:28.275139 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:28.292453 master-0 kubenswrapper[4790]: I1011 10:39:28.292317 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:28.292613 master-0 kubenswrapper[4790]: I1011 10:39:28.292488 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:28.292838 master-0 kubenswrapper[4790]: E1011 10:39:28.292754 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:28.292953 master-0 kubenswrapper[4790]: E1011 10:39:28.292859 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:28.425481 master-0 kubenswrapper[4790]: I1011 10:39:28.425394 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir\") pod \"50029f27-4009-4075-b148-02f232416a57\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " Oct 11 10:39:28.425481 master-0 kubenswrapper[4790]: I1011 10:39:28.425468 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock\") pod \"50029f27-4009-4075-b148-02f232416a57\" (UID: \"50029f27-4009-4075-b148-02f232416a57\") " Oct 11 10:39:28.425801 master-0 kubenswrapper[4790]: I1011 10:39:28.425537 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50029f27-4009-4075-b148-02f232416a57" (UID: "50029f27-4009-4075-b148-02f232416a57"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:28.425801 master-0 kubenswrapper[4790]: I1011 10:39:28.425597 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock" (OuterVolumeSpecName: "var-lock") pod "50029f27-4009-4075-b148-02f232416a57" (UID: "50029f27-4009-4075-b148-02f232416a57"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:28.425801 master-0 kubenswrapper[4790]: I1011 10:39:28.425661 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:28.425801 master-0 kubenswrapper[4790]: I1011 10:39:28.425674 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50029f27-4009-4075-b148-02f232416a57-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:28.524800 master-0 kubenswrapper[4790]: I1011 10:39:28.524569 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-6-master-0" Oct 11 10:39:28.577082 master-0 kubenswrapper[4790]: I1011 10:39:28.576990 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-6-master-0"] Oct 11 10:39:28.587449 master-0 kubenswrapper[4790]: I1011 10:39:28.587356 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-6-master-0"] Oct 11 10:39:28.728472 master-0 kubenswrapper[4790]: I1011 10:39:28.728426 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50029f27-4009-4075-b148-02f232416a57-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:29.193367 master-2 kubenswrapper[4776]: I1011 10:39:29.193325 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:29.194099 master-2 kubenswrapper[4776]: I1011 10:39:29.194017 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:39:30.023970 master-2 kubenswrapper[4776]: I1011 10:39:30.023869 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:39:30.024318 master-2 kubenswrapper[4776]: I1011 10:39:30.023969 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: I1011 10:39:30.265779 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: I1011 10:39:30.265854 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:39:30.292829 master-0 kubenswrapper[4790]: I1011 10:39:30.292276 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:30.292829 master-0 kubenswrapper[4790]: I1011 10:39:30.292334 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:30.293861 master-0 kubenswrapper[4790]: E1011 10:39:30.292846 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:30.293861 master-0 kubenswrapper[4790]: E1011 10:39:30.292963 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:30.535728 master-0 kubenswrapper[4790]: I1011 10:39:30.534966 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" event={"ID":"417d5cfd-0cf3-4d96-b901-fcfe4f742ca5","Type":"ContainerStarted","Data":"2f0526028039267cde2979d801362373c8768640c849ad28a6187f6ce5f10f04"}
Oct 11 10:39:30.535728 master-0 kubenswrapper[4790]: I1011 10:39:30.535236 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:30.535728 master-0 kubenswrapper[4790]: I1011 10:39:30.535556 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:30.992249 master-0 kubenswrapper[4790]: I1011 10:39:30.992152 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6"
Oct 11 10:39:31.464585 master-2 kubenswrapper[4776]: I1011 10:39:31.464479 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:39:31.464585 master-2 kubenswrapper[4776]: I1011 10:39:31.464577 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:39:32.268803 master-0 kubenswrapper[4790]: I1011 10:39:32.267961 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" podStartSLOduration=18.158643306 podStartE2EDuration="41.267925783s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="2025-10-11 10:39:01.011493971 +0000 UTC m=+17.565954263" lastFinishedPulling="2025-10-11 10:39:24.120776438 +0000 UTC m=+40.675236740" observedRunningTime="2025-10-11 10:39:30.58899739 +0000 UTC m=+47.143457722" watchObservedRunningTime="2025-10-11 10:39:32.267925783 +0000 UTC m=+48.822386115"
Oct 11 10:39:32.268803 master-0 kubenswrapper[4790]: I1011 10:39:32.268640 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-7-master-0"]
Oct 11 10:39:32.269630 master-0 kubenswrapper[4790]: I1011 10:39:32.269147 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.269630 master-0 kubenswrapper[4790]: E1011 10:39:32.269234 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-7-master-0" podUID="7153671c-589d-434b-88b4-36e3f0e3a585"
Oct 11 10:39:32.292124 master-0 kubenswrapper[4790]: I1011 10:39:32.292056 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:32.292418 master-0 kubenswrapper[4790]: E1011 10:39:32.292349 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:32.293220 master-0 kubenswrapper[4790]: I1011 10:39:32.293164 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:32.295646 master-0 kubenswrapper[4790]: E1011 10:39:32.295365 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:32.362568 master-0 kubenswrapper[4790]: I1011 10:39:32.362462 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.362568 master-0 kubenswrapper[4790]: I1011 10:39:32.362546 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.362860 master-0 kubenswrapper[4790]: I1011 10:39:32.362620 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.463290 master-0 kubenswrapper[4790]: I1011 10:39:32.463120 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.463290 master-0 kubenswrapper[4790]: I1011 10:39:32.463203 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.463290 master-0 kubenswrapper[4790]: I1011 10:39:32.463251 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.463290 master-0 kubenswrapper[4790]: I1011 10:39:32.463321 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.463836 master-0 kubenswrapper[4790]: I1011 10:39:32.463798 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:32.496970 master-0 kubenswrapper[4790]: E1011 10:39:32.496902 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:32.496970 master-0 kubenswrapper[4790]: E1011 10:39:32.496947 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-7-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:32.497273 master-0 kubenswrapper[4790]: E1011 10:39:32.497022 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access podName:7153671c-589d-434b-88b4-36e3f0e3a585 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:32.996997935 +0000 UTC m=+49.551458227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access") pod "installer-7-master-0" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:32.581540 master-1 kubenswrapper[4771]: I1011 10:39:32.581421 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:39:32.581540 master-1 kubenswrapper[4771]: I1011 10:39:32.581521 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:39:32.587966 master-0 kubenswrapper[4790]: I1011 10:39:32.587877 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:32.588641 master-0 kubenswrapper[4790]: E1011 10:39:32.588581 4790 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:39:32.588913 master-0 kubenswrapper[4790]: E1011 10:39:32.588893 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs podName:a5b695d5-a88c-4ff9-bc59-d13f61f237f6 nodeName:}" failed. No retries permitted until 2025-10-11 10:40:04.588853252 +0000 UTC m=+81.143313564 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs") pod "network-metrics-daemon-zcc4t" (UID: "a5b695d5-a88c-4ff9-bc59-d13f61f237f6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 11 10:39:32.593465 master-0 kubenswrapper[4790]: I1011 10:39:32.593401 4790 generic.go:334] "Generic (PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="60ae2bc66c768fd133fb33de0b797bb3ba3a737ec0842ecc5b07d80a61c2f2b0" exitCode=0
Oct 11 10:39:32.593619 master-0 kubenswrapper[4790]: I1011 10:39:32.593558 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"60ae2bc66c768fd133fb33de0b797bb3ba3a737ec0842ecc5b07d80a61c2f2b0"}
Oct 11 10:39:32.689438 master-0 kubenswrapper[4790]: I1011 10:39:32.689334 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:32.689751 master-0 kubenswrapper[4790]: E1011 10:39:32.689541 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 11 10:39:32.689751 master-0 kubenswrapper[4790]: E1011 10:39:32.689560 4790 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 11 10:39:32.689751 master-0 kubenswrapper[4790]: E1011 10:39:32.689571 4790 projected.go:194] Error preparing data for projected volume kube-api-access-8xlkt for pod openshift-network-diagnostics/network-check-target-bn2sv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:39:32.689980 master-0 kubenswrapper[4790]: E1011 10:39:32.689804 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt podName:0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7 nodeName:}" failed. No retries permitted until 2025-10-11 10:40:04.689776093 +0000 UTC m=+81.244236405 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8xlkt" (UniqueName: "kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt") pod "network-check-target-bn2sv" (UID: "0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 11 10:39:33.092037 master-0 kubenswrapper[4790]: I1011 10:39:33.091944 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:33.092417 master-0 kubenswrapper[4790]: E1011 10:39:33.092132 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:33.092417 master-0 kubenswrapper[4790]: E1011 10:39:33.092152 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-7-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:33.092417 master-0 kubenswrapper[4790]: E1011 10:39:33.092207 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access podName:7153671c-589d-434b-88b4-36e3f0e3a585 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:34.092188485 +0000 UTC m=+50.646648777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access") pod "installer-7-master-0" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:33.292123 master-0 kubenswrapper[4790]: I1011 10:39:33.291981 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:33.292925 master-0 kubenswrapper[4790]: E1011 10:39:33.292166 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-7-master-0" podUID="7153671c-589d-434b-88b4-36e3f0e3a585"
Oct 11 10:39:34.100224 master-0 kubenswrapper[4790]: I1011 10:39:34.099976 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:34.100489 master-0 kubenswrapper[4790]: E1011 10:39:34.100249 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:34.100489 master-0 kubenswrapper[4790]: E1011 10:39:34.100275 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-7-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:34.100489 master-0 kubenswrapper[4790]: E1011 10:39:34.100337 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access podName:7153671c-589d-434b-88b4-36e3f0e3a585 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:36.100316026 +0000 UTC m=+52.654776308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access") pod "installer-7-master-0" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585") : object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:34.193561 master-2 kubenswrapper[4776]: I1011 10:39:34.193470 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body=
Oct 11 10:39:34.194091 master-2 kubenswrapper[4776]: I1011 10:39:34.193614 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused"
Oct 11 10:39:34.292597 master-0 kubenswrapper[4790]: I1011 10:39:34.292452 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t"
Oct 11 10:39:34.292597 master-0 kubenswrapper[4790]: I1011 10:39:34.292540 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv"
Oct 11 10:39:34.293890 master-0 kubenswrapper[4790]: E1011 10:39:34.293824 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6"
Oct 11 10:39:34.293981 master-0 kubenswrapper[4790]: E1011 10:39:34.293942 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7"
Oct 11 10:39:34.601682 master-0 kubenswrapper[4790]: I1011 10:39:34.601564 4790 generic.go:334] "Generic (PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="577dbabbd1dcda298e8312f0abf41bca5da23d3e379e323dc62243a5ec9eb24c" exitCode=0
Oct 11 10:39:34.601682 master-0 kubenswrapper[4790]: I1011 10:39:34.601640 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"577dbabbd1dcda298e8312f0abf41bca5da23d3e379e323dc62243a5ec9eb24c"}
Oct 11 10:39:34.877425 master-2 kubenswrapper[4776]: I1011 10:39:34.877369 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f" exitCode=0
Oct 11 10:39:34.877425 master-2 kubenswrapper[4776]: I1011 10:39:34.877413 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f"}
Oct 11 10:39:35.185817 master-2 kubenswrapper[4776]: I1011 10:39:35.185769 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:39:35.292664 master-0 kubenswrapper[4790]: I1011 10:39:35.292507 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:35.293435 master-0 kubenswrapper[4790]: E1011 10:39:35.292809 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-7-master-0" podUID="7153671c-589d-434b-88b4-36e3f0e3a585"
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316179 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316335 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316361 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316402 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316450 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316478 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316504 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316539 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316568 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316591 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") "
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.316966 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.317517 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.318491 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit" (OuterVolumeSpecName: "audit") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319307 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config" (OuterVolumeSpecName: "config") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319905 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320130 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn" (OuterVolumeSpecName: "kube-api-access-5q8hn") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "kube-api-access-5q8hn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320413 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320746 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:39:35.325292 master-2 kubenswrapper[4776]: I1011 10:39:35.325203 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:39:35.335178 master-2 kubenswrapper[4776]: I1011 10:39:35.329891 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417608 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417651 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417665 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417694 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417703 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417711 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417721 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417729 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417737 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417745 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417754 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 10:39:35.885505 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be"}
Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 10:39:35.885555 4776 scope.go:117] "RemoveContainer" containerID="33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5"
Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 10:39:35.885640 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf"
Oct 11 10:39:35.904062 master-2 kubenswrapper[4776]: I1011 10:39:35.904031 4776 scope.go:117] "RemoveContainer" containerID="a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f"
Oct 11 10:39:35.926323 master-2 kubenswrapper[4776]: I1011 10:39:35.926298 4776 scope.go:117] "RemoveContainer" containerID="227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c"
Oct 11 10:39:35.968507 master-2 kubenswrapper[4776]: I1011 10:39:35.968433 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"]
Oct 11 10:39:36.011544 master-2 kubenswrapper[4776]: I1011 10:39:36.011484 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"]
Oct 11 10:39:36.068554 master-2 kubenswrapper[4776]: I1011 10:39:36.068474 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" path="/var/lib/kubelet/pods/8c500140-fe5c-4fa2-914b-bb1e0c5758ab/volumes"
Oct 11 10:39:36.117448 master-0 kubenswrapper[4790]: I1011 10:39:36.117308 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0"
Oct 11 10:39:36.117782 master-0 kubenswrapper[4790]: E1011 10:39:36.117548 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:36.117782 master-0 kubenswrapper[4790]: E1011 10:39:36.117596 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-7-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered
Oct 11 10:39:36.117782 master-0 
kubenswrapper[4790]: E1011 10:39:36.117688 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access podName:7153671c-589d-434b-88b4-36e3f0e3a585 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:40.117658368 +0000 UTC m=+56.672118850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access") pod "installer-7-master-0" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:36.292038 master-0 kubenswrapper[4790]: I1011 10:39:36.291922 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:36.292038 master-0 kubenswrapper[4790]: I1011 10:39:36.291994 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:36.292502 master-0 kubenswrapper[4790]: E1011 10:39:36.292129 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:36.292502 master-0 kubenswrapper[4790]: E1011 10:39:36.292296 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:36.465289 master-2 kubenswrapper[4776]: I1011 10:39:36.465213 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:36.465960 master-2 kubenswrapper[4776]: I1011 10:39:36.465313 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:36.465960 master-2 kubenswrapper[4776]: I1011 10:39:36.465447 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:39:36.466548 master-2 kubenswrapper[4776]: I1011 10:39:36.466458 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:36.466727 master-2 kubenswrapper[4776]: I1011 10:39:36.466604 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:36.613600 master-0 kubenswrapper[4790]: I1011 10:39:36.613482 4790 generic.go:334] "Generic 
(PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="913e1a8c1cb851c82ef33685641673d1d715c6fa27efd620af4c66cb97b43d12" exitCode=0 Oct 11 10:39:36.613600 master-0 kubenswrapper[4790]: I1011 10:39:36.613580 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"913e1a8c1cb851c82ef33685641673d1d715c6fa27efd620af4c66cb97b43d12"} Oct 11 10:39:37.292074 master-0 kubenswrapper[4790]: I1011 10:39:37.291972 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:37.292391 master-0 kubenswrapper[4790]: E1011 10:39:37.292215 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-7-master-0" podUID="7153671c-589d-434b-88b4-36e3f0e3a585" Oct 11 10:39:38.062643 master-2 kubenswrapper[4776]: I1011 10:39:38.062511 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.081390 master-2 kubenswrapper[4776]: I1011 10:39:38.081325 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="e7a77e3c-dbf6-4cb7-b694-b6bfe84a86da" Oct 11 10:39:38.081390 master-2 kubenswrapper[4776]: I1011 10:39:38.081369 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="e7a77e3c-dbf6-4cb7-b694-b6bfe84a86da" Oct 11 10:39:38.161261 master-2 kubenswrapper[4776]: I1011 10:39:38.161189 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.240438 master-2 kubenswrapper[4776]: I1011 10:39:38.240344 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.257050 master-2 kubenswrapper[4776]: I1011 10:39:38.256930 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.291884 master-0 kubenswrapper[4790]: I1011 10:39:38.291454 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:38.292469 master-0 kubenswrapper[4790]: I1011 10:39:38.291454 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:38.292469 master-0 kubenswrapper[4790]: E1011 10:39:38.291954 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:38.292469 master-0 kubenswrapper[4790]: E1011 10:39:38.292119 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:38.362110 master-2 kubenswrapper[4776]: I1011 10:39:38.361772 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.386265 master-2 kubenswrapper[4776]: I1011 10:39:38.386210 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.416375 master-2 kubenswrapper[4776]: W1011 10:39:38.416320 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd82f838b5636582534da82a3996ea6.slice/crio-3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347 WatchSource:0}: Error finding container 3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347: Status 404 returned error can't find the container with id 3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347 Oct 11 10:39:38.911197 master-2 kubenswrapper[4776]: I1011 10:39:38.911156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"} Oct 11 10:39:38.911197 master-2 kubenswrapper[4776]: I1011 10:39:38.911193 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347"} Oct 11 10:39:39.291453 master-0 kubenswrapper[4790]: I1011 10:39:39.291388 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:39.291883 master-0 kubenswrapper[4790]: E1011 10:39:39.291526 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-7-master-0" podUID="7153671c-589d-434b-88b4-36e3f0e3a585" Oct 11 10:39:39.892213 master-2 kubenswrapper[4776]: I1011 10:39:39.892145 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892453 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="fix-audit-permissions" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892473 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="fix-audit-permissions" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892494 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892506 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 
kubenswrapper[4776]: E1011 10:39:39.892529 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892543 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892567 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892583 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892785 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892814 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892833 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.894389 master-2 kubenswrapper[4776]: I1011 10:39:39.894339 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:39.905325 master-2 kubenswrapper[4776]: I1011 10:39:39.905266 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:39:39.905807 master-2 kubenswrapper[4776]: I1011 10:39:39.905764 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:39:39.906332 master-2 kubenswrapper[4776]: I1011 10:39:39.906287 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9" Oct 11 10:39:39.906601 master-2 kubenswrapper[4776]: I1011 10:39:39.906561 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:39:39.907123 master-2 kubenswrapper[4776]: I1011 10:39:39.907057 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:39:39.907355 master-2 kubenswrapper[4776]: I1011 10:39:39.907274 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:39:39.907553 master-2 kubenswrapper[4776]: I1011 10:39:39.907469 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:39:39.907759 master-2 kubenswrapper[4776]: I1011 10:39:39.907731 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:39:39.907936 master-2 kubenswrapper[4776]: I1011 10:39:39.907897 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:39:39.908216 master-2 kubenswrapper[4776]: I1011 10:39:39.908173 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:39:39.916356 master-2 kubenswrapper[4776]: I1011 
10:39:39.916308 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:39:40.023749 master-2 kubenswrapper[4776]: I1011 10:39:40.023650 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:40.023950 master-2 kubenswrapper[4776]: I1011 10:39:40.023753 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089465 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089521 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: 
\"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089591 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089622 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089649 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089695 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089715 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.169369 master-0 kubenswrapper[4790]: I1011 10:39:40.169292 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") pod \"installer-7-master-0\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:40.170126 master-0 kubenswrapper[4790]: E1011 10:39:40.169570 4790 
projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:40.170126 master-0 kubenswrapper[4790]: E1011 10:39:40.169616 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-7-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:40.170126 master-0 kubenswrapper[4790]: E1011 10:39:40.169696 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access podName:7153671c-589d-434b-88b4-36e3f0e3a585 nodeName:}" failed. No retries permitted until 2025-10-11 10:39:48.169673815 +0000 UTC m=+64.724134107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access") pod "installer-7-master-0" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:40.190669 master-2 kubenswrapper[4776]: I1011 10:39:40.190612 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:40.191182 master-2 kubenswrapper[4776]: I1011 10:39:40.191148 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191182 master-2 kubenswrapper[4776]: I1011 10:39:40.191179 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " 
pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191203 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191220 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191272 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: 
\"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191313 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191333 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191362 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191919 master-2 kubenswrapper[4776]: I1011 10:39:40.191545 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod 
\"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191919 master-2 kubenswrapper[4776]: I1011 10:39:40.191659 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.192059 master-2 kubenswrapper[4776]: I1011 10:39:40.191992 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.193416 master-2 kubenswrapper[4776]: I1011 10:39:40.192363 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197269 master-2 kubenswrapper[4776]: I1011 10:39:40.197232 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197269 master-2 kubenswrapper[4776]: I1011 10:39:40.197266 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: 
\"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197399 master-2 kubenswrapper[4776]: I1011 10:39:40.197374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197438 master-2 kubenswrapper[4776]: I1011 10:39:40.197400 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197472 master-2 kubenswrapper[4776]: I1011 10:39:40.197452 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197540 master-2 kubenswrapper[4776]: I1011 10:39:40.197459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.292366 master-0 kubenswrapper[4790]: I1011 10:39:40.292276 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:40.292366 master-0 kubenswrapper[4790]: I1011 10:39:40.292351 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:40.292646 master-0 kubenswrapper[4790]: E1011 10:39:40.292452 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:40.292646 master-0 kubenswrapper[4790]: E1011 10:39:40.292567 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:40.486866 master-2 kubenswrapper[4776]: I1011 10:39:40.486787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.514733 master-2 kubenswrapper[4776]: I1011 10:39:40.514688 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.670970 master-0 kubenswrapper[4790]: I1011 10:39:40.670897 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-7-master-0"] Oct 11 10:39:40.671221 master-0 kubenswrapper[4790]: I1011 10:39:40.671035 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:40.677875 master-0 kubenswrapper[4790]: I1011 10:39:40.677853 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:40.844547 master-2 kubenswrapper[4776]: E1011 10:39:40.844468 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:39:40.844547 master-2 kubenswrapper[4776]: E1011 10:39:40.844552 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:41:42.844536232 +0000 UTC m=+937.628962941 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:39:40.874959 master-0 kubenswrapper[4790]: I1011 10:39:40.874855 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock\") pod \"7153671c-589d-434b-88b4-36e3f0e3a585\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " Oct 11 10:39:40.874959 master-0 kubenswrapper[4790]: I1011 10:39:40.874914 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir\") pod \"7153671c-589d-434b-88b4-36e3f0e3a585\" (UID: \"7153671c-589d-434b-88b4-36e3f0e3a585\") " Oct 11 10:39:40.875313 master-0 kubenswrapper[4790]: I1011 10:39:40.874999 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock" (OuterVolumeSpecName: "var-lock") pod "7153671c-589d-434b-88b4-36e3f0e3a585" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:40.875313 master-0 kubenswrapper[4790]: I1011 10:39:40.875110 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7153671c-589d-434b-88b4-36e3f0e3a585" (UID: "7153671c-589d-434b-88b4-36e3f0e3a585"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:40.927186 master-2 kubenswrapper[4776]: I1011 10:39:40.927124 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} Oct 11 10:39:40.927186 master-2 kubenswrapper[4776]: I1011 10:39:40.927172 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} Oct 11 10:39:40.927755 master-2 kubenswrapper[4776]: I1011 10:39:40.927204 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} Oct 11 10:39:40.976030 master-0 kubenswrapper[4790]: I1011 10:39:40.975857 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:40.976030 master-0 kubenswrapper[4790]: I1011 10:39:40.975925 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7153671c-589d-434b-88b4-36e3f0e3a585-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:41.022045 master-0 kubenswrapper[4790]: I1011 10:39:41.021986 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:41.025607 master-0 kubenswrapper[4790]: I1011 10:39:41.025571 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:41.039877 master-0 kubenswrapper[4790]: I1011 10:39:41.039786 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" podUID="417d5cfd-0cf3-4d96-b901-fcfe4f742ca5" containerName="ovnkube-controller" probeResult="failure" output="" Oct 11 10:39:41.288032 master-2 kubenswrapper[4776]: I1011 10:39:41.287940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:41.416597 master-2 kubenswrapper[4776]: W1011 10:39:41.416533 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4125c617_d1f6_4f29_bae1_1165604b9cbd.slice/crio-8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831 WatchSource:0}: Error finding container 8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831: Status 404 returned error can't find the container with id 8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831 Oct 11 10:39:41.464507 master-2 kubenswrapper[4776]: I1011 10:39:41.464450 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:41.464661 master-2 kubenswrapper[4776]: I1011 10:39:41.464504 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:41.627833 master-0 kubenswrapper[4790]: I1011 10:39:41.627765 4790 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-etcd/installer-7-master-0" Oct 11 10:39:41.646568 master-0 kubenswrapper[4790]: I1011 10:39:41.646511 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" podUID="417d5cfd-0cf3-4d96-b901-fcfe4f742ca5" containerName="ovnkube-controller" probeResult="failure" output="" Oct 11 10:39:41.937003 master-2 kubenswrapper[4776]: I1011 10:39:41.936940 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885" exitCode=0 Oct 11 10:39:41.937725 master-2 kubenswrapper[4776]: I1011 10:39:41.937057 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885"} Oct 11 10:39:41.937725 master-2 kubenswrapper[4776]: I1011 10:39:41.937144 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831"} Oct 11 10:39:42.134463 master-0 kubenswrapper[4790]: I1011 10:39:42.134347 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-7-master-0"] Oct 11 10:39:42.212334 master-0 kubenswrapper[4790]: I1011 10:39:42.212288 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-7-master-0"] Oct 11 10:39:42.287405 master-0 kubenswrapper[4790]: I1011 10:39:42.287334 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7153671c-589d-434b-88b4-36e3f0e3a585-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:39:42.292494 master-0 kubenswrapper[4790]: I1011 
10:39:42.292450 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:42.292575 master-0 kubenswrapper[4790]: I1011 10:39:42.292545 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:42.292761 master-0 kubenswrapper[4790]: E1011 10:39:42.292675 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:42.292902 master-0 kubenswrapper[4790]: E1011 10:39:42.292848 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:42.480877 master-1 kubenswrapper[4771]: I1011 10:39:42.480753 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:39:42.482178 master-1 kubenswrapper[4771]: I1011 10:39:42.481180 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" containerID="cri-o://be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a" gracePeriod=135 Oct 11 10:39:42.482178 master-1 kubenswrapper[4771]: I1011 10:39:42.481265 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" containerID="cri-o://50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07" gracePeriod=135 Oct 11 10:39:42.482178 master-1 kubenswrapper[4771]: I1011 10:39:42.481384 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109" gracePeriod=135 Oct 11 10:39:42.482178 master-1 kubenswrapper[4771]: I1011 10:39:42.481464 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4" gracePeriod=135 Oct 11 10:39:42.482178 master-1 kubenswrapper[4771]: I1011 10:39:42.481535 4771 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd" gracePeriod=135 Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483282 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483732 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483765 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483793 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="setup" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483813 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="setup" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483846 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483864 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483892 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 
10:39:42.483911 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483937 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483955 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: E1011 10:39:42.483978 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 11 10:39:42.483985 master-1 kubenswrapper[4771]: I1011 10:39:42.483992 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 11 10:39:42.485029 master-1 kubenswrapper[4771]: I1011 10:39:42.484189 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:39:42.485029 master-1 kubenswrapper[4771]: I1011 10:39:42.484213 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 11 10:39:42.485029 master-1 kubenswrapper[4771]: I1011 10:39:42.484230 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 11 10:39:42.485029 master-1 kubenswrapper[4771]: I1011 10:39:42.484254 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 11 10:39:42.485029 master-1 
kubenswrapper[4771]: I1011 10:39:42.484268 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: I1011 10:39:42.514959 4771 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:42.515025 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:42.517506 master-1 kubenswrapper[4771]: I1011 10:39:42.515033 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:42.581170 master-1 kubenswrapper[4771]: I1011 10:39:42.581064 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:39:42.581407 master-1 kubenswrapper[4771]: I1011 10:39:42.581182 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:39:42.591940 master-1 kubenswrapper[4771]: I1011 10:39:42.591865 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.592101 master-1 kubenswrapper[4771]: I1011 10:39:42.591985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.592101 master-1 kubenswrapper[4771]: I1011 10:39:42.592058 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694068 master-1 kubenswrapper[4771]: I1011 10:39:42.693595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694417 master-1 kubenswrapper[4771]: I1011 10:39:42.693982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694417 master-1 kubenswrapper[4771]: I1011 10:39:42.694173 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694417 master-1 kubenswrapper[4771]: I1011 10:39:42.694091 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694631 master-1 kubenswrapper[4771]: I1011 10:39:42.694415 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.694715 master-1 kubenswrapper[4771]: I1011 10:39:42.694617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:39:42.947640 master-2 kubenswrapper[4776]: I1011 10:39:42.947591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9"} Oct 11 10:39:42.948476 master-2 kubenswrapper[4776]: I1011 10:39:42.948454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" 
event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776"} Oct 11 10:39:43.083896 master-2 kubenswrapper[4776]: I1011 10:39:43.083819 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=5.083801432 podStartE2EDuration="5.083801432s" podCreationTimestamp="2025-10-11 10:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:42.469099786 +0000 UTC m=+817.253526505" watchObservedRunningTime="2025-10-11 10:39:43.083801432 +0000 UTC m=+817.868228141" Oct 11 10:39:43.087631 master-2 kubenswrapper[4776]: I1011 10:39:43.086245 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podStartSLOduration=52.086237109 podStartE2EDuration="52.086237109s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:43.083264638 +0000 UTC m=+817.867691357" watchObservedRunningTime="2025-10-11 10:39:43.086237109 +0000 UTC m=+817.870663818" Oct 11 10:39:43.225377 master-1 kubenswrapper[4771]: I1011 10:39:43.225289 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log" Oct 11 10:39:43.226057 master-1 kubenswrapper[4771]: I1011 10:39:43.226019 4771 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07" exitCode=0 Oct 11 10:39:43.226057 master-1 kubenswrapper[4771]: I1011 10:39:43.226054 4771 generic.go:334] "Generic (PLEG): container 
finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4" exitCode=0 Oct 11 10:39:43.226147 master-1 kubenswrapper[4771]: I1011 10:39:43.226065 4771 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109" exitCode=0 Oct 11 10:39:43.226147 master-1 kubenswrapper[4771]: I1011 10:39:43.226077 4771 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd" exitCode=2 Oct 11 10:39:43.227632 master-1 kubenswrapper[4771]: I1011 10:39:43.227563 4771 generic.go:334] "Generic (PLEG): container finished" podID="04d0b40e-b6ae-4466-a0af-fcb5ce630a97" containerID="3d3a7650ee6f21f1edc22785fe9fc463251f973399b34912c74a0d533d0b5e22" exitCode=0 Oct 11 10:39:43.227710 master-1 kubenswrapper[4771]: I1011 10:39:43.227667 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"04d0b40e-b6ae-4466-a0af-fcb5ce630a97","Type":"ContainerDied","Data":"3d3a7650ee6f21f1edc22785fe9fc463251f973399b34912c74a0d533d0b5e22"} Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: I1011 10:39:43.241722 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers 
ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 
10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:43.241805 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:43.243171 master-1 kubenswrapper[4771]: I1011 10:39:43.241820 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:43.384072 master-1 kubenswrapper[4771]: I1011 10:39:43.383999 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022" Oct 11 10:39:43.485838 master-1 kubenswrapper[4771]: I1011 10:39:43.485648 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kntdb" podStartSLOduration=52.448594775 podStartE2EDuration="54.485623726s" podCreationTimestamp="2025-10-11 10:38:49 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.952758982 +0000 UTC m=+772.926985453" lastFinishedPulling="2025-10-11 10:39:02.989787953 +0000 UTC m=+774.964014404" observedRunningTime="2025-10-11 10:39:04.188439094 +0000 UTC m=+776.162665595" watchObservedRunningTime="2025-10-11 10:39:43.485623726 +0000 UTC m=+815.459850207" Oct 11 10:39:44.293295 master-0 kubenswrapper[4790]: I1011 10:39:44.293230 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:44.294431 master-0 kubenswrapper[4790]: I1011 10:39:44.293258 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:44.294990 master-0 kubenswrapper[4790]: E1011 10:39:44.294938 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:44.295142 master-0 kubenswrapper[4790]: E1011 10:39:44.295086 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:44.304349 master-0 kubenswrapper[4790]: I1011 10:39:44.304302 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-8-master-0"] Oct 11 10:39:44.304664 master-0 kubenswrapper[4790]: I1011 10:39:44.304631 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.304766 master-0 kubenswrapper[4790]: E1011 10:39:44.304731 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-8-master-0" podUID="a3934355-bb61-4316-b164-05294e12906a" Oct 11 10:39:44.405745 master-0 kubenswrapper[4790]: I1011 10:39:44.405592 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.405745 master-0 kubenswrapper[4790]: I1011 10:39:44.405648 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.405745 master-0 kubenswrapper[4790]: I1011 10:39:44.405775 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.506213 master-0 kubenswrapper[4790]: I1011 10:39:44.506081 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.506213 master-0 kubenswrapper[4790]: I1011 10:39:44.506214 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: 
\"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.506213 master-0 kubenswrapper[4790]: I1011 10:39:44.506221 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.506676 master-0 kubenswrapper[4790]: I1011 10:39:44.506640 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.506776 master-0 kubenswrapper[4790]: I1011 10:39:44.506701 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:44.544845 master-1 kubenswrapper[4771]: I1011 10:39:44.544770 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:39:44.576617 master-0 kubenswrapper[4790]: E1011 10:39:44.576543 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:44.576617 master-0 kubenswrapper[4790]: E1011 10:39:44.576585 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-8-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:44.576617 master-0 kubenswrapper[4790]: E1011 10:39:44.576644 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access podName:a3934355-bb61-4316-b164-05294e12906a nodeName:}" failed. No retries permitted until 2025-10-11 10:39:45.076620476 +0000 UTC m=+61.631080768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access") pod "installer-8-master-0" (UID: "a3934355-bb61-4316-b164-05294e12906a") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:44.722198 master-1 kubenswrapper[4771]: I1011 10:39:44.722108 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock\") pod \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " Oct 11 10:39:44.722198 master-1 kubenswrapper[4771]: I1011 10:39:44.722203 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access\") pod \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " Oct 11 10:39:44.722612 master-1 
kubenswrapper[4771]: I1011 10:39:44.722279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir\") pod \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\" (UID: \"04d0b40e-b6ae-4466-a0af-fcb5ce630a97\") " Oct 11 10:39:44.723194 master-1 kubenswrapper[4771]: I1011 10:39:44.723019 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04d0b40e-b6ae-4466-a0af-fcb5ce630a97" (UID: "04d0b40e-b6ae-4466-a0af-fcb5ce630a97"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:44.723194 master-1 kubenswrapper[4771]: I1011 10:39:44.723085 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock" (OuterVolumeSpecName: "var-lock") pod "04d0b40e-b6ae-4466-a0af-fcb5ce630a97" (UID: "04d0b40e-b6ae-4466-a0af-fcb5ce630a97"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:44.728465 master-1 kubenswrapper[4771]: I1011 10:39:44.728407 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04d0b40e-b6ae-4466-a0af-fcb5ce630a97" (UID: "04d0b40e-b6ae-4466-a0af-fcb5ce630a97"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:39:44.824508 master-1 kubenswrapper[4771]: I1011 10:39:44.824439 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:39:44.824508 master-1 kubenswrapper[4771]: I1011 10:39:44.824483 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:39:44.824508 master-1 kubenswrapper[4771]: I1011 10:39:44.824495 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04d0b40e-b6ae-4466-a0af-fcb5ce630a97-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:39:45.110768 master-0 kubenswrapper[4790]: I1011 10:39:45.110568 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:45.111122 master-0 kubenswrapper[4790]: E1011 10:39:45.110822 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:45.111122 master-0 kubenswrapper[4790]: E1011 10:39:45.110863 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-8-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:45.111122 master-0 kubenswrapper[4790]: E1011 10:39:45.110928 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access 
podName:a3934355-bb61-4316-b164-05294e12906a nodeName:}" failed. No retries permitted until 2025-10-11 10:39:46.110906122 +0000 UTC m=+62.665366414 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access") pod "installer-8-master-0" (UID: "a3934355-bb61-4316-b164-05294e12906a") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:45.242203 master-1 kubenswrapper[4771]: I1011 10:39:45.242120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"04d0b40e-b6ae-4466-a0af-fcb5ce630a97","Type":"ContainerDied","Data":"bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b"} Oct 11 10:39:45.242203 master-1 kubenswrapper[4771]: I1011 10:39:45.242180 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc3a84466c188ab0111b2748b205176a81652b14bfe38d9a3a683ab12c3236b" Oct 11 10:39:45.242597 master-1 kubenswrapper[4771]: I1011 10:39:45.242261 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 11 10:39:45.515485 master-2 kubenswrapper[4776]: I1011 10:39:45.515424 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.516101 master-2 kubenswrapper[4776]: I1011 10:39:45.515975 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.522944 master-2 kubenswrapper[4776]: I1011 10:39:45.522908 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.968573 master-2 kubenswrapper[4776]: I1011 10:39:45.968521 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:46.120033 master-0 kubenswrapper[4790]: I1011 10:39:46.119900 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:46.120831 master-0 kubenswrapper[4790]: E1011 10:39:46.120233 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:46.120831 master-0 kubenswrapper[4790]: E1011 10:39:46.120275 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-8-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:46.120831 master-0 kubenswrapper[4790]: E1011 10:39:46.120370 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access 
podName:a3934355-bb61-4316-b164-05294e12906a nodeName:}" failed. No retries permitted until 2025-10-11 10:39:48.120339578 +0000 UTC m=+64.674799910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access") pod "installer-8-master-0" (UID: "a3934355-bb61-4316-b164-05294e12906a") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:46.291969 master-0 kubenswrapper[4790]: I1011 10:39:46.291786 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:46.291969 master-0 kubenswrapper[4790]: I1011 10:39:46.291901 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:46.292455 master-0 kubenswrapper[4790]: I1011 10:39:46.291808 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:46.292455 master-0 kubenswrapper[4790]: E1011 10:39:46.292104 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:46.292455 master-0 kubenswrapper[4790]: E1011 10:39:46.292167 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-8-master-0" podUID="a3934355-bb61-4316-b164-05294e12906a" Oct 11 10:39:46.292455 master-0 kubenswrapper[4790]: E1011 10:39:46.292267 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:46.464815 master-2 kubenswrapper[4776]: I1011 10:39:46.464491 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:46.464815 master-2 kubenswrapper[4776]: I1011 10:39:46.464621 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:47.652298 master-0 kubenswrapper[4790]: I1011 10:39:47.652243 4790 generic.go:334] "Generic (PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="bc0358c40c75dc89bef20966f0a2851fd62d9b0845f2052ef84938a96fac4d83" exitCode=0 Oct 11 10:39:47.652842 master-0 kubenswrapper[4790]: I1011 10:39:47.652297 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"bc0358c40c75dc89bef20966f0a2851fd62d9b0845f2052ef84938a96fac4d83"} Oct 11 10:39:48.139379 master-0 
kubenswrapper[4790]: I1011 10:39:48.138813 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:48.139379 master-0 kubenswrapper[4790]: E1011 10:39:48.139137 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:48.140015 master-0 kubenswrapper[4790]: E1011 10:39:48.139415 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-8-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:48.140015 master-0 kubenswrapper[4790]: E1011 10:39:48.139479 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access podName:a3934355-bb61-4316-b164-05294e12906a nodeName:}" failed. No retries permitted until 2025-10-11 10:39:52.139457594 +0000 UTC m=+68.693917886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access") pod "installer-8-master-0" (UID: "a3934355-bb61-4316-b164-05294e12906a") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:48.199604 master-0 kubenswrapper[4790]: I1011 10:39:48.198895 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-8-master-0"] Oct 11 10:39:48.199604 master-0 kubenswrapper[4790]: I1011 10:39:48.199043 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:48.199604 master-0 kubenswrapper[4790]: E1011 10:39:48.199153 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-8-master-0" podUID="a3934355-bb61-4316-b164-05294e12906a" Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: I1011 10:39:48.203811 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zcc4t"] Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: I1011 10:39:48.203872 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-bn2sv"] Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: I1011 10:39:48.203967 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: E1011 10:39:48.204044 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: I1011 10:39:48.204106 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:48.204184 master-0 kubenswrapper[4790]: E1011 10:39:48.204155 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: I1011 10:39:48.242605 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:48.242791 master-1 
kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:48.242791 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:48.244532 master-1 kubenswrapper[4771]: I1011 10:39:48.242779 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:48.387442 master-2 kubenswrapper[4776]: I1011 10:39:48.387155 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387474 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387504 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387522 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387592 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387650 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:48.391925 master-2 kubenswrapper[4776]: I1011 10:39:48.391893 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.660971 master-0 kubenswrapper[4790]: I1011 10:39:48.660909 4790 generic.go:334] "Generic (PLEG): container finished" podID="24d4b452-8f49-4e9e-98b6-3429afefc4c4" containerID="a63f08ea91bf39979073d78bd84f5dd5d89bf91aa04b704a104dde5b04c85341" exitCode=0 Oct 11 10:39:48.661889 master-0 kubenswrapper[4790]: I1011 10:39:48.661017 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerDied","Data":"a63f08ea91bf39979073d78bd84f5dd5d89bf91aa04b704a104dde5b04c85341"} Oct 11 10:39:49.157577 master-2 kubenswrapper[4776]: I1011 10:39:49.157496 
4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:39:49.292064 master-0 kubenswrapper[4790]: I1011 10:39:49.291887 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:49.292269 master-0 kubenswrapper[4790]: E1011 10:39:49.292103 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd/installer-8-master-0" podUID="a3934355-bb61-4316-b164-05294e12906a" Oct 11 10:39:49.674171 master-0 kubenswrapper[4790]: I1011 10:39:49.674040 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" event={"ID":"24d4b452-8f49-4e9e-98b6-3429afefc4c4","Type":"ContainerStarted","Data":"37ac6c36a12e5c9753eb09f407c7e986b907eb87aa9519edd17357fa157ee20e"} Oct 11 10:39:50.023627 master-2 kubenswrapper[4776]: I1011 10:39:50.023569 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:50.024164 master-2 kubenswrapper[4776]: I1011 10:39:50.023630 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: I1011 10:39:50.267287 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp 
container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:39:50.267829 master-2 kubenswrapper[4776]: I1011 10:39:50.267373 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:50.292386 master-0 kubenswrapper[4790]: I1011 10:39:50.292261 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:50.292772 master-0 kubenswrapper[4790]: I1011 10:39:50.292295 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:50.292772 master-0 kubenswrapper[4790]: E1011 10:39:50.292423 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:50.292772 master-0 kubenswrapper[4790]: E1011 10:39:50.292569 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:50.825332 master-0 kubenswrapper[4790]: I1011 10:39:50.825268 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96nq6" Oct 11 10:39:50.877276 master-0 kubenswrapper[4790]: I1011 10:39:50.877137 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ft6fv" podStartSLOduration=15.12860171 podStartE2EDuration="1m0.877114978s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.97209223 +0000 UTC m=+17.526552522" lastFinishedPulling="2025-10-11 10:39:46.720605498 +0000 UTC m=+63.275065790" observedRunningTime="2025-10-11 10:39:49.720948416 +0000 UTC m=+66.275408768" watchObservedRunningTime="2025-10-11 10:39:50.877114978 +0000 UTC m=+67.431575270" Oct 11 10:39:51.294821 master-0 kubenswrapper[4790]: I1011 10:39:51.294638 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:51.295137 master-0 kubenswrapper[4790]: E1011 10:39:51.294959 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd/installer-8-master-0" podUID="a3934355-bb61-4316-b164-05294e12906a" Oct 11 10:39:51.464815 master-2 kubenswrapper[4776]: I1011 10:39:51.464719 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:51.465626 master-2 kubenswrapper[4776]: I1011 10:39:51.464837 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:52.187913 master-0 kubenswrapper[4790]: I1011 10:39:52.187828 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:52.189181 master-0 kubenswrapper[4790]: E1011 10:39:52.188087 4790 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:52.189181 master-0 kubenswrapper[4790]: E1011 10:39:52.188149 4790 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-8-master-0: object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:52.189181 master-0 kubenswrapper[4790]: E1011 10:39:52.188248 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access podName:a3934355-bb61-4316-b164-05294e12906a 
nodeName:}" failed. No retries permitted until 2025-10-11 10:40:00.188217569 +0000 UTC m=+76.742677891 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access") pod "installer-8-master-0" (UID: "a3934355-bb61-4316-b164-05294e12906a") : object "openshift-etcd"/"kube-root-ca.crt" not registered Oct 11 10:39:52.291619 master-0 kubenswrapper[4790]: I1011 10:39:52.291504 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:52.291850 master-0 kubenswrapper[4790]: I1011 10:39:52.291623 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:52.291850 master-0 kubenswrapper[4790]: E1011 10:39:52.291725 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcc4t" podUID="a5b695d5-a88c-4ff9-bc59-d13f61f237f6" Oct 11 10:39:52.292182 master-0 kubenswrapper[4790]: E1011 10:39:52.291848 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bn2sv" podUID="0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7" Oct 11 10:39:52.580685 master-1 kubenswrapper[4771]: I1011 10:39:52.580474 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:39:52.580685 master-1 kubenswrapper[4771]: I1011 10:39:52.580623 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:39:52.856936 master-0 kubenswrapper[4790]: I1011 10:39:52.856839 4790 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Oct 11 10:39:52.857330 master-0 kubenswrapper[4790]: I1011 10:39:52.857097 4790 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Oct 11 10:39:52.916135 master-0 kubenswrapper[4790]: I1011 10:39:52.915821 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-xctr8"] Oct 11 10:39:52.916456 master-0 kubenswrapper[4790]: I1011 10:39:52.916400 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv"] Oct 11 10:39:52.916686 master-0 kubenswrapper[4790]: I1011 10:39:52.916609 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:52.917064 master-0 kubenswrapper[4790]: I1011 10:39:52.917017 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:52.921001 master-0 kubenswrapper[4790]: I1011 10:39:52.920956 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:39:52.921181 master-0 kubenswrapper[4790]: I1011 10:39:52.921148 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:39:52.921367 master-0 kubenswrapper[4790]: I1011 10:39:52.921338 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:39:52.921563 master-0 kubenswrapper[4790]: I1011 10:39:52.921537 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z46lz" Oct 11 10:39:52.921662 master-0 kubenswrapper[4790]: I1011 10:39:52.921645 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:39:52.922151 master-0 kubenswrapper[4790]: I1011 10:39:52.922117 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:39:52.922511 master-0 kubenswrapper[4790]: I1011 10:39:52.922492 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:39:52.923137 master-0 kubenswrapper[4790]: I1011 10:39:52.923092 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:39:52.923270 master-0 kubenswrapper[4790]: I1011 10:39:52.923252 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:39:52.923530 master-0 kubenswrapper[4790]: I1011 
10:39:52.923513 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:39:52.923664 master-0 kubenswrapper[4790]: I1011 10:39:52.923623 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:39:52.923857 master-0 kubenswrapper[4790]: I1011 10:39:52.923839 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:39:52.935619 master-0 kubenswrapper[4790]: I1011 10:39:52.935544 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:39:52.950274 master-0 kubenswrapper[4790]: I1011 10:39:52.950133 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-xctr8"] Oct 11 10:39:52.952967 master-0 kubenswrapper[4790]: I1011 10:39:52.952868 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv"] Oct 11 10:39:52.990489 master-0 kubenswrapper[4790]: I1011 10:39:52.990402 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6xnjz"] Oct 11 10:39:52.991617 master-0 kubenswrapper[4790]: I1011 10:39:52.991544 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:52.996506 master-0 kubenswrapper[4790]: I1011 10:39:52.996455 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-kcxf6" Oct 11 10:39:52.997145 master-0 kubenswrapper[4790]: I1011 10:39:52.997057 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:39:52.997950 master-0 kubenswrapper[4790]: I1011 10:39:52.997919 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:39:52.998154 master-0 kubenswrapper[4790]: I1011 10:39:52.998079 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 11 10:39:53.014953 master-0 kubenswrapper[4790]: I1011 10:39:53.014860 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xnjz"] Oct 11 10:39:53.040571 master-0 kubenswrapper[4790]: I1011 10:39:53.040503 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xznwp"] Oct 11 10:39:53.041355 master-0 kubenswrapper[4790]: I1011 10:39:53.041313 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.044540 master-0 kubenswrapper[4790]: I1011 10:39:53.044414 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-dqfsj"] Oct 11 10:39:53.045237 master-0 kubenswrapper[4790]: I1011 10:39:53.044761 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.047688 master-0 kubenswrapper[4790]: I1011 10:39:53.047645 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-psm75" Oct 11 10:39:53.048859 master-0 kubenswrapper[4790]: I1011 10:39:53.048834 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 11 10:39:53.049241 master-0 kubenswrapper[4790]: I1011 10:39:53.049176 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 11 10:39:53.049761 master-0 kubenswrapper[4790]: I1011 10:39:53.049734 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"iptables-alerter-dockercfg-tndgd" Oct 11 10:39:53.049962 master-0 kubenswrapper[4790]: I1011 10:39:53.049861 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:39:53.049962 master-0 kubenswrapper[4790]: I1011 10:39:53.049902 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:39:53.050211 master-0 kubenswrapper[4790]: I1011 10:39:53.050173 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:39:53.059221 master-0 kubenswrapper[4790]: I1011 10:39:53.059128 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xznwp"] Oct 11 10:39:53.096490 master-0 kubenswrapper[4790]: I1011 10:39:53.096419 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-config\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " 
pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096520 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-config-volume\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096541 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-config\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096564 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-serving-cert\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096586 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4afece-b896-4fea-8b5f-ccebc400ee9f-cert\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096652 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fph6m\" (UniqueName: 
\"kubernetes.io/projected/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-kube-api-access-fph6m\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.096702 master-0 kubenswrapper[4790]: I1011 10:39:53.096722 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-client-ca\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096763 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-metrics-tls\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096794 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67ae8836-ab0a-4b32-acc6-f828c159c96e-host-slash\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096845 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xmh\" (UniqueName: \"kubernetes.io/projected/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-kube-api-access-g8xmh\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " 
pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096874 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/67ae8836-ab0a-4b32-acc6-f828c159c96e-iptables-alerter-script\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096907 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-client-ca\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096933 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9ss\" (UniqueName: \"kubernetes.io/projected/df4afece-b896-4fea-8b5f-ccebc400ee9f-kube-api-access-rv9ss\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.096982 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-proxy-ca-bundles\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.097033 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfkw\" (UniqueName: \"kubernetes.io/projected/67ae8836-ab0a-4b32-acc6-f828c159c96e-kube-api-access-jlfkw\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.097074 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-serving-cert\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.098015 master-0 kubenswrapper[4790]: I1011 10:39:53.097101 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wclz\" (UniqueName: \"kubernetes.io/projected/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-kube-api-access-6wclz\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.139731 master-0 kubenswrapper[4790]: I1011 10:39:53.139633 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cpn6z"] Oct 11 10:39:53.140373 master-0 kubenswrapper[4790]: I1011 10:39:53.140313 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.143327 master-0 kubenswrapper[4790]: I1011 10:39:53.143172 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-xzm85" Oct 11 10:39:53.144094 master-0 kubenswrapper[4790]: I1011 10:39:53.144029 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:39:53.144827 master-0 kubenswrapper[4790]: I1011 10:39:53.144275 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 11 10:39:53.198126 master-0 kubenswrapper[4790]: I1011 10:39:53.197980 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-serving-cert\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.198126 master-0 kubenswrapper[4790]: I1011 10:39:53.198073 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4afece-b896-4fea-8b5f-ccebc400ee9f-cert\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198132 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fph6m\" (UniqueName: \"kubernetes.io/projected/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-kube-api-access-fph6m\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 
10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198177 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-client-ca\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198224 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-metrics-tls\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198275 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67ae8836-ab0a-4b32-acc6-f828c159c96e-host-slash\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198348 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xmh\" (UniqueName: \"kubernetes.io/projected/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-kube-api-access-g8xmh\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198455 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/67ae8836-ab0a-4b32-acc6-f828c159c96e-iptables-alerter-script\") pod \"iptables-alerter-dqfsj\" (UID: 
\"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198510 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-client-ca\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198562 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9ss\" (UniqueName: \"kubernetes.io/projected/df4afece-b896-4fea-8b5f-ccebc400ee9f-kube-api-access-rv9ss\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198607 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-proxy-ca-bundles\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198657 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfkw\" (UniqueName: \"kubernetes.io/projected/67ae8836-ab0a-4b32-acc6-f828c159c96e-kube-api-access-jlfkw\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198743 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-serving-cert\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198800 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wclz\" (UniqueName: \"kubernetes.io/projected/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-kube-api-access-6wclz\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198874 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-config\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.198949 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-config-volume\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.199000 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-config\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.199162 master-0 kubenswrapper[4790]: I1011 10:39:53.199106 4790 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/67ae8836-ab0a-4b32-acc6-f828c159c96e-host-slash\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.200639 master-0 kubenswrapper[4790]: I1011 10:39:53.200556 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/67ae8836-ab0a-4b32-acc6-f828c159c96e-iptables-alerter-script\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.201142 master-0 kubenswrapper[4790]: I1011 10:39:53.201057 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-config-volume\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.201272 master-0 kubenswrapper[4790]: I1011 10:39:53.201063 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-client-ca\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.201532 master-0 kubenswrapper[4790]: I1011 10:39:53.201435 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-client-ca\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.201847 master-0 kubenswrapper[4790]: I1011 10:39:53.201727 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-proxy-ca-bundles\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.202205 master-0 kubenswrapper[4790]: I1011 10:39:53.202139 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-config\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.202787 master-0 kubenswrapper[4790]: I1011 10:39:53.202695 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-config\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.207547 master-0 kubenswrapper[4790]: I1011 10:39:53.207064 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-metrics-tls\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.207703 master-0 kubenswrapper[4790]: I1011 10:39:53.207357 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4afece-b896-4fea-8b5f-ccebc400ee9f-cert\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.208016 master-0 kubenswrapper[4790]: I1011 10:39:53.207950 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-serving-cert\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.208489 master-0 kubenswrapper[4790]: I1011 10:39:53.208420 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-serving-cert\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.220178 master-0 kubenswrapper[4790]: I1011 10:39:53.220105 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wclz\" (UniqueName: \"kubernetes.io/projected/9c1b597b-dba4-4011-9acd-e6d40ed8aea4-kube-api-access-6wclz\") pod \"dns-default-xznwp\" (UID: \"9c1b597b-dba4-4011-9acd-e6d40ed8aea4\") " pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.222664 master-0 kubenswrapper[4790]: I1011 10:39:53.222566 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfkw\" (UniqueName: \"kubernetes.io/projected/67ae8836-ab0a-4b32-acc6-f828c159c96e-kube-api-access-jlfkw\") pod \"iptables-alerter-dqfsj\" (UID: \"67ae8836-ab0a-4b32-acc6-f828c159c96e\") " pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.223952 master-0 kubenswrapper[4790]: I1011 10:39:53.223868 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fph6m\" (UniqueName: \"kubernetes.io/projected/8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a-kube-api-access-fph6m\") pod \"controller-manager-897b595f-xctr8\" (UID: \"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a\") " 
pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.227369 master-0 kubenswrapper[4790]: I1011 10:39:53.227222 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9ss\" (UniqueName: \"kubernetes.io/projected/df4afece-b896-4fea-8b5f-ccebc400ee9f-kube-api-access-rv9ss\") pod \"ingress-canary-6xnjz\" (UID: \"df4afece-b896-4fea-8b5f-ccebc400ee9f\") " pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.232035 master-0 kubenswrapper[4790]: I1011 10:39:53.231961 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xmh\" (UniqueName: \"kubernetes.io/projected/dd5837b4-2687-45b6-b9d5-6ef37d7d47fc-kube-api-access-g8xmh\") pod \"route-controller-manager-57c8488cd7-czzdv\" (UID: \"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: I1011 10:39:53.243227 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: 
[+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:53.243300 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:53.243554 master-0 kubenswrapper[4790]: I1011 10:39:53.243472 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:53.244635 master-1 kubenswrapper[4771]: I1011 10:39:53.243321 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:53.244635 master-1 kubenswrapper[4771]: I1011 10:39:53.243452 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: I1011 10:39:53.247566 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok 
Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:53.247615 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:53.248686 master-1 kubenswrapper[4771]: I1011 10:39:53.247624 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:53.251474 master-0 kubenswrapper[4790]: I1011 10:39:53.251411 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:53.296839 master-0 kubenswrapper[4790]: I1011 10:39:53.291794 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:39:53.296839 master-0 kubenswrapper[4790]: I1011 10:39:53.295752 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:39:53.296839 master-0 kubenswrapper[4790]: I1011 10:39:53.296080 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:39:53.309095 master-0 kubenswrapper[4790]: I1011 10:39:53.299817 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-node-bootstrap-token\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.309095 master-0 kubenswrapper[4790]: I1011 10:39:53.299879 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-certs\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.309095 master-0 kubenswrapper[4790]: I1011 10:39:53.300411 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rjm\" (UniqueName: \"kubernetes.io/projected/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-kube-api-access-k6rjm\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.314531 master-0 kubenswrapper[4790]: I1011 10:39:53.314232 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xnjz" Oct 11 10:39:53.366208 master-0 kubenswrapper[4790]: I1011 10:39:53.362670 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:53.378364 master-0 kubenswrapper[4790]: I1011 10:39:53.378251 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-dqfsj" Oct 11 10:39:53.401837 master-0 kubenswrapper[4790]: I1011 10:39:53.401783 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6rjm\" (UniqueName: \"kubernetes.io/projected/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-kube-api-access-k6rjm\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.401944 master-0 kubenswrapper[4790]: I1011 10:39:53.401898 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-node-bootstrap-token\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.401944 master-0 kubenswrapper[4790]: I1011 10:39:53.401928 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-certs\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.410348 master-0 kubenswrapper[4790]: I1011 10:39:53.410306 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-certs\") pod 
\"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.411400 master-0 kubenswrapper[4790]: I1011 10:39:53.411333 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-node-bootstrap-token\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.442147 master-0 kubenswrapper[4790]: I1011 10:39:53.442100 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6rjm\" (UniqueName: \"kubernetes.io/projected/4ce595b2-b8f1-40d5-85db-c1ea82bda0c3-kube-api-access-k6rjm\") pod \"machine-config-server-cpn6z\" (UID: \"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3\") " pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.456179 master-0 kubenswrapper[4790]: I1011 10:39:53.456050 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Oct 11 10:39:53.456533 master-0 kubenswrapper[4790]: I1011 10:39:53.456499 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.460020 master-0 kubenswrapper[4790]: I1011 10:39:53.459971 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:39:53.460295 master-0 kubenswrapper[4790]: I1011 10:39:53.460257 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:39:53.467176 master-0 kubenswrapper[4790]: I1011 10:39:53.467143 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cpn6z" Oct 11 10:39:53.474867 master-0 kubenswrapper[4790]: I1011 10:39:53.472376 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Oct 11 10:39:53.537314 master-0 kubenswrapper[4790]: I1011 10:39:53.536640 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-xctr8"] Oct 11 10:39:53.548816 master-0 kubenswrapper[4790]: I1011 10:39:53.548763 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv"] Oct 11 10:39:53.549988 master-0 kubenswrapper[4790]: W1011 10:39:53.549930 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c5908fc_cbf1_412d_ae91_23e3bbdf2b1a.slice/crio-66f870a5a10335a87cc92b89a6bb3d0d45fb46a25453b56104a6708b70365fe2 WatchSource:0}: Error finding container 66f870a5a10335a87cc92b89a6bb3d0d45fb46a25453b56104a6708b70365fe2: Status 404 returned error can't find the container with id 66f870a5a10335a87cc92b89a6bb3d0d45fb46a25453b56104a6708b70365fe2 Oct 11 10:39:53.566339 master-0 kubenswrapper[4790]: W1011 10:39:53.566247 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5837b4_2687_45b6_b9d5_6ef37d7d47fc.slice/crio-e5586526960a62ff8e99ed810d664ce57e3140abea4f81be691e37ea95bcb805 WatchSource:0}: Error finding container e5586526960a62ff8e99ed810d664ce57e3140abea4f81be691e37ea95bcb805: Status 404 returned error can't find the container with id e5586526960a62ff8e99ed810d664ce57e3140abea4f81be691e37ea95bcb805 Oct 11 10:39:53.571799 master-0 kubenswrapper[4790]: I1011 10:39:53.571760 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xnjz"] Oct 11 
10:39:53.593383 master-0 kubenswrapper[4790]: W1011 10:39:53.593318 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf4afece_b896_4fea_8b5f_ccebc400ee9f.slice/crio-64315d7e7fcd2574b74ea824285b5ff4af543c2a46185fc3af4306baf1c75247 WatchSource:0}: Error finding container 64315d7e7fcd2574b74ea824285b5ff4af543c2a46185fc3af4306baf1c75247: Status 404 returned error can't find the container with id 64315d7e7fcd2574b74ea824285b5ff4af543c2a46185fc3af4306baf1c75247 Oct 11 10:39:53.604824 master-0 kubenswrapper[4790]: I1011 10:39:53.604790 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.604897 master-0 kubenswrapper[4790]: I1011 10:39:53.604852 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.619425 master-0 kubenswrapper[4790]: I1011 10:39:53.619386 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xznwp"] Oct 11 10:39:53.625920 master-0 kubenswrapper[4790]: W1011 10:39:53.625867 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c1b597b_dba4_4011_9acd_e6d40ed8aea4.slice/crio-033682d36b7a106f3804a482d8511eae40961fbcf552c6f58c6a876c5bdffe1f WatchSource:0}: Error finding container 033682d36b7a106f3804a482d8511eae40961fbcf552c6f58c6a876c5bdffe1f: 
Status 404 returned error can't find the container with id 033682d36b7a106f3804a482d8511eae40961fbcf552c6f58c6a876c5bdffe1f Oct 11 10:39:53.691557 master-0 kubenswrapper[4790]: I1011 10:39:53.691466 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xznwp" event={"ID":"9c1b597b-dba4-4011-9acd-e6d40ed8aea4","Type":"ContainerStarted","Data":"033682d36b7a106f3804a482d8511eae40961fbcf552c6f58c6a876c5bdffe1f"} Oct 11 10:39:53.693416 master-0 kubenswrapper[4790]: I1011 10:39:53.693368 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xnjz" event={"ID":"df4afece-b896-4fea-8b5f-ccebc400ee9f","Type":"ContainerStarted","Data":"64315d7e7fcd2574b74ea824285b5ff4af543c2a46185fc3af4306baf1c75247"} Oct 11 10:39:53.695395 master-0 kubenswrapper[4790]: I1011 10:39:53.695350 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cpn6z" event={"ID":"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3","Type":"ContainerStarted","Data":"cbcffed3e591804c0115b4d5d01e0e54f333f6c8f5c24286210a94589ba8b0c8"} Oct 11 10:39:53.695491 master-0 kubenswrapper[4790]: I1011 10:39:53.695404 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cpn6z" event={"ID":"4ce595b2-b8f1-40d5-85db-c1ea82bda0c3","Type":"ContainerStarted","Data":"c7ce1fa5bd22e7dc35a2712ea20ec03215ba2152098c2d197ffbd414f9c01f1c"} Oct 11 10:39:53.698162 master-0 kubenswrapper[4790]: I1011 10:39:53.698081 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-dqfsj" event={"ID":"67ae8836-ab0a-4b32-acc6-f828c159c96e","Type":"ContainerStarted","Data":"eca40f1cc150e140c4b9eb4dd21f799f353cf62a23358250afef8d5640679c67"} Oct 11 10:39:53.699102 master-0 kubenswrapper[4790]: I1011 10:39:53.699054 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" event={"ID":"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc","Type":"ContainerStarted","Data":"e5586526960a62ff8e99ed810d664ce57e3140abea4f81be691e37ea95bcb805"} Oct 11 10:39:53.700755 master-0 kubenswrapper[4790]: I1011 10:39:53.700631 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" event={"ID":"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a","Type":"ContainerStarted","Data":"66f870a5a10335a87cc92b89a6bb3d0d45fb46a25453b56104a6708b70365fe2"} Oct 11 10:39:53.705748 master-0 kubenswrapper[4790]: I1011 10:39:53.705666 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.705748 master-0 kubenswrapper[4790]: I1011 10:39:53.705743 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.706310 master-0 kubenswrapper[4790]: I1011 10:39:53.706162 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.740228 master-0 kubenswrapper[4790]: I1011 10:39:53.740137 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:53.807189 master-0 kubenswrapper[4790]: I1011 10:39:53.807073 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:39:54.003587 master-0 kubenswrapper[4790]: I1011 10:39:54.003423 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cpn6z" podStartSLOduration=1.003397762 podStartE2EDuration="1.003397762s" podCreationTimestamp="2025-10-11 10:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:53.72411558 +0000 UTC m=+70.278575872" watchObservedRunningTime="2025-10-11 10:39:54.003397762 +0000 UTC m=+70.557858054" Oct 11 10:39:54.004129 master-0 kubenswrapper[4790]: I1011 10:39:54.004098 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Oct 11 10:39:54.011890 master-0 kubenswrapper[4790]: W1011 10:39:54.011812 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf810d826_e11a_4e68_8b42_f9cc96815f6e.slice/crio-0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a WatchSource:0}: Error finding container 0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a: Status 404 returned error can't find the container with id 0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a Oct 11 10:39:54.292332 master-0 kubenswrapper[4790]: I1011 10:39:54.292174 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:39:54.292332 master-0 kubenswrapper[4790]: I1011 10:39:54.292259 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:39:54.301701 master-0 kubenswrapper[4790]: I1011 10:39:54.301266 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:39:54.301701 master-0 kubenswrapper[4790]: I1011 10:39:54.301385 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-diagnostics"/"default-dockercfg-wbrmn" Oct 11 10:39:54.301701 master-0 kubenswrapper[4790]: I1011 10:39:54.301438 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-hgln7" Oct 11 10:39:54.302278 master-0 kubenswrapper[4790]: I1011 10:39:54.302102 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:39:54.302278 master-0 kubenswrapper[4790]: I1011 10:39:54.302144 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:39:54.704733 master-0 kubenswrapper[4790]: I1011 10:39:54.704628 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"f810d826-e11a-4e68-8b42-f9cc96815f6e","Type":"ContainerStarted","Data":"0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a"} Oct 11 10:39:55.454539 master-0 kubenswrapper[4790]: I1011 10:39:55.454431 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Oct 11 10:39:55.455511 master-0 kubenswrapper[4790]: I1011 10:39:55.455352 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.466771 master-0 kubenswrapper[4790]: I1011 10:39:55.466723 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Oct 11 10:39:55.526765 master-0 kubenswrapper[4790]: I1011 10:39:55.526651 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.526765 master-0 kubenswrapper[4790]: I1011 10:39:55.526744 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.526765 master-0 kubenswrapper[4790]: I1011 10:39:55.526765 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.628269 master-0 kubenswrapper[4790]: I1011 10:39:55.628034 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.628269 master-0 kubenswrapper[4790]: I1011 10:39:55.628115 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.628269 master-0 kubenswrapper[4790]: I1011 10:39:55.628140 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.628269 master-0 kubenswrapper[4790]: I1011 10:39:55.628207 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.628685 master-0 kubenswrapper[4790]: I1011 10:39:55.628293 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.654253 master-0 kubenswrapper[4790]: I1011 10:39:55.654141 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:55.780843 master-0 kubenswrapper[4790]: I1011 10:39:55.780574 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:39:56.464096 master-2 kubenswrapper[4776]: I1011 10:39:56.464034 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:56.464891 master-2 kubenswrapper[4776]: I1011 10:39:56.464107 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:56.658558 master-1 kubenswrapper[4771]: I1011 10:39:56.658450 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"] Oct 11 10:39:56.659925 master-1 kubenswrapper[4771]: E1011 10:39:56.658847 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d0b40e-b6ae-4466-a0af-fcb5ce630a97" containerName="installer" Oct 11 10:39:56.659925 master-1 kubenswrapper[4771]: I1011 10:39:56.658875 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d0b40e-b6ae-4466-a0af-fcb5ce630a97" containerName="installer" Oct 11 10:39:56.659925 master-1 kubenswrapper[4771]: I1011 10:39:56.659043 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d0b40e-b6ae-4466-a0af-fcb5ce630a97" containerName="installer" Oct 11 10:39:56.660239 master-1 kubenswrapper[4771]: I1011 10:39:56.659994 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.662948 master-1 kubenswrapper[4771]: I1011 10:39:56.662878 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:39:56.670625 master-1 kubenswrapper[4771]: I1011 10:39:56.670348 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"] Oct 11 10:39:56.723719 master-1 kubenswrapper[4771]: I1011 10:39:56.723633 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.724154 master-1 kubenswrapper[4771]: I1011 10:39:56.724116 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.826721 master-1 kubenswrapper[4771]: I1011 10:39:56.826587 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.827299 master-1 kubenswrapper[4771]: I1011 10:39:56.827263 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.827699 master-1 kubenswrapper[4771]: I1011 10:39:56.827472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.848295 master-1 kubenswrapper[4771]: I1011 10:39:56.848229 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:56.984570 master-1 kubenswrapper[4771]: I1011 10:39:56.984311 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:39:57.454949 master-1 kubenswrapper[4771]: I1011 10:39:57.454544 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"] Oct 11 10:39:57.468809 master-1 kubenswrapper[4771]: W1011 10:39:57.468739 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d28ccd7_4f37_4a24_a9ff_ef97ff08ae05.slice/crio-713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02 WatchSource:0}: Error finding container 713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02: Status 404 returned error can't find the container with id 713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02 Oct 11 10:39:57.721458 master-0 kubenswrapper[4790]: I1011 10:39:57.721391 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-dqfsj" event={"ID":"67ae8836-ab0a-4b32-acc6-f828c159c96e","Type":"ContainerStarted","Data":"34205c3f5946e7f31a2b129497975903b84c35a1a6d98e69c55d0a92c77a2d1f"} Oct 11 10:39:58.005461 master-0 kubenswrapper[4790]: I1011 10:39:58.005431 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Oct 11 10:39:58.007877 master-0 kubenswrapper[4790]: I1011 10:39:58.007792 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-dqfsj" podStartSLOduration=5.007760046 podStartE2EDuration="5.007760046s" podCreationTimestamp="2025-10-11 10:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:58.00522349 +0000 UTC m=+74.559683802" watchObservedRunningTime="2025-10-11 10:39:58.007760046 +0000 UTC m=+74.562220378" Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: I1011 10:39:58.245866 4771 patch_prober.go:28] 
interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-system-namespaces-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: 
[+]autoregister-completion ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:39:58.245986 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:39:58.250892 master-1 kubenswrapper[4771]: I1011 10:39:58.250841 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:58.319146 master-1 kubenswrapper[4771]: I1011 10:39:58.319079 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" event={"ID":"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05","Type":"ContainerStarted","Data":"ec9f0592baf8c5dd0d5f3cecb9104d9b2a55e7f1b365b952640c85483ccbac69"} Oct 11 10:39:58.319146 master-1 kubenswrapper[4771]: I1011 10:39:58.319148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" event={"ID":"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05","Type":"ContainerStarted","Data":"713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02"} Oct 11 10:39:58.387551 master-2 kubenswrapper[4776]: I1011 10:39:58.387484 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:58.388173 master-2 kubenswrapper[4776]: I1011 10:39:58.387555 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" 
podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:58.390767 master-2 kubenswrapper[4776]: I1011 10:39:58.390731 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:58.730652 master-0 kubenswrapper[4790]: I1011 10:39:58.730528 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6dd18b40-5213-44f7-83cd-99076fb3ee73","Type":"ContainerStarted","Data":"f810af855d03b458d1c2e2f8afd6d54238f19e74d825ff17da48e7f4eba7e4c6"} Oct 11 10:39:58.730652 master-0 kubenswrapper[4790]: I1011 10:39:58.730608 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6dd18b40-5213-44f7-83cd-99076fb3ee73","Type":"ContainerStarted","Data":"65085622d36bce3903d45075fbfade9a38ec4d90dad3e9cbfb565e4e9d566b71"} Oct 11 10:39:58.736078 master-0 kubenswrapper[4790]: I1011 10:39:58.735977 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xznwp" event={"ID":"9c1b597b-dba4-4011-9acd-e6d40ed8aea4","Type":"ContainerStarted","Data":"95e83832bbcdda9ddb74bf91b486535b14f3c77c85ad6272287c3b59b03885bf"} Oct 11 10:39:58.736078 master-0 kubenswrapper[4790]: I1011 10:39:58.736063 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xznwp" event={"ID":"9c1b597b-dba4-4011-9acd-e6d40ed8aea4","Type":"ContainerStarted","Data":"e91ba8938ccac1d9cbe66fa44dcbe3d0380a800c8dcfe1774f11b62abeee6e4e"} Oct 11 10:39:58.736296 master-0 kubenswrapper[4790]: I1011 10:39:58.736169 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xznwp" Oct 11 10:39:58.738087 master-0 kubenswrapper[4790]: I1011 
10:39:58.738009 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xnjz" event={"ID":"df4afece-b896-4fea-8b5f-ccebc400ee9f","Type":"ContainerStarted","Data":"aeb8dee2c340e2c1d5e45bb0bff615f0896e5e0fb1827f9885c0ba07ca524cd1"} Oct 11 10:39:58.740641 master-0 kubenswrapper[4790]: I1011 10:39:58.740559 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" event={"ID":"dd5837b4-2687-45b6-b9d5-6ef37d7d47fc","Type":"ContainerStarted","Data":"105b182c33715a1ffd6ad7be3a307bb1ad5281d259a0f997f8006ecb4233f7f3"} Oct 11 10:39:58.741165 master-0 kubenswrapper[4790]: I1011 10:39:58.741104 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:58.742788 master-0 kubenswrapper[4790]: I1011 10:39:58.742735 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"f810d826-e11a-4e68-8b42-f9cc96815f6e","Type":"ContainerStarted","Data":"031944b70ff26e5fd30918c1f6363036efb199cbed1980002fd956f601b576fd"} Oct 11 10:39:58.745211 master-0 kubenswrapper[4790]: I1011 10:39:58.745111 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" event={"ID":"8c5908fc-cbf1-412d-ae91-23e3bbdf2b1a","Type":"ContainerStarted","Data":"dee15171e48f2f1e829379524605e2654b7f5be51d28a70498094c931593f837"} Oct 11 10:39:58.745469 master-0 kubenswrapper[4790]: I1011 10:39:58.745401 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 10:39:58.755214 master-0 kubenswrapper[4790]: I1011 10:39:58.755126 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" Oct 11 
10:39:58.769923 master-1 kubenswrapper[4771]: I1011 10:39:58.769790 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-1" podStartSLOduration=2.769763861 podStartE2EDuration="2.769763861s" podCreationTimestamp="2025-10-11 10:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:58.689148724 +0000 UTC m=+830.663375195" watchObservedRunningTime="2025-10-11 10:39:58.769763861 +0000 UTC m=+830.743990302" Oct 11 10:39:58.822461 master-0 kubenswrapper[4790]: I1011 10:39:58.822393 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" Oct 11 10:39:59.217847 master-0 kubenswrapper[4790]: I1011 10:39:59.217685 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=4.217656238 podStartE2EDuration="4.217656238s" podCreationTimestamp="2025-10-11 10:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:59.215596595 +0000 UTC m=+75.770056917" watchObservedRunningTime="2025-10-11 10:39:59.217656238 +0000 UTC m=+75.772116570" Oct 11 10:39:59.327900 master-1 kubenswrapper[4771]: I1011 10:39:59.327806 4771 generic.go:334] "Generic (PLEG): container finished" podID="3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" containerID="ec9f0592baf8c5dd0d5f3cecb9104d9b2a55e7f1b365b952640c85483ccbac69" exitCode=0 Oct 11 10:39:59.327900 master-1 kubenswrapper[4771]: I1011 10:39:59.327872 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" 
event={"ID":"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05","Type":"ContainerDied","Data":"ec9f0592baf8c5dd0d5f3cecb9104d9b2a55e7f1b365b952640c85483ccbac69"} Oct 11 10:39:59.460843 master-0 kubenswrapper[4790]: I1011 10:39:59.460653 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv" podStartSLOduration=9.446136633 podStartE2EDuration="13.460622039s" podCreationTimestamp="2025-10-11 10:39:46 +0000 UTC" firstStartedPulling="2025-10-11 10:39:53.568562776 +0000 UTC m=+70.123023068" lastFinishedPulling="2025-10-11 10:39:57.583048152 +0000 UTC m=+74.137508474" observedRunningTime="2025-10-11 10:39:59.459289234 +0000 UTC m=+76.013749566" watchObservedRunningTime="2025-10-11 10:39:59.460622039 +0000 UTC m=+76.015082401" Oct 11 10:39:59.486364 master-2 kubenswrapper[4776]: I1011 10:39:59.486269 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:39:59.487458 master-2 kubenswrapper[4776]: I1011 10:39:59.487251 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.489636 master-2 kubenswrapper[4776]: I1011 10:39:59.489572 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:39:59.552135 master-2 kubenswrapper[4776]: I1011 10:39:59.552019 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.552135 master-2 kubenswrapper[4776]: I1011 10:39:59.552089 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.649827 master-2 kubenswrapper[4776]: I1011 10:39:59.649763 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:39:59.653330 master-2 kubenswrapper[4776]: I1011 10:39:59.653269 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.653448 master-2 kubenswrapper[4776]: I1011 10:39:59.653414 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.653548 master-2 kubenswrapper[4776]: I1011 10:39:59.653492 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.703462 master-0 kubenswrapper[4790]: I1011 10:39:59.703372 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:39:59.704447 master-0 kubenswrapper[4790]: I1011 10:39:59.704389 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.704913 master-0 kubenswrapper[4790]: I1011 10:39:59.704828 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"] Oct 11 10:39:59.705842 master-0 kubenswrapper[4790]: I1011 10:39:59.705802 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.708925 master-0 kubenswrapper[4790]: I1011 10:39:59.708863 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr" Oct 11 10:39:59.711837 master-0 kubenswrapper[4790]: I1011 10:39:59.711770 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:39:59.711837 master-0 kubenswrapper[4790]: I1011 10:39:59.711802 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:39:59.723933 master-0 kubenswrapper[4790]: I1011 10:39:59.723397 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724001 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724003 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724134 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724184 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724016 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724516 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" 
Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724533 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724552 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.724833 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.725174 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.725251 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.725281 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.725415 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:39:59.725806 master-0 kubenswrapper[4790]: I1011 10:39:59.725426 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:39:59.728364 master-0 kubenswrapper[4790]: I1011 10:39:59.727450 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"] Oct 11 10:39:59.728565 master-0 kubenswrapper[4790]: I1011 10:39:59.728511 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:39:59.730314 master-0 kubenswrapper[4790]: I1011 10:39:59.730247 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.731191 master-0 kubenswrapper[4790]: I1011 10:39:59.731150 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-khqd5"] Oct 11 10:39:59.735078 master-0 kubenswrapper[4790]: I1011 10:39:59.734037 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.735078 master-0 kubenswrapper[4790]: I1011 10:39:59.734794 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-n88q4"] Oct 11 10:39:59.735345 master-0 kubenswrapper[4790]: I1011 10:39:59.735269 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.739145 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xznwp" podStartSLOduration=2.7828357329999998 podStartE2EDuration="6.73911668s" podCreationTimestamp="2025-10-11 10:39:53 +0000 UTC" firstStartedPulling="2025-10-11 10:39:53.628166742 +0000 UTC m=+70.182627034" lastFinishedPulling="2025-10-11 10:39:57.584447659 +0000 UTC m=+74.138907981" observedRunningTime="2025-10-11 10:39:59.724670595 +0000 UTC m=+76.279130947" watchObservedRunningTime="2025-10-11 10:39:59.73911668 +0000 UTC m=+76.293577042" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742010 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742339 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742455 4790 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742589 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742787 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.742875 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743422 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743452 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743559 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743650 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743666 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743752 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.743934 4790 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.744426 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.744592 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.744611 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745240 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745413 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745419 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745671 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745855 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745918 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.746012 4790 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.745952 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 11 10:39:59.747652 master-0 kubenswrapper[4790]: I1011 10:39:59.746576 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:39:59.749677 master-0 kubenswrapper[4790]: I1011 10:39:59.748336 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:39:59.749677 master-0 kubenswrapper[4790]: I1011 10:39:59.748388 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"] Oct 11 10:39:59.750065 master-0 kubenswrapper[4790]: I1011 10:39:59.750026 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2ocquro0n92lc" Oct 11 10:39:59.756404 master-2 kubenswrapper[4776]: I1011 10:39:59.756277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.758230 master-0 kubenswrapper[4790]: I1011 10:39:59.757411 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_revision-pruner-6-master-0_f810d826-e11a-4e68-8b42-f9cc96815f6e/pruner/0.log" Oct 11 10:39:59.758230 master-0 kubenswrapper[4790]: I1011 10:39:59.757497 4790 generic.go:334] "Generic (PLEG): container finished" podID="f810d826-e11a-4e68-8b42-f9cc96815f6e" containerID="031944b70ff26e5fd30918c1f6363036efb199cbed1980002fd956f601b576fd" exitCode=255 Oct 11 10:39:59.758230 master-0 kubenswrapper[4790]: I1011 
10:39:59.757690 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"f810d826-e11a-4e68-8b42-f9cc96815f6e","Type":"ContainerDied","Data":"031944b70ff26e5fd30918c1f6363036efb199cbed1980002fd956f601b576fd"} Oct 11 10:39:59.758230 master-0 kubenswrapper[4790]: I1011 10:39:59.757808 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:39:59.761110 master-0 kubenswrapper[4790]: I1011 10:39:59.760454 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:39:59.767862 master-0 kubenswrapper[4790]: I1011 10:39:59.767434 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:39:59.779395 master-0 kubenswrapper[4790]: I1011 10:39:59.778502 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"] Oct 11 10:39:59.780421 master-0 kubenswrapper[4790]: I1011 10:39:59.780363 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-khqd5"] Oct 11 10:39:59.785146 master-0 kubenswrapper[4790]: I1011 10:39:59.785070 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-n88q4"] Oct 11 10:39:59.790614 master-0 kubenswrapper[4790]: I1011 10:39:59.790058 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-897b595f-xctr8" podStartSLOduration=9.736230014 podStartE2EDuration="13.79004041s" podCreationTimestamp="2025-10-11 10:39:46 +0000 UTC" firstStartedPulling="2025-10-11 10:39:53.552600053 +0000 UTC m=+70.107060345" lastFinishedPulling="2025-10-11 10:39:57.606410449 +0000 UTC m=+74.160870741" observedRunningTime="2025-10-11 10:39:59.78884959 +0000 
UTC m=+76.343309922" watchObservedRunningTime="2025-10-11 10:39:59.79004041 +0000 UTC m=+76.344500702" Oct 11 10:39:59.802924 master-2 kubenswrapper[4776]: I1011 10:39:59.802872 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.817225 master-0 kubenswrapper[4790]: I1011 10:39:59.817132 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6xnjz" podStartSLOduration=3.828545097 podStartE2EDuration="7.817110562s" podCreationTimestamp="2025-10-11 10:39:52 +0000 UTC" firstStartedPulling="2025-10-11 10:39:53.596407758 +0000 UTC m=+70.150868050" lastFinishedPulling="2025-10-11 10:39:57.584973223 +0000 UTC m=+74.139433515" observedRunningTime="2025-10-11 10:39:59.816209759 +0000 UTC m=+76.370670051" watchObservedRunningTime="2025-10-11 10:39:59.817110562 +0000 UTC m=+76.371570854" Oct 11 10:39:59.842092 master-0 kubenswrapper[4790]: I1011 10:39:59.841937 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=3.27287778 podStartE2EDuration="6.841898185s" podCreationTimestamp="2025-10-11 10:39:53 +0000 UTC" firstStartedPulling="2025-10-11 10:39:54.014094239 +0000 UTC m=+70.568554541" lastFinishedPulling="2025-10-11 10:39:57.583114654 +0000 UTC m=+74.137574946" observedRunningTime="2025-10-11 10:39:59.839600055 +0000 UTC m=+76.394060357" watchObservedRunningTime="2025-10-11 10:39:59.841898185 +0000 UTC m=+76.396358517" Oct 11 10:39:59.871817 master-0 kubenswrapper[4790]: I1011 10:39:59.871659 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.871817 master-0 kubenswrapper[4790]: I1011 10:39:59.871785 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsdf\" (UniqueName: \"kubernetes.io/projected/a6689745-4f25-4776-9f5c-6bfd7abe62a8-kube-api-access-mpsdf\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.871817 master-0 kubenswrapper[4790]: I1011 10:39:59.871846 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.871904 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.871987 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872040 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872085 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872135 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872220 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872267 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " 
pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872312 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-policies\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.872359 master-0 kubenswrapper[4790]: I1011 10:39:59.872366 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872417 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872463 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1254ac82-5820-431e-baeb-3ae7d7997b38-audit-log\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872543 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-88h25\" (UniqueName: \"kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872593 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872641 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872687 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872900 master-0 kubenswrapper[4790]: I1011 10:39:59.872770 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.872900 master-0 
kubenswrapper[4790]: I1011 10:39:59.872826 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.873314 master-0 kubenswrapper[4790]: I1011 10:39:59.872958 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.873314 master-0 kubenswrapper[4790]: I1011 10:39:59.873017 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78q9m\" (UniqueName: \"kubernetes.io/projected/1254ac82-5820-431e-baeb-3ae7d7997b38-kube-api-access-78q9m\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.873314 master-0 kubenswrapper[4790]: I1011 10:39:59.873073 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.873314 master-0 kubenswrapper[4790]: I1011 10:39:59.873131 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.873314 master-0 kubenswrapper[4790]: I1011 10:39:59.873189 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.873803 master-0 kubenswrapper[4790]: I1011 10:39:59.873769 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.873865 master-0 kubenswrapper[4790]: I1011 10:39:59.873835 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.874039 master-0 kubenswrapper[4790]: I1011 10:39:59.873969 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.874120 master-0 kubenswrapper[4790]: I1011 10:39:59.874086 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.874182 master-0 kubenswrapper[4790]: I1011 10:39:59.874146 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.874250 master-0 kubenswrapper[4790]: I1011 10:39:59.874180 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2kt7\" (UniqueName: \"kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.874250 master-0 kubenswrapper[4790]: I1011 10:39:59.874216 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.874376 master-0 kubenswrapper[4790]: I1011 10:39:59.874273 
4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.874376 master-0 kubenswrapper[4790]: I1011 10:39:59.874310 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.874376 master-0 kubenswrapper[4790]: I1011 10:39:59.874353 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.874667 master-0 kubenswrapper[4790]: I1011 10:39:59.874614 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.874667 master-0 kubenswrapper[4790]: I1011 10:39:59.874660 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config\") pod \"console-76f8bc4746-9rjdm\" (UID: 
\"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.874844 master-0 kubenswrapper[4790]: I1011 10:39:59.874691 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.874844 master-0 kubenswrapper[4790]: I1011 10:39:59.874790 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.875049 master-0 kubenswrapper[4790]: I1011 10:39:59.874979 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.875188 master-0 kubenswrapper[4790]: I1011 10:39:59.875149 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.875262 master-0 kubenswrapper[4790]: I1011 10:39:59.875226 4790 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-dir\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.875262 master-0 kubenswrapper[4790]: I1011 10:39:59.875256 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljnp\" (UniqueName: \"kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.875498 master-0 kubenswrapper[4790]: I1011 10:39:59.875438 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.875584 master-0 kubenswrapper[4790]: I1011 10:39:59.875508 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.875660 master-0 kubenswrapper[4790]: I1011 10:39:59.875597 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " 
pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977074 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977131 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977149 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977171 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977192 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977210 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977231 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-policies\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977252 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977270 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 
10:39:59.977289 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1254ac82-5820-431e-baeb-3ae7d7997b38-audit-log\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977308 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88h25\" (UniqueName: \"kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977325 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977345 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977361 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 
10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977384 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.978918 master-0 kubenswrapper[4790]: I1011 10:39:59.977406 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977425 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977442 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78q9m\" (UniqueName: \"kubernetes.io/projected/1254ac82-5820-431e-baeb-3ae7d7997b38-kube-api-access-78q9m\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977459 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle\") pod \"console-76f8bc4746-9rjdm\" (UID: 
\"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977478 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977498 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977518 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977535 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977555 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977575 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977604 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977630 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2kt7\" (UniqueName: \"kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977612 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: 
\"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.977654 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.978294 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.978361 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.980749 master-0 kubenswrapper[4790]: I1011 10:39:59.978634 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.978738 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.978794 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.978850 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.978893 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.978983 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979038 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979087 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljnp\" (UniqueName: \"kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979135 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-dir\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979218 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979274 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" 
Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979319 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979374 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1254ac82-5820-431e-baeb-3ae7d7997b38-audit-log\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979395 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979452 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsdf\" (UniqueName: \"kubernetes.io/projected/a6689745-4f25-4776-9f5c-6bfd7abe62a8-kube-api-access-mpsdf\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.979587 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: 
\"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.982261 master-0 kubenswrapper[4790]: I1011 10:39:59.980015 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.979834 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-policies\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.980965 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.981257 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.981362 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6689745-4f25-4776-9f5c-6bfd7abe62a8-audit-dir\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: 
\"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.981879 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.982321 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.983589 master-0 kubenswrapper[4790]: I1011 10:39:59.982724 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.984251 master-0 kubenswrapper[4790]: I1011 10:39:59.983820 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.984251 master-0 kubenswrapper[4790]: I1011 10:39:59.983874 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.985205 master-0 kubenswrapper[4790]: I1011 10:39:59.984435 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.985205 master-0 kubenswrapper[4790]: I1011 10:39:59.984491 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.985454 master-0 kubenswrapper[4790]: I1011 10:39:59.985325 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.985623 master-0 kubenswrapper[4790]: I1011 10:39:59.985527 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.985623 master-0 kubenswrapper[4790]: I1011 10:39:59.985584 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1254ac82-5820-431e-baeb-3ae7d7997b38-metrics-server-audit-profiles\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.986000 master-0 kubenswrapper[4790]: I1011 10:39:59.985873 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.986072 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987437 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.986095 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.986331 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.986485 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.986268 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987557 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987384 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987657 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987204 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.987797 master-0 kubenswrapper[4790]: I1011 10:39:59.987064 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.988303 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.988666 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.988691 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-client-certs\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.988746 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.988840 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.989010 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-secret-metrics-server-tls\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: 
I1011 10:39:59.989032 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1254ac82-5820-431e-baeb-3ae7d7997b38-client-ca-bundle\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:39:59.989625 master-0 kubenswrapper[4790]: I1011 10:39:59.989403 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:39:59.990276 master-0 kubenswrapper[4790]: I1011 10:39:59.990095 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.990276 master-0 kubenswrapper[4790]: I1011 10:39:59.990158 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:39:59.990412 master-0 kubenswrapper[4790]: I1011 10:39:59.990343 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.991049 master-0 kubenswrapper[4790]: I1011 10:39:59.991004 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:39:59.992114 master-0 kubenswrapper[4790]: I1011 10:39:59.992057 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6689745-4f25-4776-9f5c-6bfd7abe62a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:39:59.999648 master-0 kubenswrapper[4790]: I1011 10:39:59.999570 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88h25\" (UniqueName: \"kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25\") pod \"apiserver-69df5d46bc-wjtq5\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:40:00.002337 master-0 kubenswrapper[4790]: I1011 10:40:00.002283 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2kt7\" (UniqueName: \"kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7\") pod \"apiserver-656768b4df-9c8k6\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:00.005179 master-0 kubenswrapper[4790]: I1011 10:40:00.005119 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78q9m\" (UniqueName: 
\"kubernetes.io/projected/1254ac82-5820-431e-baeb-3ae7d7997b38-kube-api-access-78q9m\") pod \"metrics-server-7d46fcc5c6-n88q4\" (UID: \"1254ac82-5820-431e-baeb-3ae7d7997b38\") " pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:40:00.005310 master-0 kubenswrapper[4790]: I1011 10:40:00.005198 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljnp\" (UniqueName: \"kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp\") pod \"console-76f8bc4746-9rjdm\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") " pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:40:00.006910 master-0 kubenswrapper[4790]: I1011 10:40:00.006841 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsdf\" (UniqueName: \"kubernetes.io/projected/a6689745-4f25-4776-9f5c-6bfd7abe62a8-kube-api-access-mpsdf\") pod \"oauth-openshift-6fccd5ccc-khqd5\" (UID: \"a6689745-4f25-4776-9f5c-6bfd7abe62a8\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:40:00.023870 master-2 kubenswrapper[4776]: I1011 10:40:00.023685 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:00.023870 master-2 kubenswrapper[4776]: I1011 10:40:00.023735 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:00.042454 master-0 kubenswrapper[4790]: I1011 10:40:00.042341 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:00.058083 master-0 kubenswrapper[4790]: I1011 10:40:00.058022 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:40:00.076189 master-0 kubenswrapper[4790]: I1011 10:40:00.076072 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:40:00.089547 master-0 kubenswrapper[4790]: I1011 10:40:00.089460 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:40:00.099053 master-0 kubenswrapper[4790]: I1011 10:40:00.098970 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:40:00.204099 master-2 kubenswrapper[4776]: I1011 10:40:00.204065 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:40:00.210494 master-2 kubenswrapper[4776]: W1011 10:40:00.210440 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4114c1be_d3d9_438f_b215_619b0aa3e114.slice/crio-fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23 WatchSource:0}: Error finding container fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23: Status 404 returned error can't find the container with id fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23 Oct 11 10:40:00.288473 master-0 kubenswrapper[4790]: I1011 10:40:00.288409 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " 
pod="openshift-etcd/installer-8-master-0" Oct 11 10:40:00.294257 master-0 kubenswrapper[4790]: I1011 10:40:00.293558 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"installer-8-master-0\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") " pod="openshift-etcd/installer-8-master-0" Oct 11 10:40:00.508955 master-0 kubenswrapper[4790]: I1011 10:40:00.508786 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0" Oct 11 10:40:00.522564 master-0 kubenswrapper[4790]: I1011 10:40:00.522483 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:40:00.574934 master-0 kubenswrapper[4790]: I1011 10:40:00.574726 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"] Oct 11 10:40:00.578769 master-0 kubenswrapper[4790]: I1011 10:40:00.578319 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"] Oct 11 10:40:00.580878 master-0 kubenswrapper[4790]: W1011 10:40:00.580788 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod099ca022_6e9c_4604_b517_d90713dd6a44.slice/crio-1623fe4c5875e03dc8879c8e18034ad8d12a92c81f76f68ae1603f1e4ba99e21 WatchSource:0}: Error finding container 1623fe4c5875e03dc8879c8e18034ad8d12a92c81f76f68ae1603f1e4ba99e21: Status 404 returned error can't find the container with id 1623fe4c5875e03dc8879c8e18034ad8d12a92c81f76f68ae1603f1e4ba99e21 Oct 11 10:40:00.587766 master-0 kubenswrapper[4790]: W1011 10:40:00.587668 4790 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab7a5bf0_d3df_49f7_bd97_a7b9425fe9db.slice/crio-f61d3e1404aecae93207fe8056d63854d603a520f23437c58f022239f1436a77 WatchSource:0}: Error finding container f61d3e1404aecae93207fe8056d63854d603a520f23437c58f022239f1436a77: Status 404 returned error can't find the container with id f61d3e1404aecae93207fe8056d63854d603a520f23437c58f022239f1436a77 Oct 11 10:40:00.591080 master-0 kubenswrapper[4790]: I1011 10:40:00.589854 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d46fcc5c6-n88q4"] Oct 11 10:40:00.591080 master-0 kubenswrapper[4790]: I1011 10:40:00.591020 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-khqd5"] Oct 11 10:40:00.598396 master-0 kubenswrapper[4790]: W1011 10:40:00.598338 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6689745_4f25_4776_9f5c_6bfd7abe62a8.slice/crio-af52309d15964e4023c841a9a9505b6c6b33b2ad039face745f16cc56a858b55 WatchSource:0}: Error finding container af52309d15964e4023c841a9a9505b6c6b33b2ad039face745f16cc56a858b55: Status 404 returned error can't find the container with id af52309d15964e4023c841a9a9505b6c6b33b2ad039face745f16cc56a858b55 Oct 11 10:40:00.598842 master-0 kubenswrapper[4790]: W1011 10:40:00.598824 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1254ac82_5820_431e_baeb_3ae7d7997b38.slice/crio-ec5edfa238e67eb6dc656c72be1dbc1f0397d7a8d80950005bcb4a224f23c220 WatchSource:0}: Error finding container ec5edfa238e67eb6dc656c72be1dbc1f0397d7a8d80950005bcb4a224f23c220: Status 404 returned error can't find the container with id ec5edfa238e67eb6dc656c72be1dbc1f0397d7a8d80950005bcb4a224f23c220 Oct 11 10:40:00.744007 master-0 kubenswrapper[4790]: I1011 10:40:00.743913 4790 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-8-master-0"] Oct 11 10:40:00.752048 master-0 kubenswrapper[4790]: W1011 10:40:00.751936 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3934355_bb61_4316_b164_05294e12906a.slice/crio-4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376 WatchSource:0}: Error finding container 4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376: Status 404 returned error can't find the container with id 4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376 Oct 11 10:40:00.762096 master-0 kubenswrapper[4790]: I1011 10:40:00.762007 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerStarted","Data":"1623fe4c5875e03dc8879c8e18034ad8d12a92c81f76f68ae1603f1e4ba99e21"} Oct 11 10:40:00.763314 master-0 kubenswrapper[4790]: I1011 10:40:00.763240 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" event={"ID":"76dd8647-4ad5-4874-b2d2-dee16aab637a","Type":"ContainerStarted","Data":"64ce93912fbe2ce263f72579fc62109333989150c0bd59c119eb0bd06f24caa2"} Oct 11 10:40:00.764250 master-0 kubenswrapper[4790]: I1011 10:40:00.764211 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-0" event={"ID":"a3934355-bb61-4316-b164-05294e12906a","Type":"ContainerStarted","Data":"4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376"} Oct 11 10:40:00.765193 master-0 kubenswrapper[4790]: I1011 10:40:00.765163 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" event={"ID":"1254ac82-5820-431e-baeb-3ae7d7997b38","Type":"ContainerStarted","Data":"ec5edfa238e67eb6dc656c72be1dbc1f0397d7a8d80950005bcb4a224f23c220"} Oct 11 10:40:00.765990 master-0 kubenswrapper[4790]: I1011 
10:40:00.765964 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-9rjdm" event={"ID":"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db","Type":"ContainerStarted","Data":"f61d3e1404aecae93207fe8056d63854d603a520f23437c58f022239f1436a77"} Oct 11 10:40:00.766950 master-0 kubenswrapper[4790]: I1011 10:40:00.766858 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" event={"ID":"a6689745-4f25-4776-9f5c-6bfd7abe62a8","Type":"ContainerStarted","Data":"af52309d15964e4023c841a9a9505b6c6b33b2ad039face745f16cc56a858b55"} Oct 11 10:40:00.797335 master-1 kubenswrapper[4771]: I1011 10:40:00.797280 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:40:00.890681 master-1 kubenswrapper[4771]: I1011 10:40:00.890581 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access\") pod \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " Oct 11 10:40:00.890681 master-1 kubenswrapper[4771]: I1011 10:40:00.890675 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir\") pod \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\" (UID: \"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05\") " Oct 11 10:40:00.891288 master-1 kubenswrapper[4771]: I1011 10:40:00.890870 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" (UID: "3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:40:00.907463 master-1 kubenswrapper[4771]: I1011 10:40:00.903815 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" (UID: "3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:40:00.992867 master-1 kubenswrapper[4771]: I1011 10:40:00.992675 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:40:00.993276 master-1 kubenswrapper[4771]: I1011 10:40:00.993239 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:40:01.050215 master-2 kubenswrapper[4776]: I1011 10:40:01.050138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerStarted","Data":"2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c"} Oct 11 10:40:01.050215 master-2 kubenswrapper[4776]: I1011 10:40:01.050193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerStarted","Data":"fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23"} Oct 11 10:40:01.080237 master-2 kubenswrapper[4776]: I1011 10:40:01.080151 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-2" podStartSLOduration=2.080130726 
podStartE2EDuration="2.080130726s" podCreationTimestamp="2025-10-11 10:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:01.0766124 +0000 UTC m=+835.861039109" watchObservedRunningTime="2025-10-11 10:40:01.080130726 +0000 UTC m=+835.864557435" Oct 11 10:40:01.166194 master-0 kubenswrapper[4790]: I1011 10:40:01.166111 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_revision-pruner-6-master-0_f810d826-e11a-4e68-8b42-f9cc96815f6e/pruner/0.log" Oct 11 10:40:01.166521 master-0 kubenswrapper[4790]: I1011 10:40:01.166277 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:40:01.200747 master-0 kubenswrapper[4790]: I1011 10:40:01.200638 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir\") pod \"f810d826-e11a-4e68-8b42-f9cc96815f6e\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " Oct 11 10:40:01.201076 master-0 kubenswrapper[4790]: I1011 10:40:01.200800 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access\") pod \"f810d826-e11a-4e68-8b42-f9cc96815f6e\" (UID: \"f810d826-e11a-4e68-8b42-f9cc96815f6e\") " Oct 11 10:40:01.201194 master-0 kubenswrapper[4790]: I1011 10:40:01.201006 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f810d826-e11a-4e68-8b42-f9cc96815f6e" (UID: "f810d826-e11a-4e68-8b42-f9cc96815f6e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:40:01.204587 master-0 kubenswrapper[4790]: I1011 10:40:01.204520 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f810d826-e11a-4e68-8b42-f9cc96815f6e" (UID: "f810d826-e11a-4e68-8b42-f9cc96815f6e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:40:01.303202 master-0 kubenswrapper[4790]: I1011 10:40:01.302087 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f810d826-e11a-4e68-8b42-f9cc96815f6e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:40:01.303202 master-0 kubenswrapper[4790]: I1011 10:40:01.302136 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f810d826-e11a-4e68-8b42-f9cc96815f6e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:40:01.342154 master-1 kubenswrapper[4771]: I1011 10:40:01.342094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" event={"ID":"3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05","Type":"ContainerDied","Data":"713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02"} Oct 11 10:40:01.342563 master-1 kubenswrapper[4771]: I1011 10:40:01.342534 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713d4071ed7a38a97107ad5f1629e72b0b8fd55934c7595a21204f0f11e35c02" Oct 11 10:40:01.342732 master-1 kubenswrapper[4771]: I1011 10:40:01.342229 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 11 10:40:01.464819 master-2 kubenswrapper[4776]: I1011 10:40:01.464739 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:01.464819 master-2 kubenswrapper[4776]: I1011 10:40:01.464801 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:01.775652 master-0 kubenswrapper[4790]: I1011 10:40:01.775533 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_revision-pruner-6-master-0_f810d826-e11a-4e68-8b42-f9cc96815f6e/pruner/0.log" Oct 11 10:40:01.776351 master-0 kubenswrapper[4790]: I1011 10:40:01.776305 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Oct 11 10:40:01.776401 master-0 kubenswrapper[4790]: I1011 10:40:01.775686 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"f810d826-e11a-4e68-8b42-f9cc96815f6e","Type":"ContainerDied","Data":"0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a"} Oct 11 10:40:01.776793 master-0 kubenswrapper[4790]: I1011 10:40:01.776764 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0037715e465467c850978e63ab18a53529c91d6f5e89e891c58217bd8c626b4a" Oct 11 10:40:02.068060 master-2 kubenswrapper[4776]: I1011 10:40:02.067767 4776 generic.go:334] "Generic (PLEG): container finished" podID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerID="2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c" exitCode=0 Oct 11 10:40:02.070480 master-2 kubenswrapper[4776]: I1011 10:40:02.070433 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerDied","Data":"2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c"} Oct 11 10:40:02.581021 master-1 kubenswrapper[4771]: I1011 10:40:02.580941 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:40:02.583656 master-1 kubenswrapper[4771]: I1011 10:40:02.583611 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:40:02.779865 master-0 
kubenswrapper[4790]: I1011 10:40:02.779794 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" event={"ID":"1254ac82-5820-431e-baeb-3ae7d7997b38","Type":"ContainerStarted","Data":"498a71585d2faeaf2e747295cf0d441a20474e8faeb7d0c5a986a626d52eb5b9"} Oct 11 10:40:02.780876 master-0 kubenswrapper[4790]: I1011 10:40:02.780846 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" Oct 11 10:40:02.782609 master-0 kubenswrapper[4790]: I1011 10:40:02.782580 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" event={"ID":"a6689745-4f25-4776-9f5c-6bfd7abe62a8","Type":"ContainerStarted","Data":"0156b9e81795c34b5869a2a45112f238b843886820aa5fe76bba5aad3dd2bdb4"} Oct 11 10:40:02.783077 master-0 kubenswrapper[4790]: I1011 10:40:02.783045 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" Oct 11 10:40:02.808958 master-0 kubenswrapper[4790]: I1011 10:40:02.808885 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4" podStartSLOduration=303.959566786 podStartE2EDuration="5m5.808865899s" podCreationTimestamp="2025-10-11 10:34:57 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.603156635 +0000 UTC m=+77.157616927" lastFinishedPulling="2025-10-11 10:40:02.452455748 +0000 UTC m=+79.006916040" observedRunningTime="2025-10-11 10:40:02.808732656 +0000 UTC m=+79.363192948" watchObservedRunningTime="2025-10-11 10:40:02.808865899 +0000 UTC m=+79.363326201" Oct 11 10:40:02.838314 master-0 kubenswrapper[4790]: I1011 10:40:02.838257 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5" podStartSLOduration=67.941684141 podStartE2EDuration="1m9.83824563s" 
podCreationTimestamp="2025-10-11 10:38:53 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.602386794 +0000 UTC m=+77.156847086" lastFinishedPulling="2025-10-11 10:40:02.498948283 +0000 UTC m=+79.053408575" observedRunningTime="2025-10-11 10:40:02.837487301 +0000 UTC m=+79.391947603" watchObservedRunningTime="2025-10-11 10:40:02.83824563 +0000 UTC m=+79.392705922"
Oct 11 10:40:03.117142 master-0 kubenswrapper[4790]: I1011 10:40:03.117075 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fccd5ccc-khqd5"
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: I1011 10:40:03.245798 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:03.245991 master-1 kubenswrapper[4771]: I1011 10:40:03.245922 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:03.369998 master-2 kubenswrapper[4776]: I1011 10:40:03.369931 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2"
Oct 11 10:40:03.521514 master-2 kubenswrapper[4776]: I1011 10:40:03.521466 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"4114c1be-d3d9-438f-b215-619b0aa3e114\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") "
Oct 11 10:40:03.521771 master-2 kubenswrapper[4776]: I1011 10:40:03.521530 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"4114c1be-d3d9-438f-b215-619b0aa3e114\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") "
Oct 11 10:40:03.521771 master-2 kubenswrapper[4776]: I1011 10:40:03.521740 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4114c1be-d3d9-438f-b215-619b0aa3e114" (UID: "4114c1be-d3d9-438f-b215-619b0aa3e114"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:03.521985 master-2 kubenswrapper[4776]: I1011 10:40:03.521965 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:03.524952 master-2 kubenswrapper[4776]: I1011 10:40:03.524923 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4114c1be-d3d9-438f-b215-619b0aa3e114" (UID: "4114c1be-d3d9-438f-b215-619b0aa3e114"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:40:03.623253 master-2 kubenswrapper[4776]: I1011 10:40:03.623118 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:03.971169 master-0 kubenswrapper[4790]: I1011 10:40:03.971104 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Oct 11 10:40:03.971870 master-0 kubenswrapper[4790]: E1011 10:40:03.971272 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f810d826-e11a-4e68-8b42-f9cc96815f6e" containerName="pruner"
Oct 11 10:40:03.971870 master-0 kubenswrapper[4790]: I1011 10:40:03.971288 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="f810d826-e11a-4e68-8b42-f9cc96815f6e" containerName="pruner"
Oct 11 10:40:03.971870 master-0 kubenswrapper[4790]: I1011 10:40:03.971361 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="f810d826-e11a-4e68-8b42-f9cc96815f6e" containerName="pruner"
Oct 11 10:40:03.972321 master-0 kubenswrapper[4790]: I1011 10:40:03.972296 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:03.974681 master-0 kubenswrapper[4790]: I1011 10:40:03.974636 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Oct 11 10:40:03.975130 master-0 kubenswrapper[4790]: I1011 10:40:03.975100 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Oct 11 10:40:03.975635 master-0 kubenswrapper[4790]: I1011 10:40:03.975613 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Oct 11 10:40:03.976308 master-0 kubenswrapper[4790]: I1011 10:40:03.976275 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Oct 11 10:40:03.976623 master-0 kubenswrapper[4790]: I1011 10:40:03.976383 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Oct 11 10:40:03.976623 master-0 kubenswrapper[4790]: I1011 10:40:03.976492 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Oct 11 10:40:03.976831 master-0 kubenswrapper[4790]: I1011 10:40:03.976804 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Oct 11 10:40:03.986990 master-0 kubenswrapper[4790]: I1011 10:40:03.986951 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Oct 11 10:40:03.995851 master-1 kubenswrapper[4771]: I1011 10:40:03.995736 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 11 10:40:03.996813 master-1 kubenswrapper[4771]: E1011 10:40:03.996560 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" containerName="pruner"
Oct 11 10:40:03.996813 master-1 kubenswrapper[4771]: I1011 10:40:03.996596 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" containerName="pruner"
Oct 11 10:40:03.997143 master-1 kubenswrapper[4771]: I1011 10:40:03.997094 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d28ccd7-4f37-4a24-a9ff-ef97ff08ae05" containerName="pruner"
Oct 11 10:40:04.005858 master-0 kubenswrapper[4790]: I1011 10:40:04.005813 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Oct 11 10:40:04.008077 master-1 kubenswrapper[4771]: I1011 10:40:04.007988 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.011635 master-1 kubenswrapper[4771]: I1011 10:40:04.011572 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Oct 11 10:40:04.012068 master-1 kubenswrapper[4771]: I1011 10:40:04.012042 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Oct 11 10:40:04.012316 master-1 kubenswrapper[4771]: I1011 10:40:04.012264 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Oct 11 10:40:04.013039 master-1 kubenswrapper[4771]: I1011 10:40:04.012939 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Oct 11 10:40:04.013039 master-1 kubenswrapper[4771]: I1011 10:40:04.012977 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Oct 11 10:40:04.013312 master-1 kubenswrapper[4771]: I1011 10:40:04.013071 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Oct 11 10:40:04.013312 master-1 kubenswrapper[4771]: I1011 10:40:04.013111 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Oct 11 10:40:04.024535 master-1 kubenswrapper[4771]: I1011 10:40:04.024308 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Oct 11 10:40:04.033788 master-1 kubenswrapper[4771]: I1011 10:40:04.033703 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 11 10:40:04.081240 master-2 kubenswrapper[4776]: I1011 10:40:04.081183 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerDied","Data":"fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23"}
Oct 11 10:40:04.081240 master-2 kubenswrapper[4776]: I1011 10:40:04.081242 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23"
Oct 11 10:40:04.081457 master-2 kubenswrapper[4776]: I1011 10:40:04.081300 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2"
Oct 11 10:40:04.130921 master-0 kubenswrapper[4790]: I1011 10:40:04.130825 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-web-config\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.130921 master-0 kubenswrapper[4790]: I1011 10:40:04.130912 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.130921 master-0 kubenswrapper[4790]: I1011 10:40:04.130937 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-config-out\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131276 master-0 kubenswrapper[4790]: I1011 10:40:04.130954 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131276 master-0 kubenswrapper[4790]: I1011 10:40:04.130974 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131276 master-0 kubenswrapper[4790]: I1011 10:40:04.131016 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131276 master-0 kubenswrapper[4790]: I1011 10:40:04.131203 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131276 master-0 kubenswrapper[4790]: I1011 10:40:04.131256 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131411 master-0 kubenswrapper[4790]: I1011 10:40:04.131284 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncg7c\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-kube-api-access-ncg7c\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131411 master-0 kubenswrapper[4790]: I1011 10:40:04.131322 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131411 master-0 kubenswrapper[4790]: I1011 10:40:04.131347 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.131411 master-0 kubenswrapper[4790]: I1011 10:40:04.131394 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-config-volume\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.140334 master-1 kubenswrapper[4771]: I1011 10:40:04.140235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8cmk\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-kube-api-access-b8cmk\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140334 master-1 kubenswrapper[4771]: I1011 10:40:04.140317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140424 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140484 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-volume\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-tls-assets\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140703 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-web-config\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.140778 master-1 kubenswrapper[4771]: I1011 10:40:04.140773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.141276 master-1 kubenswrapper[4771]: I1011 10:40:04.140848 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-out\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.141276 master-1 kubenswrapper[4771]: I1011 10:40:04.140884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.141276 master-1 kubenswrapper[4771]: I1011 10:40:04.140917 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.141276 master-1 kubenswrapper[4771]: I1011 10:40:04.141001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.231983 master-0 kubenswrapper[4790]: I1011 10:40:04.231849 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.231983 master-0 kubenswrapper[4790]: I1011 10:40:04.231915 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.231983 master-0 kubenswrapper[4790]: I1011 10:40:04.231950 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncg7c\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-kube-api-access-ncg7c\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.231983 master-0 kubenswrapper[4790]: I1011 10:40:04.231986 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232010 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232042 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-config-volume\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232074 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-web-config\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232108 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232135 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-config-out\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232158 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232183 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.232278 master-0 kubenswrapper[4790]: I1011 10:40:04.232219 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.233623 master-0 kubenswrapper[4790]: I1011 10:40:04.233117 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.233623 master-0 kubenswrapper[4790]: I1011 10:40:04.233160 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.233623 master-0 kubenswrapper[4790]: I1011 10:40:04.233561 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e6a069-f9e0-417c-9226-5ef929699b39-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.235587 master-0 kubenswrapper[4790]: I1011 10:40:04.235553 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.235953 master-0 kubenswrapper[4790]: I1011 10:40:04.235902 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-config-volume\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.236049 master-0 kubenswrapper[4790]: I1011 10:40:04.236005 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.236260 master-0 kubenswrapper[4790]: I1011 10:40:04.236229 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.236260 master-0 kubenswrapper[4790]: I1011 10:40:04.236239 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3e6a069-f9e0-417c-9226-5ef929699b39-config-out\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.236810 master-0 kubenswrapper[4790]: I1011 10:40:04.236650 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.237483 master-0 kubenswrapper[4790]: I1011 10:40:04.237451 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-web-config\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.242579 master-1 kubenswrapper[4771]: I1011 10:40:04.242516 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-out\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.242956 master-1 kubenswrapper[4771]: I1011 10:40:04.242925 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.243209 master-1 kubenswrapper[4771]: I1011 10:40:04.243115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.243456 master-1 kubenswrapper[4771]: I1011 10:40:04.243424 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.243676 master-1 kubenswrapper[4771]: I1011 10:40:04.243650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8cmk\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-kube-api-access-b8cmk\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.243902 master-0 kubenswrapper[4790]: I1011 10:40:04.243512 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e3e6a069-f9e0-417c-9226-5ef929699b39-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:04.243906 master-1 kubenswrapper[4771]: I1011 10:40:04.243879 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.244087 master-1 kubenswrapper[4771]: I1011 10:40:04.244029 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.244221 master-1 kubenswrapper[4771]: I1011 10:40:04.244070 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.244548 master-1 kubenswrapper[4771]: I1011 10:40:04.244505 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.244822 master-1 kubenswrapper[4771]: I1011 10:40:04.244784 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-volume\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.245069 master-1 kubenswrapper[4771]: I1011 10:40:04.245033 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-tls-assets\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.245315 master-1 kubenswrapper[4771]: I1011 10:40:04.245278 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-web-config\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.245667 master-1 kubenswrapper[4771]: I1011 10:40:04.244860 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.245811 master-1 kubenswrapper[4771]: I1011 10:40:04.245668 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.245811 master-1 kubenswrapper[4771]: I1011 10:40:04.245593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.250120 master-1 kubenswrapper[4771]: I1011 10:40:04.249984 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.250418 master-1 kubenswrapper[4771]: I1011 10:40:04.250315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.250418 master-1 kubenswrapper[4771]: I1011 10:40:04.250342 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-volume\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.250700 master-1 kubenswrapper[4771]: I1011 10:40:04.250650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-config-out\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.250941 master-1 kubenswrapper[4771]: I1011 10:40:04.250880 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-tls-assets\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:04.251052
master-1 kubenswrapper[4771]: I1011 10:40:04.250980 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-web-config\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:04.252163 master-1 kubenswrapper[4771]: I1011 10:40:04.252119 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:04.254384 master-0 kubenswrapper[4790]: I1011 10:40:04.254316 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncg7c\" (UniqueName: \"kubernetes.io/projected/e3e6a069-f9e0-417c-9226-5ef929699b39-kube-api-access-ncg7c\") pod \"alertmanager-main-0\" (UID: \"e3e6a069-f9e0-417c-9226-5ef929699b39\") " pod="openshift-monitoring/alertmanager-main-0" Oct 11 10:40:04.254412 master-1 kubenswrapper[4771]: I1011 10:40:04.254306 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:04.263421 master-1 kubenswrapper[4771]: I1011 10:40:04.263338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8cmk\" (UniqueName: \"kubernetes.io/projected/06b4c539-712c-4c8b-8b0f-ffbcbfd7811d-kube-api-access-b8cmk\") pod \"alertmanager-main-1\" (UID: \"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d\") " 
pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:04.285659 master-0 kubenswrapper[4790]: I1011 10:40:04.285569 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Oct 11 10:40:04.336002 master-1 kubenswrapper[4771]: I1011 10:40:04.335946 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:04.635757 master-0 kubenswrapper[4790]: I1011 10:40:04.635615 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:40:04.640990 master-0 kubenswrapper[4790]: I1011 10:40:04.640914 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5b695d5-a88c-4ff9-bc59-d13f61f237f6-metrics-certs\") pod \"network-metrics-daemon-zcc4t\" (UID: \"a5b695d5-a88c-4ff9-bc59-d13f61f237f6\") " pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:40:04.736516 master-0 kubenswrapper[4790]: I1011 10:40:04.736401 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: \"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:40:04.742833 master-0 kubenswrapper[4790]: I1011 10:40:04.742775 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xlkt\" (UniqueName: \"kubernetes.io/projected/0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7-kube-api-access-8xlkt\") pod \"network-check-target-bn2sv\" (UID: 
\"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7\") " pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:40:04.813205 master-0 kubenswrapper[4790]: I1011 10:40:04.813118 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:40:04.817630 master-1 kubenswrapper[4771]: I1011 10:40:04.817534 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-1"] Oct 11 10:40:04.820143 master-1 kubenswrapper[4771]: W1011 10:40:04.820076 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06b4c539_712c_4c8b_8b0f_ffbcbfd7811d.slice/crio-0e6f74ab535f56d067899045bb3191ead6cf63a23561a9074240cdae0614d5e1 WatchSource:0}: Error finding container 0e6f74ab535f56d067899045bb3191ead6cf63a23561a9074240cdae0614d5e1: Status 404 returned error can't find the container with id 0e6f74ab535f56d067899045bb3191ead6cf63a23561a9074240cdae0614d5e1 Oct 11 10:40:04.822236 master-0 kubenswrapper[4790]: I1011 10:40:04.822180 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcc4t" Oct 11 10:40:04.929194 master-0 kubenswrapper[4790]: I1011 10:40:04.929042 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w"] Oct 11 10:40:04.930171 master-0 kubenswrapper[4790]: I1011 10:40:04.930126 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:04.936920 master-0 kubenswrapper[4790]: I1011 10:40:04.936868 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Oct 11 10:40:04.937180 master-0 kubenswrapper[4790]: I1011 10:40:04.937157 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Oct 11 10:40:04.937325 master-0 kubenswrapper[4790]: I1011 10:40:04.937303 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Oct 11 10:40:04.937468 master-0 kubenswrapper[4790]: I1011 10:40:04.937445 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Oct 11 10:40:04.937596 master-0 kubenswrapper[4790]: I1011 10:40:04.937581 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-92o819hatg7mp" Oct 11 10:40:04.937765 master-0 kubenswrapper[4790]: I1011 10:40:04.937750 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Oct 11 10:40:04.950506 master-1 kubenswrapper[4771]: I1011 10:40:04.950454 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-v72dv"] Oct 11 10:40:04.952079 master-1 kubenswrapper[4771]: I1011 10:40:04.952054 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:04.952496 master-0 kubenswrapper[4790]: I1011 10:40:04.952438 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w"] Oct 11 10:40:04.955429 master-1 kubenswrapper[4771]: I1011 10:40:04.954903 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Oct 11 10:40:04.955429 master-1 kubenswrapper[4771]: I1011 10:40:04.954904 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-92o819hatg7mp" Oct 11 10:40:04.955429 master-1 kubenswrapper[4771]: I1011 10:40:04.954957 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Oct 11 10:40:04.955429 master-1 kubenswrapper[4771]: I1011 10:40:04.955262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Oct 11 10:40:04.958271 master-1 kubenswrapper[4771]: I1011 10:40:04.956473 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Oct 11 10:40:04.958271 master-1 kubenswrapper[4771]: I1011 10:40:04.956537 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Oct 11 10:40:04.968926 master-1 kubenswrapper[4771]: I1011 10:40:04.968865 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-v72dv"] Oct 11 10:40:05.040554 master-0 kubenswrapper[4790]: I1011 10:40:05.040489 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4txm\" (UniqueName: \"kubernetes.io/projected/06012e2a-b507-48ad-9740-2c3cb3af5bdf-kube-api-access-g4txm\") pod 
\"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041225 master-0 kubenswrapper[4790]: I1011 10:40:05.041201 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041367 master-0 kubenswrapper[4790]: I1011 10:40:05.041348 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041482 master-0 kubenswrapper[4790]: I1011 10:40:05.041463 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041589 master-0 kubenswrapper[4790]: I1011 10:40:05.041570 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-tls\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " 
pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041700 master-0 kubenswrapper[4790]: I1011 10:40:05.041683 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041825 master-0 kubenswrapper[4790]: I1011 10:40:05.041808 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06012e2a-b507-48ad-9740-2c3cb3af5bdf-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.041981 master-0 kubenswrapper[4790]: I1011 10:40:05.041943 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.056744 master-1 kubenswrapper[4771]: I1011 10:40:05.056675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.056744 master-1 
kubenswrapper[4771]: I1011 10:40:05.056734 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.056744 master-1 kubenswrapper[4771]: I1011 10:40:05.056772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.057565 master-1 kubenswrapper[4771]: I1011 10:40:05.056840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.057565 master-1 kubenswrapper[4771]: I1011 10:40:05.056900 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwfg\" (UniqueName: \"kubernetes.io/projected/2710b153-4085-41e5-8524-7cfb5d8c57f9-kube-api-access-rbwfg\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.057565 master-1 kubenswrapper[4771]: I1011 10:40:05.056941 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/2710b153-4085-41e5-8524-7cfb5d8c57f9-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.057565 master-1 kubenswrapper[4771]: I1011 10:40:05.056973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.057565 master-1 kubenswrapper[4771]: I1011 10:40:05.057005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.143529 master-0 kubenswrapper[4790]: I1011 10:40:05.143440 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143529 master-0 kubenswrapper[4790]: I1011 10:40:05.143540 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06012e2a-b507-48ad-9740-2c3cb3af5bdf-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-qxd8w\" 
(UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 kubenswrapper[4790]: I1011 10:40:05.143590 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 kubenswrapper[4790]: I1011 10:40:05.143634 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4txm\" (UniqueName: \"kubernetes.io/projected/06012e2a-b507-48ad-9740-2c3cb3af5bdf-kube-api-access-g4txm\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 kubenswrapper[4790]: I1011 10:40:05.143737 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 kubenswrapper[4790]: I1011 10:40:05.143784 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 
kubenswrapper[4790]: I1011 10:40:05.143820 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.143896 master-0 kubenswrapper[4790]: I1011 10:40:05.143890 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-tls\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.145911 master-0 kubenswrapper[4790]: I1011 10:40:05.145839 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06012e2a-b507-48ad-9740-2c3cb3af5bdf-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.147463 master-0 kubenswrapper[4790]: I1011 10:40:05.147409 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.148222 master-0 kubenswrapper[4790]: I1011 10:40:05.148165 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-tls\") pod 
\"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.148522 master-0 kubenswrapper[4790]: I1011 10:40:05.148470 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.149433 master-0 kubenswrapper[4790]: I1011 10:40:05.149361 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.150350 master-0 kubenswrapper[4790]: I1011 10:40:05.150296 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.150531 master-0 kubenswrapper[4790]: I1011 10:40:05.150482 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06012e2a-b507-48ad-9740-2c3cb3af5bdf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.159098 master-1 
kubenswrapper[4771]: I1011 10:40:05.158773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.159098 master-1 kubenswrapper[4771]: I1011 10:40:05.158873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.159098 master-1 kubenswrapper[4771]: I1011 10:40:05.158933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.159098 master-1 kubenswrapper[4771]: I1011 10:40:05.159015 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.159098 master-1 kubenswrapper[4771]: I1011 10:40:05.159064 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwfg\" (UniqueName: 
\"kubernetes.io/projected/2710b153-4085-41e5-8524-7cfb5d8c57f9-kube-api-access-rbwfg\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.159798 master-1 kubenswrapper[4771]: I1011 10:40:05.159412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2710b153-4085-41e5-8524-7cfb5d8c57f9-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.160141 master-1 kubenswrapper[4771]: I1011 10:40:05.160076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.160265 master-1 kubenswrapper[4771]: I1011 10:40:05.160149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.161208 master-1 kubenswrapper[4771]: I1011 10:40:05.161157 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2710b153-4085-41e5-8524-7cfb5d8c57f9-metrics-client-ca\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " 
pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.162808 master-1 kubenswrapper[4771]: I1011 10:40:05.162765 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.164002 master-1 kubenswrapper[4771]: I1011 10:40:05.163933 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.164160 master-1 kubenswrapper[4771]: I1011 10:40:05.164030 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-grpc-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.164738 master-1 kubenswrapper[4771]: I1011 10:40:05.164692 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.165831 master-1 kubenswrapper[4771]: I1011 10:40:05.165753 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.165951 master-1 kubenswrapper[4771]: I1011 10:40:05.165890 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2710b153-4085-41e5-8524-7cfb5d8c57f9-secret-thanos-querier-tls\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.185286 master-1 kubenswrapper[4771]: I1011 10:40:05.185226 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwfg\" (UniqueName: \"kubernetes.io/projected/2710b153-4085-41e5-8524-7cfb5d8c57f9-kube-api-access-rbwfg\") pod \"thanos-querier-7f646dd4d8-v72dv\" (UID: \"2710b153-4085-41e5-8524-7cfb5d8c57f9\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.189425 master-0 kubenswrapper[4790]: I1011 10:40:05.189286 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4txm\" (UniqueName: \"kubernetes.io/projected/06012e2a-b507-48ad-9740-2c3cb3af5bdf-kube-api-access-g4txm\") pod \"thanos-querier-7f646dd4d8-qxd8w\" (UID: \"06012e2a-b507-48ad-9740-2c3cb3af5bdf\") " pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.243978 master-0 kubenswrapper[4790]: I1011 10:40:05.243904 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:05.270163 master-1 kubenswrapper[4771]: I1011 10:40:05.270089 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:05.369465 master-1 kubenswrapper[4771]: I1011 10:40:05.369344 4771 generic.go:334] "Generic (PLEG): container finished" podID="06b4c539-712c-4c8b-8b0f-ffbcbfd7811d" containerID="92ecee04176eac17c1c00567be2b537580d0d6115b2687d6cc4cbef9013df695" exitCode=0 Oct 11 10:40:05.369465 master-1 kubenswrapper[4771]: I1011 10:40:05.369455 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerDied","Data":"92ecee04176eac17c1c00567be2b537580d0d6115b2687d6cc4cbef9013df695"} Oct 11 10:40:05.369784 master-1 kubenswrapper[4771]: I1011 10:40:05.369496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"0e6f74ab535f56d067899045bb3191ead6cf63a23561a9074240cdae0614d5e1"} Oct 11 10:40:05.739954 master-1 kubenswrapper[4771]: I1011 10:40:05.739885 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-v72dv"] Oct 11 10:40:05.747272 master-1 kubenswrapper[4771]: W1011 10:40:05.747177 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2710b153_4085_41e5_8524_7cfb5d8c57f9.slice/crio-882cb261ffa89fd3372a5b4bceea1e3aa752e4db3b58b2a961b8efdf3a9b7438 WatchSource:0}: Error finding container 882cb261ffa89fd3372a5b4bceea1e3aa752e4db3b58b2a961b8efdf3a9b7438: Status 404 returned error can't find the container with id 882cb261ffa89fd3372a5b4bceea1e3aa752e4db3b58b2a961b8efdf3a9b7438 Oct 11 10:40:06.379409 master-1 kubenswrapper[4771]: I1011 10:40:06.379298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" 
event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"882cb261ffa89fd3372a5b4bceea1e3aa752e4db3b58b2a961b8efdf3a9b7438"} Oct 11 10:40:06.464500 master-2 kubenswrapper[4776]: I1011 10:40:06.464414 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:06.465034 master-2 kubenswrapper[4776]: I1011 10:40:06.464533 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:06.568118 master-0 kubenswrapper[4790]: I1011 10:40:06.568022 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-bn2sv"] Oct 11 10:40:06.629448 master-0 kubenswrapper[4790]: I1011 10:40:06.629372 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zcc4t"] Oct 11 10:40:06.632513 master-0 kubenswrapper[4790]: I1011 10:40:06.632418 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Oct 11 10:40:06.639586 master-0 kubenswrapper[4790]: I1011 10:40:06.639517 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w"] Oct 11 10:40:06.801694 master-0 kubenswrapper[4790]: I1011 10:40:06.801576 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-9rjdm" 
event={"ID":"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db","Type":"ContainerStarted","Data":"199044355c2ca69bb6ac6dacb80814ae2bdf5a58183730d9a6202cd23bded675"} Oct 11 10:40:06.803298 master-0 kubenswrapper[4790]: I1011 10:40:06.803150 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"56f63f58d50b03c90a73f6e1f202479291f916e6fa3121d3502960f8735cf97e"} Oct 11 10:40:06.804559 master-0 kubenswrapper[4790]: I1011 10:40:06.804484 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zcc4t" event={"ID":"a5b695d5-a88c-4ff9-bc59-d13f61f237f6","Type":"ContainerStarted","Data":"edd65708c0094454d319cb7b3ab81c3dc1c5277d8b68ade357b86423ea48d1e0"} Oct 11 10:40:06.805793 master-0 kubenswrapper[4790]: I1011 10:40:06.805748 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-bn2sv" event={"ID":"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7","Type":"ContainerStarted","Data":"a06131f06cae792ba40a375398c0bcae5d23446765886bf2babe865973923793"} Oct 11 10:40:06.808269 master-0 kubenswrapper[4790]: I1011 10:40:06.808189 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-0" event={"ID":"a3934355-bb61-4316-b164-05294e12906a","Type":"ContainerStarted","Data":"7ad4b389f620d673e1c84d3f718fc34561da93dd445afd695e3bc1db0ae8b3cd"} Oct 11 10:40:06.810401 master-0 kubenswrapper[4790]: I1011 10:40:06.810316 4790 generic.go:334] "Generic (PLEG): container finished" podID="099ca022-6e9c-4604-b517-d90713dd6a44" containerID="035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c" exitCode=0 Oct 11 10:40:06.810527 master-0 kubenswrapper[4790]: I1011 10:40:06.810475 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" 
event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerDied","Data":"035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c"} Oct 11 10:40:06.812193 master-0 kubenswrapper[4790]: I1011 10:40:06.812138 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"04d89d4a333def01fbc6dde02ce158f9828f674ae99cc8cc97c64c4898850a3c"} Oct 11 10:40:06.814407 master-0 kubenswrapper[4790]: I1011 10:40:06.814349 4790 generic.go:334] "Generic (PLEG): container finished" podID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerID="ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331" exitCode=0 Oct 11 10:40:06.814407 master-0 kubenswrapper[4790]: I1011 10:40:06.814394 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" event={"ID":"76dd8647-4ad5-4874-b2d2-dee16aab637a","Type":"ContainerDied","Data":"ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331"} Oct 11 10:40:06.832721 master-0 kubenswrapper[4790]: I1011 10:40:06.832615 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76f8bc4746-9rjdm" podStartSLOduration=114.240444529 podStartE2EDuration="1m59.832592485s" podCreationTimestamp="2025-10-11 10:38:07 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.591089291 +0000 UTC m=+77.145549633" lastFinishedPulling="2025-10-11 10:40:06.183237287 +0000 UTC m=+82.737697589" observedRunningTime="2025-10-11 10:40:06.832535923 +0000 UTC m=+83.386996245" watchObservedRunningTime="2025-10-11 10:40:06.832592485 +0000 UTC m=+83.387052787" Oct 11 10:40:06.912865 master-0 kubenswrapper[4790]: I1011 10:40:06.912633 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-8-master-0" podStartSLOduration=17.454828649 podStartE2EDuration="22.91260557s" 
podCreationTimestamp="2025-10-11 10:39:44 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.755095495 +0000 UTC m=+77.309555797" lastFinishedPulling="2025-10-11 10:40:06.212872416 +0000 UTC m=+82.767332718" observedRunningTime="2025-10-11 10:40:06.885933487 +0000 UTC m=+83.440393789" watchObservedRunningTime="2025-10-11 10:40:06.91260557 +0000 UTC m=+83.467065862" Oct 11 10:40:07.388540 master-1 kubenswrapper[4771]: I1011 10:40:07.388461 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"a76242e52f4101d9f26e218e8953c585b3982dfbdd75ea5689374d1e469d07d0"} Oct 11 10:40:07.388540 master-1 kubenswrapper[4771]: I1011 10:40:07.388524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"6279c1fb3d44d787fe9e8e48fe2e0c0fc3d305e9cd737a3127a4af7da2544dc7"} Oct 11 10:40:07.825861 master-0 kubenswrapper[4790]: I1011 10:40:07.825802 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" event={"ID":"76dd8647-4ad5-4874-b2d2-dee16aab637a","Type":"ContainerStarted","Data":"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c"} Oct 11 10:40:07.830224 master-0 kubenswrapper[4790]: I1011 10:40:07.830177 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerStarted","Data":"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"} Oct 11 10:40:07.870540 master-0 kubenswrapper[4790]: I1011 10:40:07.870435 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podStartSLOduration=70.225138783 podStartE2EDuration="1m15.870376175s" 
podCreationTimestamp="2025-10-11 10:38:52 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.533253602 +0000 UTC m=+77.087713934" lastFinishedPulling="2025-10-11 10:40:06.178491014 +0000 UTC m=+82.732951326" observedRunningTime="2025-10-11 10:40:07.868697671 +0000 UTC m=+84.423157973" watchObservedRunningTime="2025-10-11 10:40:07.870376175 +0000 UTC m=+84.424836477" Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: I1011 10:40:08.242827 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 
10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:40:08.243436 master-1 
kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:08.243436 master-1 kubenswrapper[4771]: I1011 10:40:08.242912 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:08.367851 master-0 kubenswrapper[4790]: I1011 10:40:08.367773 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xznwp" Oct 11 10:40:08.386689 master-2 kubenswrapper[4776]: I1011 10:40:08.386612 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:08.387172 master-2 kubenswrapper[4776]: I1011 10:40:08.386716 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:08.387172 master-2 kubenswrapper[4776]: I1011 10:40:08.386757 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:08.387400 master-2 kubenswrapper[4776]: I1011 10:40:08.387379 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"} pod="openshift-kube-controller-manager/kube-controller-manager-master-2" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Oct 11 10:40:08.387516 master-2 kubenswrapper[4776]: I1011 10:40:08.387493 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1" gracePeriod=30 Oct 11 10:40:08.398414 master-1 kubenswrapper[4771]: I1011 10:40:08.398300 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"acc2f488bf57141e6e0a36e89d0adfa78aa91304ff3f85a49628352123bd1320"} Oct 11 10:40:08.409012 master-1 kubenswrapper[4771]: I1011 10:40:08.408713 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"51ef77f2205ad388a0044218e2ad9456e242158c3929da3f14b908d1d952ac3d"} Oct 11 10:40:08.409012 master-1 kubenswrapper[4771]: I1011 
10:40:08.408755 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"98498be567e03aba549687f3377ab610e9ad0ef0a1ff7cb07f31ae2f42c94fa6"} Oct 11 10:40:08.409012 master-1 kubenswrapper[4771]: I1011 10:40:08.408770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"98daeb1d18320b2496b449e3dfeafab3b6fc9485441113346f32cdb7c9195430"} Oct 11 10:40:09.350704 master-0 kubenswrapper[4790]: I1011 10:40:09.350629 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Oct 11 10:40:09.351737 master-0 kubenswrapper[4790]: I1011 10:40:09.351692 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.354450 master-0 kubenswrapper[4790]: I1011 10:40:09.354133 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Oct 11 10:40:09.354450 master-0 kubenswrapper[4790]: I1011 10:40:09.354367 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.355100 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.355108 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.355272 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: 
I1011 10:40:09.355568 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.355765 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.355858 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6sqva262urci3" Oct 11 10:40:09.356731 master-0 kubenswrapper[4790]: I1011 10:40:09.356636 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Oct 11 10:40:09.357162 master-0 kubenswrapper[4790]: I1011 10:40:09.357049 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Oct 11 10:40:09.361148 master-0 kubenswrapper[4790]: I1011 10:40:09.360949 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Oct 11 10:40:09.362760 master-0 kubenswrapper[4790]: I1011 10:40:09.362725 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Oct 11 10:40:09.369009 master-1 kubenswrapper[4771]: I1011 10:40:09.368406 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 11 10:40:09.370243 master-1 kubenswrapper[4771]: I1011 10:40:09.370186 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.375398 master-1 kubenswrapper[4771]: I1011 10:40:09.375344 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Oct 11 10:40:09.375752 master-1 kubenswrapper[4771]: I1011 10:40:09.375700 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Oct 11 10:40:09.375876 master-1 kubenswrapper[4771]: I1011 10:40:09.375852 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6sqva262urci3" Oct 11 10:40:09.376051 master-1 kubenswrapper[4771]: I1011 10:40:09.376003 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Oct 11 10:40:09.376173 master-1 kubenswrapper[4771]: I1011 10:40:09.376119 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Oct 11 10:40:09.376736 master-1 kubenswrapper[4771]: I1011 10:40:09.376681 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Oct 11 10:40:09.377779 master-1 kubenswrapper[4771]: I1011 10:40:09.377704 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Oct 11 10:40:09.378556 master-1 kubenswrapper[4771]: I1011 10:40:09.378492 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Oct 11 10:40:09.378713 master-1 kubenswrapper[4771]: I1011 10:40:09.378618 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Oct 11 10:40:09.378845 master-1 kubenswrapper[4771]: I1011 10:40:09.378797 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Oct 11 10:40:09.382591 master-0 kubenswrapper[4790]: I1011 10:40:09.382415 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Oct 11 10:40:09.393322 master-1 kubenswrapper[4771]: I1011 10:40:09.393213 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Oct 11 10:40:09.394397 master-1 kubenswrapper[4771]: I1011 10:40:09.394296 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Oct 11 10:40:09.398033 master-1 kubenswrapper[4771]: I1011 10:40:09.397965 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 11 10:40:09.416993 master-1 kubenswrapper[4771]: I1011 10:40:09.416934 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"7ca2ecc0731a901c2fd6474f80cbed9283971035e0563cec27dc08282e199482"} Oct 11 10:40:09.416993 master-1 kubenswrapper[4771]: I1011 10:40:09.416982 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"7b3e36a7069b634ff3ca91c12f48e463cacec291be3dce74ba2bbd96106de635"} Oct 11 10:40:09.420500 master-1 kubenswrapper[4771]: I1011 10:40:09.420444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420584 master-1 kubenswrapper[4771]: I1011 
10:40:09.420512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bj9h\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-kube-api-access-4bj9h\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420584 master-1 kubenswrapper[4771]: I1011 10:40:09.420548 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-web-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420584 master-1 kubenswrapper[4771]: I1011 10:40:09.420578 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420609 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420665 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: 
\"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420727 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420751 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.420848 master-1 kubenswrapper[4771]: I1011 10:40:09.420778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.421542 master-1 kubenswrapper[4771]: I1011 10:40:09.421490 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-config-out\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.421754 master-1 kubenswrapper[4771]: I1011 10:40:09.421726 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.421997 master-1 kubenswrapper[4771]: I1011 10:40:09.421971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.422176 master-1 kubenswrapper[4771]: I1011 10:40:09.422150 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.422378 master-1 kubenswrapper[4771]: I1011 10:40:09.422329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.422596 master-1 kubenswrapper[4771]: I1011 10:40:09.422564 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.422776 master-1 kubenswrapper[4771]: I1011 10:40:09.422750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.422957 master-1 kubenswrapper[4771]: I1011 10:40:09.422930 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.433617 master-0 kubenswrapper[4790]: I1011 10:40:09.433565 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.433992 master-0 kubenswrapper[4790]: I1011 10:40:09.433978 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " 
pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434150 master-0 kubenswrapper[4790]: I1011 10:40:09.434125 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434270 master-0 kubenswrapper[4790]: I1011 10:40:09.434250 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434370 master-0 kubenswrapper[4790]: I1011 10:40:09.434356 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434504 master-0 kubenswrapper[4790]: I1011 10:40:09.434488 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434639 master-0 kubenswrapper[4790]: I1011 10:40:09.434626 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434788 master-0 kubenswrapper[4790]: I1011 10:40:09.434775 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.434916 master-0 kubenswrapper[4790]: I1011 10:40:09.434896 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435030 master-0 kubenswrapper[4790]: I1011 10:40:09.435018 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-config-out\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435154 master-0 kubenswrapper[4790]: I1011 10:40:09.435140 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435272 master-0 kubenswrapper[4790]: I1011 10:40:09.435260 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrlh\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-kube-api-access-tzrlh\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435362 master-0 kubenswrapper[4790]: I1011 10:40:09.435350 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-web-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435432 master-0 kubenswrapper[4790]: I1011 10:40:09.435419 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435530 master-0 kubenswrapper[4790]: I1011 10:40:09.435517 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435652 master-0 kubenswrapper[4790]: I1011 10:40:09.435639 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") 
" pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435791 master-0 kubenswrapper[4790]: I1011 10:40:09.435777 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.435972 master-0 kubenswrapper[4790]: I1011 10:40:09.435955 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.524820 master-1 kubenswrapper[4771]: I1011 10:40:09.524750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.524836 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.524882 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " 
pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.524948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.524965 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.524988 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.525031 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525060 master-1 kubenswrapper[4771]: I1011 10:40:09.525049 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bj9h\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-kube-api-access-4bj9h\") pod \"prometheus-k8s-1\" (UID: 
\"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525077 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-web-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525117 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525275 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525378 master-1 kubenswrapper[4771]: I1011 10:40:09.525344 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525775 master-1 kubenswrapper[4771]: I1011 10:40:09.525432 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525775 master-1 kubenswrapper[4771]: I1011 10:40:09.525467 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525775 master-1 kubenswrapper[4771]: I1011 10:40:09.525489 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-config-out\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.525775 master-1 kubenswrapper[4771]: I1011 10:40:09.525505 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.527158 master-1 kubenswrapper[4771]: I1011 10:40:09.527077 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.527907 master-1 kubenswrapper[4771]: I1011 10:40:09.527878 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.528085 master-1 kubenswrapper[4771]: I1011 10:40:09.528038 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.528147 master-1 kubenswrapper[4771]: I1011 10:40:09.528087 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.529738 master-1 kubenswrapper[4771]: I1011 10:40:09.529671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e41e7753-f8a4-4f83-8061-b1610912b8e5-config-out\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.530030 master-1 kubenswrapper[4771]: I1011 10:40:09.529981 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.530244 master-1 kubenswrapper[4771]: I1011 10:40:09.530188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.530658 master-1 kubenswrapper[4771]: I1011 10:40:09.530633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.531303 master-1 kubenswrapper[4771]: I1011 10:40:09.531268 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.531303 master-1 kubenswrapper[4771]: I1011 10:40:09.531275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.531777 master-1 kubenswrapper[4771]: I1011 10:40:09.531735 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.531887 master-1 kubenswrapper[4771]: I1011 10:40:09.531834 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.532091 master-1 kubenswrapper[4771]: I1011 10:40:09.532054 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.532683 master-1 kubenswrapper[4771]: I1011 10:40:09.532640 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.533918 master-1 kubenswrapper[4771]: I1011 10:40:09.533877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e41e7753-f8a4-4f83-8061-b1610912b8e5-web-config\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.537041 master-1 kubenswrapper[4771]: I1011 10:40:09.536998 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e41e7753-f8a4-4f83-8061-b1610912b8e5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536771 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536866 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 
10:40:09.536897 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536923 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536949 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536975 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.536997 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 
kubenswrapper[4790]: I1011 10:40:09.537018 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537042 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537069 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537095 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537117 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " 
pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537138 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537172 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537210 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-config-out\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537233 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.539919 master-0 kubenswrapper[4790]: I1011 10:40:09.537262 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrlh\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-kube-api-access-tzrlh\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") 
" pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541129 master-0 kubenswrapper[4790]: I1011 10:40:09.537285 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-web-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541129 master-0 kubenswrapper[4790]: I1011 10:40:09.538698 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541129 master-0 kubenswrapper[4790]: I1011 10:40:09.539024 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541218 master-0 kubenswrapper[4790]: I1011 10:40:09.541143 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-web-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541981 master-0 kubenswrapper[4790]: I1011 10:40:09.541848 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " 
pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.541981 master-0 kubenswrapper[4790]: I1011 10:40:09.541848 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.542460 master-0 kubenswrapper[4790]: I1011 10:40:09.542420 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.542726 master-0 kubenswrapper[4790]: I1011 10:40:09.542637 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/27006098-2092-43c6-97f8-0219e7fc4b81-config-out\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.542851 master-0 kubenswrapper[4790]: I1011 10:40:09.542754 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.542851 master-0 kubenswrapper[4790]: I1011 10:40:09.542771 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-config\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " 
pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.543299 master-0 kubenswrapper[4790]: I1011 10:40:09.543272 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.544274 master-0 kubenswrapper[4790]: I1011 10:40:09.544226 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.544274 master-0 kubenswrapper[4790]: I1011 10:40:09.544256 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.544920 master-0 kubenswrapper[4790]: I1011 10:40:09.544858 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.545434 master-1 kubenswrapper[4771]: I1011 10:40:09.545348 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bj9h\" (UniqueName: \"kubernetes.io/projected/e41e7753-f8a4-4f83-8061-b1610912b8e5-kube-api-access-4bj9h\") pod \"prometheus-k8s-1\" 
(UID: \"e41e7753-f8a4-4f83-8061-b1610912b8e5\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.546018 master-0 kubenswrapper[4790]: I1011 10:40:09.545964 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/27006098-2092-43c6-97f8-0219e7fc4b81-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.546274 master-0 kubenswrapper[4790]: I1011 10:40:09.546196 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.547061 master-0 kubenswrapper[4790]: I1011 10:40:09.547009 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.547812 master-0 kubenswrapper[4790]: I1011 10:40:09.547778 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/27006098-2092-43c6-97f8-0219e7fc4b81-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.569959 master-0 kubenswrapper[4790]: I1011 10:40:09.569834 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrlh\" (UniqueName: \"kubernetes.io/projected/27006098-2092-43c6-97f8-0219e7fc4b81-kube-api-access-tzrlh\") pod \"prometheus-k8s-0\" (UID: \"27006098-2092-43c6-97f8-0219e7fc4b81\") 
" pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.669145 master-0 kubenswrapper[4790]: I1011 10:40:09.669029 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Oct 11 10:40:09.689340 master-1 kubenswrapper[4771]: I1011 10:40:09.688553 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1" Oct 11 10:40:09.840882 master-0 kubenswrapper[4790]: I1011 10:40:09.840390 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"c588ca030013e7f6856c7ec7d0452ca808693f0797889a7dbc2fa05ba6679004"} Oct 11 10:40:09.842096 master-0 kubenswrapper[4790]: I1011 10:40:09.842005 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zcc4t" event={"ID":"a5b695d5-a88c-4ff9-bc59-d13f61f237f6","Type":"ContainerStarted","Data":"a046b19339ceccf7c0076522b83d91b31e1d5940f23ca53ec4f5911c51f8f1ea"} Oct 11 10:40:09.844888 master-0 kubenswrapper[4790]: I1011 10:40:09.844213 4790 generic.go:334] "Generic (PLEG): container finished" podID="e3e6a069-f9e0-417c-9226-5ef929699b39" containerID="029043df943b5138a6cfb7a349576a0a53a8aa434f25983d2a5e8a39c878b25b" exitCode=0 Oct 11 10:40:09.844888 master-0 kubenswrapper[4790]: I1011 10:40:09.844239 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerDied","Data":"029043df943b5138a6cfb7a349576a0a53a8aa434f25983d2a5e8a39c878b25b"} Oct 11 10:40:10.024003 master-2 kubenswrapper[4776]: I1011 10:40:10.023929 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: 
connect: connection refused" start-of-body= Oct 11 10:40:10.025037 master-2 kubenswrapper[4776]: I1011 10:40:10.024016 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:10.043462 master-0 kubenswrapper[4790]: I1011 10:40:10.043369 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:10.043958 master-0 kubenswrapper[4790]: I1011 10:40:10.043914 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:10.054281 master-0 kubenswrapper[4790]: I1011 10:40:10.054204 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:10.076835 master-0 kubenswrapper[4790]: I1011 10:40:10.076764 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:40:10.076835 master-0 kubenswrapper[4790]: I1011 10:40:10.076832 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76f8bc4746-9rjdm" Oct 11 10:40:10.079178 master-0 kubenswrapper[4790]: I1011 10:40:10.079105 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body= Oct 11 10:40:10.079247 master-0 kubenswrapper[4790]: I1011 10:40:10.079208 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" 
containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" Oct 11 10:40:10.170754 master-1 kubenswrapper[4771]: I1011 10:40:10.170689 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 11 10:40:10.171245 master-1 kubenswrapper[4771]: W1011 10:40:10.171198 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41e7753_f8a4_4f83_8061_b1610912b8e5.slice/crio-759b46a90e208c9d320ae5625ed991e26011bee2dd7849e5e8f3bffabdf513d2 WatchSource:0}: Error finding container 759b46a90e208c9d320ae5625ed991e26011bee2dd7849e5e8f3bffabdf513d2: Status 404 returned error can't find the container with id 759b46a90e208c9d320ae5625ed991e26011bee2dd7849e5e8f3bffabdf513d2 Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: I1011 10:40:10.265204 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: I1011 10:40:10.265302 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:10.265951 master-2 kubenswrapper[4776]: I1011 10:40:10.265388 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:40:10.431164 master-1 kubenswrapper[4771]: I1011 10:40:10.430929 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"06b4c539-712c-4c8b-8b0f-ffbcbfd7811d","Type":"ContainerStarted","Data":"2f30d6729b341dbdb9c05610aed3749be420a9b4f8bca413b81cd42c7a3fd58f"} Oct 11 10:40:10.434276 master-1 kubenswrapper[4771]: I1011 10:40:10.434215 4771 generic.go:334] "Generic (PLEG): container finished" podID="e41e7753-f8a4-4f83-8061-b1610912b8e5" containerID="8b9d35a5b980e617e2c3f966647e6a0a2866b6a35dbf6e74aaff90d59eccb2d5" exitCode=0 Oct 11 10:40:10.434443 master-1 kubenswrapper[4771]: I1011 10:40:10.434284 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerDied","Data":"8b9d35a5b980e617e2c3f966647e6a0a2866b6a35dbf6e74aaff90d59eccb2d5"} Oct 11 10:40:10.434443 master-1 kubenswrapper[4771]: I1011 10:40:10.434399 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"759b46a90e208c9d320ae5625ed991e26011bee2dd7849e5e8f3bffabdf513d2"} Oct 11 10:40:10.449662 master-1 kubenswrapper[4771]: I1011 10:40:10.449579 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"b6c95c59cc9b33b208e816806002297456cf1628f1011955a6a7777e48e09342"} Oct 
11 10:40:10.449662 master-1 kubenswrapper[4771]: I1011 10:40:10.449662 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" Oct 11 10:40:10.449885 master-1 kubenswrapper[4771]: I1011 10:40:10.449684 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"73b24c57422a3f7f43d64b8eb582bbcc3d940da1dcdc6d9d55cdff5cbcf0c8d4"} Oct 11 10:40:10.449885 master-1 kubenswrapper[4771]: I1011 10:40:10.449705 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" event={"ID":"2710b153-4085-41e5-8524-7cfb5d8c57f9","Type":"ContainerStarted","Data":"963b466d2923c869692640d42b7341628728b839587f191ba41778719ee555ef"} Oct 11 10:40:10.467838 master-1 kubenswrapper[4771]: I1011 10:40:10.467512 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-1" podStartSLOduration=3.423964435 podStartE2EDuration="7.467501409s" podCreationTimestamp="2025-10-11 10:40:03 +0000 UTC" firstStartedPulling="2025-10-11 10:40:05.37133028 +0000 UTC m=+837.345556751" lastFinishedPulling="2025-10-11 10:40:09.414867274 +0000 UTC m=+841.389093725" observedRunningTime="2025-10-11 10:40:10.462666489 +0000 UTC m=+842.436892990" watchObservedRunningTime="2025-10-11 10:40:10.467501409 +0000 UTC m=+842.441727850" Oct 11 10:40:10.502410 master-1 kubenswrapper[4771]: I1011 10:40:10.502254 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv" podStartSLOduration=2.834592451 podStartE2EDuration="6.50222628s" podCreationTimestamp="2025-10-11 10:40:04 +0000 UTC" firstStartedPulling="2025-10-11 10:40:05.751624962 +0000 UTC m=+837.725851443" lastFinishedPulling="2025-10-11 10:40:09.419258821 +0000 UTC m=+841.393485272" 
observedRunningTime="2025-10-11 10:40:10.489988384 +0000 UTC m=+842.464214865" watchObservedRunningTime="2025-10-11 10:40:10.50222628 +0000 UTC m=+842.476452761" Oct 11 10:40:10.891231 master-0 kubenswrapper[4790]: I1011 10:40:10.891189 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:40:10.995300 master-1 kubenswrapper[4771]: I1011 10:40:10.995191 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"] Oct 11 10:40:10.995709 master-1 kubenswrapper[4771]: I1011 10:40:10.995618 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" containerID="cri-o://db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217" gracePeriod=120 Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: I1011 10:40:11.144408 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:11.144475 
master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:11.144475 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:11.145481 master-1 kubenswrapper[4771]: I1011 10:40:11.144537 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:11.203901 master-0 kubenswrapper[4790]: I1011 10:40:11.203828 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Oct 11 10:40:11.464899 master-2 kubenswrapper[4776]: I1011 10:40:11.464820 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:11.465385 master-2 kubenswrapper[4776]: I1011 10:40:11.464934 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:11.887936 master-0 kubenswrapper[4790]: I1011 10:40:11.887874 4790 generic.go:334] "Generic (PLEG): container finished" podID="27006098-2092-43c6-97f8-0219e7fc4b81" 
containerID="f0ccf70a86f9257536be0c3f44809501e528a5a2cbeeb567e1c0b2ce3980447a" exitCode=0 Oct 11 10:40:11.887936 master-0 kubenswrapper[4790]: I1011 10:40:11.887940 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerDied","Data":"f0ccf70a86f9257536be0c3f44809501e528a5a2cbeeb567e1c0b2ce3980447a"} Oct 11 10:40:11.888229 master-0 kubenswrapper[4790]: I1011 10:40:11.887970 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"b4da09b1d91add0f61702efa94cfd5fadc0ea9628343d2e5c413b60af2167a29"} Oct 11 10:40:11.892492 master-0 kubenswrapper[4790]: I1011 10:40:11.892412 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"9a560a07bad2152cbf18de2d70b546b10a894a4c9eba4a43930ff340c8389571"} Oct 11 10:40:11.892895 master-0 kubenswrapper[4790]: I1011 10:40:11.892512 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"425a84423cf1ac9f26b84267e7f04d2060fcc86e77e7043fb9e75fb0bad2db16"} Oct 11 10:40:11.897275 master-0 kubenswrapper[4790]: I1011 10:40:11.896548 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zcc4t" event={"ID":"a5b695d5-a88c-4ff9-bc59-d13f61f237f6","Type":"ContainerStarted","Data":"b8b375c1d5ba37d0796c0f31dd781a4598802f156f66ac92adb0145d3738abca"} Oct 11 10:40:11.899268 master-0 kubenswrapper[4790]: I1011 10:40:11.899249 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-bn2sv" 
event={"ID":"0c2fae04-4b80-4621-ade0-9cd8fbdd0cd7","Type":"ContainerStarted","Data":"27d5862903c983bff49bd00da7e1422b50957d200b1478fe0e1e327797d15147"} Oct 11 10:40:11.899494 master-0 kubenswrapper[4790]: I1011 10:40:11.899448 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:40:11.902593 master-0 kubenswrapper[4790]: I1011 10:40:11.902535 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerStarted","Data":"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"} Oct 11 10:40:11.978033 master-0 kubenswrapper[4790]: I1011 10:40:11.977950 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podStartSLOduration=138.783949883 podStartE2EDuration="2m28.977928445s" podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="2025-10-11 10:40:00.582969411 +0000 UTC m=+77.137429703" lastFinishedPulling="2025-10-11 10:40:10.776947973 +0000 UTC m=+87.331408265" observedRunningTime="2025-10-11 10:40:11.97694902 +0000 UTC m=+88.531409312" watchObservedRunningTime="2025-10-11 10:40:11.977928445 +0000 UTC m=+88.532388737" Oct 11 10:40:12.001940 master-0 kubenswrapper[4790]: I1011 10:40:12.001830 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zcc4t" podStartSLOduration=79.636162751 podStartE2EDuration="1m22.001800073s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:40:06.653339567 +0000 UTC m=+83.207799859" lastFinishedPulling="2025-10-11 10:40:09.018976869 +0000 UTC m=+85.573437181" observedRunningTime="2025-10-11 10:40:11.99741772 +0000 UTC m=+88.551878012" watchObservedRunningTime="2025-10-11 10:40:12.001800073 +0000 UTC m=+88.556260375" Oct 11 10:40:12.022846 
master-0 kubenswrapper[4790]: I1011 10:40:12.022471 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-bn2sv" podStartSLOduration=77.91843361 podStartE2EDuration="1m22.022444379s" podCreationTimestamp="2025-10-11 10:38:50 +0000 UTC" firstStartedPulling="2025-10-11 10:40:06.654858416 +0000 UTC m=+83.209318708" lastFinishedPulling="2025-10-11 10:40:10.758869185 +0000 UTC m=+87.313329477" observedRunningTime="2025-10-11 10:40:12.020693443 +0000 UTC m=+88.575153775" watchObservedRunningTime="2025-10-11 10:40:12.022444379 +0000 UTC m=+88.576904671" Oct 11 10:40:12.580558 master-1 kubenswrapper[4771]: I1011 10:40:12.580448 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:40:12.581284 master-1 kubenswrapper[4771]: I1011 10:40:12.580608 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914698 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"9cc67f3249e9ba1a87ad5f1fb5b5b937ef4a43c6916b2e1553d0f6d219913c74"} Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914790 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"1de1ae70cc5d6da8d5e33fb6fe9e8a4293266c4ac10f67debb82207ace9bc36c"} 
Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914835 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"ae83c580c95e3b1f241ed240fae33cfe4a4481b7d9da0de952982e2241ebb676"} Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914852 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"038e39a4b41e0edad0ccbd1de82e8d4c932dee16fdbd097c895c31138e8493fb"} Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914862 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"2011a0acce1f7e66c917cb1c7085c139facae2f3a7c84858a4429def26871e32"} Oct 11 10:40:12.915449 master-0 kubenswrapper[4790]: I1011 10:40:12.914899 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e3e6a069-f9e0-417c-9226-5ef929699b39","Type":"ContainerStarted","Data":"0001b56c3c2a8034aa4f57fc91cf32f256999750df54773f67e906cba6c30794"} Oct 11 10:40:12.918125 master-0 kubenswrapper[4790]: I1011 10:40:12.918042 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"95cdac044629725c44de229a5f3d6aa6b550a67be81219fb460ea5981a0d112a"} Oct 11 10:40:12.918125 master-0 kubenswrapper[4790]: I1011 10:40:12.918082 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"9fe8c2d9ab3ed0a51fdc0047414b82caa3d12fa18737aa831769005add687b8b"} 
Oct 11 10:40:12.918125 master-0 kubenswrapper[4790]: I1011 10:40:12.918095 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" event={"ID":"06012e2a-b507-48ad-9740-2c3cb3af5bdf","Type":"ContainerStarted","Data":"b21c9286cb6b0d3c9e3d6135adf5fafc9b8e6c733dbeb8e2935ebf6d3ce36a8f"} Oct 11 10:40:12.918881 master-0 kubenswrapper[4790]: I1011 10:40:12.918793 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" Oct 11 10:40:12.964194 master-0 kubenswrapper[4790]: I1011 10:40:12.964108 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.683859549 podStartE2EDuration="9.964077306s" podCreationTimestamp="2025-10-11 10:40:03 +0000 UTC" firstStartedPulling="2025-10-11 10:40:06.654625701 +0000 UTC m=+83.209085993" lastFinishedPulling="2025-10-11 10:40:11.934843458 +0000 UTC m=+88.489303750" observedRunningTime="2025-10-11 10:40:12.962289699 +0000 UTC m=+89.516749991" watchObservedRunningTime="2025-10-11 10:40:12.964077306 +0000 UTC m=+89.518537608" Oct 11 10:40:13.001738 master-0 kubenswrapper[4790]: I1011 10:40:13.001589 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w" podStartSLOduration=3.1955185249999998 podStartE2EDuration="9.001556347s" podCreationTimestamp="2025-10-11 10:40:04 +0000 UTC" firstStartedPulling="2025-10-11 10:40:06.65732196 +0000 UTC m=+83.211782282" lastFinishedPulling="2025-10-11 10:40:12.463359782 +0000 UTC m=+89.017820104" observedRunningTime="2025-10-11 10:40:12.999156235 +0000 UTC m=+89.553616567" watchObservedRunningTime="2025-10-11 10:40:13.001556347 +0000 UTC m=+89.556016679" Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: I1011 10:40:13.242182 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:40:13.242263 
master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:13.242263 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:13.243893 master-1 kubenswrapper[4771]: I1011 10:40:13.242277 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:14.285906 master-0 kubenswrapper[4790]: I1011 10:40:14.285820 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-0"
Oct 11 10:40:14.339380 master-1 kubenswrapper[4771]: I1011 10:40:14.336429 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-1"
Oct 11 10:40:14.467105 master-1 kubenswrapper[4771]: I1011 10:40:14.467025 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"94398d93e05baa7ebaf4c98738bf8458b8c61d13c1518d588a7b6e6ba51ca470"}
Oct 11 10:40:14.467105 master-1 kubenswrapper[4771]: I1011 10:40:14.467088 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"cfb4a3a7730430abd44292ac93b9633e1d847284a2ef917109d0511f065fd804"}
Oct 11 10:40:14.467380 master-1 kubenswrapper[4771]: I1011 10:40:14.467121 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"759fc712555322c0af22165b0ad340ddaafbf49dac9c9493241bfd8f0d65744f"}
Oct 11 10:40:15.059007 master-0 kubenswrapper[4790]: I1011 10:40:15.058936 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5"
Oct 11 10:40:15.059259 master-0 kubenswrapper[4790]: I1011 10:40:15.059125 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5"
Oct 11 10:40:15.070899 master-0 kubenswrapper[4790]: I1011 10:40:15.070841 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5"
Oct 11 10:40:15.084052 master-2 kubenswrapper[4776]: I1011 10:40:15.083969 4776 scope.go:117] "RemoveContainer" containerID="3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107"
Oct 11 10:40:15.097491 master-2 kubenswrapper[4776]: I1011 10:40:15.097448 4776 scope.go:117] "RemoveContainer" containerID="0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd"
Oct 11 10:40:15.120203 master-2 kubenswrapper[4776]: I1011 10:40:15.120170 4776 scope.go:117] "RemoveContainer" containerID="10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910"
Oct 11 10:40:15.158586 master-2 kubenswrapper[4776]: I1011 10:40:15.158461 4776 scope.go:117] "RemoveContainer" containerID="65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01"
Oct 11 10:40:15.183699 master-2 kubenswrapper[4776]: I1011 10:40:15.183625 4776 scope.go:117] "RemoveContainer" containerID="c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f"
Oct 11 10:40:15.226725 master-2 kubenswrapper[4776]: I1011 10:40:15.226660 4776 scope.go:117] "RemoveContainer" containerID="b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c"
Oct 11 10:40:15.247835 master-2 kubenswrapper[4776]: I1011 10:40:15.247695 4776 scope.go:117] "RemoveContainer" containerID="1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece"
Oct 11 10:40:15.257740 master-0 kubenswrapper[4790]: I1011 10:40:15.257264 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w"
Oct 11 10:40:15.264279 master-2 kubenswrapper[4776]: I1011 10:40:15.264217 4776 scope.go:117] "RemoveContainer" containerID="f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962"
Oct 11 10:40:15.282014 master-1 kubenswrapper[4771]: I1011 10:40:15.281938 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7f646dd4d8-v72dv"
Oct 11 10:40:15.480921 master-1 kubenswrapper[4771]: I1011 10:40:15.480844 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"5cb08c81b77eb576dca7c07741e584389136fa4bff5ab56041056a7455f7b9f7"}
Oct 11 10:40:15.480921 master-1 kubenswrapper[4771]: I1011 10:40:15.480897 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"17632de8110e3b81a16e263c87b729e2324b6ff74c3372023897bfeab09634b0"}
Oct 11 10:40:15.480921 master-1 kubenswrapper[4771]: I1011 10:40:15.480913 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"e41e7753-f8a4-4f83-8061-b1610912b8e5","Type":"ContainerStarted","Data":"d299a1f483193709c3f90f4daf69e455c6ac2fa5e39b9379cbcae39ed442cb31"}
Oct 11 10:40:15.529409 master-1 kubenswrapper[4771]: I1011 10:40:15.529276 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-1" podStartSLOduration=3.105603119 podStartE2EDuration="6.529253534s" podCreationTimestamp="2025-10-11 10:40:09 +0000 UTC" firstStartedPulling="2025-10-11 10:40:10.436540469 +0000 UTC m=+842.410766940" lastFinishedPulling="2025-10-11 10:40:13.860190914 +0000 UTC m=+845.834417355" observedRunningTime="2025-10-11 10:40:15.522155747 +0000 UTC m=+847.496382238" watchObservedRunningTime="2025-10-11 10:40:15.529253534 +0000 UTC m=+847.503479995"
Oct 11 10:40:15.941586 master-0 kubenswrapper[4790]: I1011 10:40:15.941471 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"34518034d2159a78d44e6347c922575227248da01713a7d4cc5aaae730daee11"}
Oct 11 10:40:15.941586 master-0 kubenswrapper[4790]: I1011 10:40:15.941567 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"45a804de210a3a3a0c8d8956749a7ed5513d5ab5accb2e91de22b101c17bc054"}
Oct 11 10:40:15.941586 master-0 kubenswrapper[4790]: I1011 10:40:15.941591 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"fe7380bd6e0c3ce9adab3d9f7cab77bff97e9129678e0baa7b31a6fe54309115"}
Oct 11 10:40:15.942379 master-0 kubenswrapper[4790]: I1011 10:40:15.941610 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"547c293f85953ae3ff30fa764c110e076ee5e86cd2a7023b3377cd343f88ed87"}
Oct 11 10:40:15.942379 master-0 kubenswrapper[4790]: I1011 10:40:15.941630 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"b78e7077e53b29ae6c25df1f5e3f838054e54c94e7f01b73db52c5167d296283"}
Oct 11 10:40:15.942379 master-0 kubenswrapper[4790]: I1011 10:40:15.941649 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"27006098-2092-43c6-97f8-0219e7fc4b81","Type":"ContainerStarted","Data":"e0d793fe72abcb80701ddeacfac5bb5f45af601f249867c0837d3ff07394abc9"}
Oct 11 10:40:15.950782 master-0 kubenswrapper[4790]: I1011 10:40:15.950682 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5"
Oct 11 10:40:15.999313 master-0 kubenswrapper[4790]: I1011 10:40:15.999211 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.117112614 podStartE2EDuration="6.999173686s" podCreationTimestamp="2025-10-11 10:40:09 +0000 UTC" firstStartedPulling="2025-10-11 10:40:11.889234655 +0000 UTC m=+88.443694947" lastFinishedPulling="2025-10-11 10:40:14.771295727 +0000 UTC m=+91.325756019" observedRunningTime="2025-10-11 10:40:15.996968169 +0000 UTC m=+92.551428551" watchObservedRunningTime="2025-10-11 10:40:15.999173686 +0000 UTC m=+92.553633978"
Oct 11 10:40:16.116593 master-1 kubenswrapper[4771]: I1011 10:40:16.116468 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"]
Oct 11 10:40:16.117034 master-1 kubenswrapper[4771]: I1011 10:40:16.116912 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" containerID="cri-o://2ccd5ea4ca8c2b32e04ef7419d2c1c1ac0971dd1b18e1a37cd16058b70e5a98c" gracePeriod=120
Oct 11 10:40:16.117258 master-1 kubenswrapper[4771]: I1011 10:40:16.117072 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://a0772db7a40ce6f228f65f235a6668a5f2f1781a4f227000cf9ad01206d856f2" gracePeriod=120
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: I1011 10:40:16.143830 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:16.143898 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:16.144541 master-1 kubenswrapper[4771]: I1011 10:40:16.143923 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:16.464888 master-2 kubenswrapper[4776]: I1011 10:40:16.464769 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:40:16.464888 master-2 kubenswrapper[4776]: I1011 10:40:16.464874 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:40:16.493351 master-1 kubenswrapper[4771]: I1011 10:40:16.493084 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerID="a0772db7a40ce6f228f65f235a6668a5f2f1781a4f227000cf9ad01206d856f2" exitCode=0
Oct 11 10:40:16.493351 master-1 kubenswrapper[4771]: I1011 10:40:16.493176 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerDied","Data":"a0772db7a40ce6f228f65f235a6668a5f2f1781a4f227000cf9ad01206d856f2"}
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: I1011 10:40:17.258610 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:17.258700 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:17.260613 master-1 kubenswrapper[4771]: I1011 10:40:17.258713 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: I1011 10:40:18.245029 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:18.245108 master-1 kubenswrapper[4771]: I1011 10:40:18.245093 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:19.669814 master-0 kubenswrapper[4790]: I1011 10:40:19.669655 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Oct 11 10:40:19.689040 master-1 kubenswrapper[4771]: I1011 10:40:19.688951 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-1"
Oct 11 10:40:20.024174 master-2 kubenswrapper[4776]: I1011 10:40:20.024065 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:40:20.024174 master-2 kubenswrapper[4776]: I1011 10:40:20.024143 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:40:20.077945 master-0 kubenswrapper[4790]: I1011 10:40:20.077828 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body=
Oct 11 10:40:20.078321 master-0 kubenswrapper[4790]: I1011 10:40:20.077943 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused"
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: I1011 10:40:21.140867 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:21.140927 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:21.141939 master-1 kubenswrapper[4771]: I1011 10:40:21.141608 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:21.141939 master-1 kubenswrapper[4771]: I1011 10:40:21.141810 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:40:21.475727 master-2 kubenswrapper[4776]: I1011 10:40:21.475657 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: I1011 10:40:22.255566 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:22.255612 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:22.256933 master-1 kubenswrapper[4771]: I1011 10:40:22.256513 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:22.581168 master-1 kubenswrapper[4771]: I1011 10:40:22.581075 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:40:22.581168 master-1 kubenswrapper[4771]: I1011 10:40:22.581156 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: I1011 10:40:23.245071 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:23.245171 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:23.247142 master-1 kubenswrapper[4771]: I1011 10:40:23.245175 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: I1011 10:40:26.143279 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:26.143403 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:26.144632 master-1 kubenswrapper[4771]: I1011 10:40:26.143586 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: I1011 10:40:27.257386 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:27.257479 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:27.258705 master-1 kubenswrapper[4771]: I1011 10:40:27.257507 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:27.258705 master-1 kubenswrapper[4771]: I1011 10:40:27.257648 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z"
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: I1011 10:40:28.244205 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:28.244326 master-1
kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:40:28.244326 master-1 
kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:28.244326 master-1 kubenswrapper[4771]: I1011 10:40:28.244296 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:40:29.825291 master-0 kubenswrapper[4790]: I1011 10:40:29.825194 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Oct 11 10:40:29.826460 master-0 kubenswrapper[4790]: I1011 10:40:29.826135 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:29.870335 master-0 kubenswrapper[4790]: I1011 10:40:29.870202 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Oct 11 10:40:30.004545 master-0 kubenswrapper[4790]: I1011 10:40:30.004437 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.004545 master-0 kubenswrapper[4790]: I1011 10:40:30.004547 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.023863 master-2 kubenswrapper[4776]: I1011 10:40:30.023771 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:30.023863 master-2 kubenswrapper[4776]: I1011 10:40:30.023840 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:30.077542 master-0 kubenswrapper[4790]: I1011 10:40:30.077368 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body= Oct 11 10:40:30.077542 master-0 kubenswrapper[4790]: I1011 10:40:30.077475 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" Oct 11 10:40:30.106328 master-0 kubenswrapper[4790]: I1011 10:40:30.106256 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.106491 master-0 kubenswrapper[4790]: I1011 10:40:30.106388 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.106491 master-0 kubenswrapper[4790]: I1011 10:40:30.106421 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.106694 master-0 kubenswrapper[4790]: I1011 10:40:30.106602 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb4d3d6bdbb62903356b2987e206d2-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6bb4d3d6bdbb62903356b2987e206d2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.168636 master-0 kubenswrapper[4790]: I1011 10:40:30.168540 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:40:30.190660 master-0 kubenswrapper[4790]: W1011 10:40:30.190581 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6bb4d3d6bdbb62903356b2987e206d2.slice/crio-d11c0d73961fef96a17d590ac532686405f30611b8369d332f54216607db5de0 WatchSource:0}: Error finding container d11c0d73961fef96a17d590ac532686405f30611b8369d332f54216607db5de0: Status 404 returned error can't find the container with id d11c0d73961fef96a17d590ac532686405f30611b8369d332f54216607db5de0 Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: I1011 10:40:30.263951 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:30.264026 master-2 
kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: I1011 10:40:30.264004 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:31.022462 master-0 kubenswrapper[4790]: I1011 10:40:31.022395 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6bb4d3d6bdbb62903356b2987e206d2","Type":"ContainerStarted","Data":"d11c0d73961fef96a17d590ac532686405f30611b8369d332f54216607db5de0"} Oct 11 10:40:31.035384 master-0 kubenswrapper[4790]: I1011 10:40:31.032505 4790 generic.go:334] "Generic (PLEG): container finished" podID="6dd18b40-5213-44f7-83cd-99076fb3ee73" containerID="f810af855d03b458d1c2e2f8afd6d54238f19e74d825ff17da48e7f4eba7e4c6" exitCode=0 Oct 11 10:40:31.035384 master-0 kubenswrapper[4790]: I1011 10:40:31.032593 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6dd18b40-5213-44f7-83cd-99076fb3ee73","Type":"ContainerDied","Data":"f810af855d03b458d1c2e2f8afd6d54238f19e74d825ff17da48e7f4eba7e4c6"} Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: I1011 10:40:31.143277 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:31.143377 master-1 
kubenswrapper[4771]: [+]log ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:31.143377 master-1 kubenswrapper[4771]: I1011 10:40:31.143372 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: I1011 10:40:32.254524 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: 
[+]etcd-readiness excluded: ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:32.254590 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:32.255649 master-1 kubenswrapper[4771]: I1011 10:40:32.254601 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:32.358112 master-0 kubenswrapper[4790]: I1011 10:40:32.358053 4790 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:40:32.534978 master-0 kubenswrapper[4790]: I1011 10:40:32.534859 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock\") pod \"6dd18b40-5213-44f7-83cd-99076fb3ee73\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " Oct 11 10:40:32.535936 master-0 kubenswrapper[4790]: I1011 10:40:32.535027 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access\") pod \"6dd18b40-5213-44f7-83cd-99076fb3ee73\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " Oct 11 10:40:32.535936 master-0 kubenswrapper[4790]: I1011 10:40:32.535089 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir\") pod \"6dd18b40-5213-44f7-83cd-99076fb3ee73\" (UID: \"6dd18b40-5213-44f7-83cd-99076fb3ee73\") " Oct 11 10:40:32.535936 master-0 kubenswrapper[4790]: I1011 10:40:32.535416 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6dd18b40-5213-44f7-83cd-99076fb3ee73" (UID: "6dd18b40-5213-44f7-83cd-99076fb3ee73"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:40:32.535936 master-0 kubenswrapper[4790]: I1011 10:40:32.535465 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock" (OuterVolumeSpecName: "var-lock") pod "6dd18b40-5213-44f7-83cd-99076fb3ee73" (UID: "6dd18b40-5213-44f7-83cd-99076fb3ee73"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:40:32.541868 master-0 kubenswrapper[4790]: I1011 10:40:32.541805 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6dd18b40-5213-44f7-83cd-99076fb3ee73" (UID: "6dd18b40-5213-44f7-83cd-99076fb3ee73"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:40:32.581956 master-1 kubenswrapper[4771]: I1011 10:40:32.581842 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:40:32.582307 master-1 kubenswrapper[4771]: I1011 10:40:32.581987 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:40:32.637621 master-0 kubenswrapper[4790]: I1011 10:40:32.637415 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dd18b40-5213-44f7-83cd-99076fb3ee73-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:40:32.637621 master-0 
kubenswrapper[4790]: I1011 10:40:32.637500 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:40:32.637621 master-0 kubenswrapper[4790]: I1011 10:40:32.637528 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6dd18b40-5213-44f7-83cd-99076fb3ee73-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:40:33.048113 master-0 kubenswrapper[4790]: I1011 10:40:33.047954 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"6dd18b40-5213-44f7-83cd-99076fb3ee73","Type":"ContainerDied","Data":"65085622d36bce3903d45075fbfade9a38ec4d90dad3e9cbfb565e4e9d566b71"} Oct 11 10:40:33.048113 master-0 kubenswrapper[4790]: I1011 10:40:33.048022 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65085622d36bce3903d45075fbfade9a38ec4d90dad3e9cbfb565e4e9d566b71" Oct 11 10:40:33.048113 master-0 kubenswrapper[4790]: I1011 10:40:33.048113 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: I1011 10:40:33.245314 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-apiextensions-controllers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-discovery-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:33.245455 master-1 kubenswrapper[4771]: I1011 10:40:33.245425 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:34.328633 master-0 kubenswrapper[4790]: I1011 10:40:34.328575 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-0" Oct 11 10:40:34.382880 master-1 kubenswrapper[4771]: I1011 10:40:34.382771 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-1" Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: I1011 10:40:36.145949 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:36.146034 
master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:36.146034 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:36.147573 master-1 kubenswrapper[4771]: I1011 10:40:36.146047 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: I1011 10:40:37.257723 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok 
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:37.257890 master-1 kubenswrapper[4771]: I1011 10:40:37.257833 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:37.826277 master-0 kubenswrapper[4790]: I1011 10:40:37.826196 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Oct 11 10:40:37.826964 master-0 kubenswrapper[4790]: E1011 10:40:37.826476 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd18b40-5213-44f7-83cd-99076fb3ee73" containerName="installer"
Oct 11 10:40:37.826964 master-0 kubenswrapper[4790]: I1011 10:40:37.826498 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd18b40-5213-44f7-83cd-99076fb3ee73" containerName="installer"
Oct 11 10:40:37.826964 master-0 kubenswrapper[4790]: I1011 10:40:37.826618 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd18b40-5213-44f7-83cd-99076fb3ee73" containerName="installer"
Oct 11 10:40:37.828516 master-0 kubenswrapper[4790]: I1011 10:40:37.828476 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.888842 master-0 kubenswrapper[4790]: I1011 10:40:37.888663 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Oct 11 10:40:37.927595 master-0 kubenswrapper[4790]: I1011 10:40:37.927539 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.927701 master-0 kubenswrapper[4790]: I1011 10:40:37.927605 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.927701 master-0 kubenswrapper[4790]: I1011 10:40:37.927637 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.927701 master-0 kubenswrapper[4790]: I1011 10:40:37.927660 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.927701 master-0 kubenswrapper[4790]: I1011 10:40:37.927692 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:37.927844 master-0 kubenswrapper[4790]: I1011 10:40:37.927733 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029244 master-0 kubenswrapper[4790]: I1011 10:40:38.029173 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029244 master-0 kubenswrapper[4790]: I1011 10:40:38.029258 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029351 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029390 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029394 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029479 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029495 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029434 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029503 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029596 master-0 kubenswrapper[4790]: I1011 10:40:38.029572 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029909 master-0 kubenswrapper[4790]: I1011 10:40:38.029604 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.029909 master-0 kubenswrapper[4790]: I1011 10:40:38.029652 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir\") pod \"etcd-master-0\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") " pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.069576 master-0 kubenswrapper[4790]: I1011 10:40:38.069509 4790 generic.go:334] "Generic (PLEG): container finished" podID="a3934355-bb61-4316-b164-05294e12906a" containerID="7ad4b389f620d673e1c84d3f718fc34561da93dd445afd695e3bc1db0ae8b3cd" exitCode=0
Oct 11 10:40:38.069576 master-0 kubenswrapper[4790]: I1011 10:40:38.069576 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-0" event={"ID":"a3934355-bb61-4316-b164-05294e12906a","Type":"ContainerDied","Data":"7ad4b389f620d673e1c84d3f718fc34561da93dd445afd695e3bc1db0ae8b3cd"}
Oct 11 10:40:38.180836 master-0 kubenswrapper[4790]: I1011 10:40:38.180782 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: I1011 10:40:38.243650 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:38.243750 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:38.246634 master-1 kubenswrapper[4771]: I1011 10:40:38.243762 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:39.078610 master-0 kubenswrapper[4790]: I1011 10:40:39.078471 4790 generic.go:334] "Generic (PLEG): container finished" podID="c6bb4d3d6bdbb62903356b2987e206d2" containerID="289df993fad1f4fba2ca17fd7a3cf2133d080a63765dafc8aa2bcf6b7c69fc5b" exitCode=0
Oct 11 10:40:39.079544 master-0 kubenswrapper[4790]: I1011 10:40:39.078616 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6bb4d3d6bdbb62903356b2987e206d2","Type":"ContainerDied","Data":"289df993fad1f4fba2ca17fd7a3cf2133d080a63765dafc8aa2bcf6b7c69fc5b"}
Oct 11 10:40:39.082228 master-0 kubenswrapper[4790]: I1011 10:40:39.082160 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"3b919da8ddb7be0dba8b9a6a99bf3d4fe8ae3f53dd95f938e572564de35b4f48"}
Oct 11 10:40:39.083812 master-0 kubenswrapper[4790]: I1011 10:40:39.083558 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"]
Oct 11 10:40:39.084250 master-0 kubenswrapper[4790]: I1011 10:40:39.084199 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:39.089399 master-0 kubenswrapper[4790]: I1011 10:40:39.089277 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Oct 11 10:40:39.089789 master-0 kubenswrapper[4790]: I1011 10:40:39.089746 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt"
Oct 11 10:40:39.090034 master-0 kubenswrapper[4790]: I1011 10:40:39.090021 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"default-dockercfg-4hwjx"
Oct 11 10:40:39.103691 master-0 kubenswrapper[4790]: I1011 10:40:39.103621 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"]
Oct 11 10:40:39.148252 master-0 kubenswrapper[4790]: I1011 10:40:39.148189 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzddm\" (UniqueName: \"kubernetes.io/projected/95f1328a-5ab8-4276-9bfd-55b3dbb2a994-kube-api-access-tzddm\") pod \"openshift-kube-scheduler-guard-master-0\" (UID: \"95f1328a-5ab8-4276-9bfd-55b3dbb2a994\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:39.248801 master-0 kubenswrapper[4790]: I1011 10:40:39.248691 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzddm\" (UniqueName: \"kubernetes.io/projected/95f1328a-5ab8-4276-9bfd-55b3dbb2a994-kube-api-access-tzddm\") pod \"openshift-kube-scheduler-guard-master-0\" (UID: \"95f1328a-5ab8-4276-9bfd-55b3dbb2a994\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:39.273253 master-0 kubenswrapper[4790]: I1011 10:40:39.273213 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzddm\" (UniqueName: \"kubernetes.io/projected/95f1328a-5ab8-4276-9bfd-55b3dbb2a994-kube-api-access-tzddm\") pod \"openshift-kube-scheduler-guard-master-0\" (UID: \"95f1328a-5ab8-4276-9bfd-55b3dbb2a994\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:39.323867 master-2 kubenswrapper[4776]: I1011 10:40:39.323755 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/0.log"
Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323892 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1" exitCode=137
Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerDied","Data":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"}
Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"}
Oct 11 10:40:39.388157 master-0 kubenswrapper[4790]: I1011 10:40:39.388116 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0"
Oct 11 10:40:39.416385 master-0 kubenswrapper[4790]: I1011 10:40:39.416334 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:39.452747 master-0 kubenswrapper[4790]: I1011 10:40:39.452636 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir\") pod \"a3934355-bb61-4316-b164-05294e12906a\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") "
Oct 11 10:40:39.452859 master-0 kubenswrapper[4790]: I1011 10:40:39.452795 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock\") pod \"a3934355-bb61-4316-b164-05294e12906a\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") "
Oct 11 10:40:39.452911 master-0 kubenswrapper[4790]: I1011 10:40:39.452895 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") pod \"a3934355-bb61-4316-b164-05294e12906a\" (UID: \"a3934355-bb61-4316-b164-05294e12906a\") "
Oct 11 10:40:39.453065 master-0 kubenswrapper[4790]: I1011 10:40:39.453009 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3934355-bb61-4316-b164-05294e12906a" (UID: "a3934355-bb61-4316-b164-05294e12906a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:39.453115 master-0 kubenswrapper[4790]: I1011 10:40:39.453097 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock" (OuterVolumeSpecName: "var-lock") pod "a3934355-bb61-4316-b164-05294e12906a" (UID: "a3934355-bb61-4316-b164-05294e12906a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:39.453243 master-0 kubenswrapper[4790]: I1011 10:40:39.453195 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:40:39.453295 master-0 kubenswrapper[4790]: I1011 10:40:39.453244 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3934355-bb61-4316-b164-05294e12906a-var-lock\") on node \"master-0\" DevicePath \"\""
Oct 11 10:40:39.457585 master-0 kubenswrapper[4790]: I1011 10:40:39.457558 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3934355-bb61-4316-b164-05294e12906a" (UID: "a3934355-bb61-4316-b164-05294e12906a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:40:39.554173 master-0 kubenswrapper[4790]: I1011 10:40:39.554134 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3934355-bb61-4316-b164-05294e12906a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Oct 11 10:40:39.825674 master-0 kubenswrapper[4790]: I1011 10:40:39.825606 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"]
Oct 11 10:40:39.831087 master-0 kubenswrapper[4790]: W1011 10:40:39.831042 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95f1328a_5ab8_4276_9bfd_55b3dbb2a994.slice/crio-c55f42cfe8653fa8706646db8a6223894e0bf53deb5fa86dfa59ef2811cbde3d WatchSource:0}: Error finding container c55f42cfe8653fa8706646db8a6223894e0bf53deb5fa86dfa59ef2811cbde3d: Status 404 returned error can't find the container with id c55f42cfe8653fa8706646db8a6223894e0bf53deb5fa86dfa59ef2811cbde3d
Oct 11 10:40:40.023410 master-2 kubenswrapper[4776]: I1011 10:40:40.023282 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:40:40.023410 master-2 kubenswrapper[4776]: I1011 10:40:40.023346 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:40:40.076952 master-0 kubenswrapper[4790]: I1011 10:40:40.076882 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body=
Oct 11 10:40:40.077195 master-0 kubenswrapper[4790]: I1011 10:40:40.076962 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused"
Oct 11 10:40:40.088539 master-0 kubenswrapper[4790]: I1011 10:40:40.088459 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-0" event={"ID":"a3934355-bb61-4316-b164-05294e12906a","Type":"ContainerDied","Data":"4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376"}
Oct 11 10:40:40.089090 master-0 kubenswrapper[4790]: I1011 10:40:40.088566 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1e893e4dcbac435d2da0e68c078d559bd07b9d3b598e66849b7e045c462376"
Oct 11 10:40:40.089090 master-0 kubenswrapper[4790]: I1011 10:40:40.088581 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-0"
Oct 11 10:40:40.090279 master-0 kubenswrapper[4790]: I1011 10:40:40.090217 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0" event={"ID":"95f1328a-5ab8-4276-9bfd-55b3dbb2a994","Type":"ContainerStarted","Data":"0ce8427bbe6d75f182a5b32d8cd2f22fe40fc718730d32921af86371a9379011"}
Oct 11 10:40:40.090279 master-0 kubenswrapper[4790]: I1011 10:40:40.090279 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0" event={"ID":"95f1328a-5ab8-4276-9bfd-55b3dbb2a994","Type":"ContainerStarted","Data":"c55f42cfe8653fa8706646db8a6223894e0bf53deb5fa86dfa59ef2811cbde3d"}
Oct 11 10:40:40.090863 master-0 kubenswrapper[4790]: I1011 10:40:40.090813 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:40.093444 master-0 kubenswrapper[4790]: I1011 10:40:40.093381 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6bb4d3d6bdbb62903356b2987e206d2","Type":"ContainerStarted","Data":"e036684c78a5021fb48c289d54cae789fa1bf5823ee42d9f136d64fecac494b3"}
Oct 11 10:40:40.093444 master-0 kubenswrapper[4790]: I1011 10:40:40.093446 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6bb4d3d6bdbb62903356b2987e206d2","Type":"ContainerStarted","Data":"3bafb2f92a85e23ceb69467f8cda564274c1f8c23cdc1567f9df59d0d32f0e95"}
Oct 11 10:40:40.093562 master-0 kubenswrapper[4790]: I1011 10:40:40.093460 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6bb4d3d6bdbb62903356b2987e206d2","Type":"ContainerStarted","Data":"5ae70bae04836aa8a45f8cf884baea6ccb012cd945a396865da15f8d293dd984"}
Oct 11 10:40:40.093609 master-0 kubenswrapper[4790]: I1011 10:40:40.093580 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Oct 11 10:40:40.096908 master-0 kubenswrapper[4790]: I1011 10:40:40.096869 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"
Oct 11 10:40:40.106716 master-0 kubenswrapper[4790]: I1011 10:40:40.106284 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7d46fcc5c6-n88q4"
Oct 11 10:40:40.116457 master-0 kubenswrapper[4790]: I1011 10:40:40.116380 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0" podStartSLOduration=1.116359718 podStartE2EDuration="1.116359718s" podCreationTimestamp="2025-10-11 10:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:40.114737327 +0000 UTC m=+116.669197629" watchObservedRunningTime="2025-10-11 10:40:40.116359718 +0000 UTC m=+116.670820010"
Oct 11 10:40:40.177538 master-0 kubenswrapper[4790]: I1011 10:40:40.177431 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=11.177392941 podStartE2EDuration="11.177392941s" podCreationTimestamp="2025-10-11 10:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:40.172865854 +0000 UTC m=+116.727326156" watchObservedRunningTime="2025-10-11 10:40:40.177392941 +0000 UTC m=+116.731853233"
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: I1011 10:40:41.142519 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:41.142701 master-1 kubenswrapper[4771]: I1011 10:40:41.142615 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:41.464126 master-2 kubenswrapper[4776]: I1011 10:40:41.464062 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:40:41.464795 master-2 kubenswrapper[4776]: I1011 10:40:41.464127 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:40:42.107171 master-0 kubenswrapper[4790]: I1011 10:40:42.107062 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796" exitCode=0
Oct 11 10:40:42.108266 master-0 kubenswrapper[4790]: I1011 10:40:42.107502 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerDied","Data":"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796"}
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: I1011 10:40:42.256430 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:42.256521 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:42.257447 master-1 kubenswrapper[4771]: I1011 10:40:42.256521 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:42.580540 master-1 kubenswrapper[4771]: I1011 10:40:42.580469 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:40:42.580780 master-1 kubenswrapper[4771]: I1011 10:40:42.580556 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:40:43.114938 master-0 kubenswrapper[4790]: I1011 10:40:43.114840 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f" exitCode=0
Oct 11 10:40:43.114938 master-0 kubenswrapper[4790]: I1011 10:40:43.114915 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerDied","Data":"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f"}
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: I1011 10:40:43.244915 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]:
[+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:43.245000 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:43.247998 master-1 kubenswrapper[4771]: I1011 10:40:43.245075 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:44.126423 master-0 kubenswrapper[4790]: I1011 10:40:44.126351 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" 
containerID="033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0" exitCode=0 Oct 11 10:40:44.127635 master-0 kubenswrapper[4790]: I1011 10:40:44.126475 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerDied","Data":"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0"} Oct 11 10:40:44.819361 master-0 kubenswrapper[4790]: I1011 10:40:44.819244 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-bn2sv" Oct 11 10:40:45.136408 master-0 kubenswrapper[4790]: I1011 10:40:45.136338 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970"} Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: I1011 10:40:46.144614 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:46.144739 master-1 
kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:46.144739 master-1 kubenswrapper[4771]: I1011 10:40:46.144728 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:46.149331 master-0 kubenswrapper[4790]: I1011 10:40:46.149273 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/0.log" Oct 11 10:40:46.151549 master-0 kubenswrapper[4790]: I1011 10:40:46.151483 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="35dda1dfed46ef2979f5c931d6cb1625edfbd4b3165e5e094d4299e095ab99b2" exitCode=1 Oct 11 10:40:46.151666 master-0 kubenswrapper[4790]: I1011 10:40:46.151545 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef"} Oct 11 10:40:46.151666 master-0 kubenswrapper[4790]: I1011 10:40:46.151584 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377"} Oct 11 10:40:46.151666 master-0 kubenswrapper[4790]: I1011 10:40:46.151600 4790 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerDied","Data":"35dda1dfed46ef2979f5c931d6cb1625edfbd4b3165e5e094d4299e095ab99b2"} Oct 11 10:40:46.464254 master-2 kubenswrapper[4776]: I1011 10:40:46.464181 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:46.464254 master-2 kubenswrapper[4776]: I1011 10:40:46.464256 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:46.481909 master-0 kubenswrapper[4790]: I1011 10:40:46.481801 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-0"] Oct 11 10:40:47.176599 master-0 kubenswrapper[4790]: I1011 10:40:47.176456 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/0.log" Oct 11 10:40:47.178817 master-0 kubenswrapper[4790]: I1011 10:40:47.178691 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060"} Oct 11 10:40:47.179751 master-0 kubenswrapper[4790]: I1011 10:40:47.179661 4790 scope.go:117] "RemoveContainer" containerID="35dda1dfed46ef2979f5c931d6cb1625edfbd4b3165e5e094d4299e095ab99b2" Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: I1011 10:40:47.257496 4771 
patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: readyz check 
failed Oct 11 10:40:47.257650 master-1 kubenswrapper[4771]: I1011 10:40:47.257592 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:48.181136 master-0 kubenswrapper[4790]: I1011 10:40:48.181022 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Oct 11 10:40:48.181136 master-0 kubenswrapper[4790]: I1011 10:40:48.181094 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Oct 11 10:40:48.181136 master-0 kubenswrapper[4790]: I1011 10:40:48.181112 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Oct 11 10:40:48.181136 master-0 kubenswrapper[4790]: I1011 10:40:48.181133 4790 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-0" Oct 11 10:40:48.188884 master-0 kubenswrapper[4790]: I1011 10:40:48.188828 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log" Oct 11 10:40:48.191140 master-0 kubenswrapper[4790]: I1011 10:40:48.191084 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/0.log" Oct 11 10:40:48.193335 master-0 kubenswrapper[4790]: I1011 10:40:48.193274 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" exitCode=1 Oct 11 10:40:48.193335 master-0 kubenswrapper[4790]: I1011 10:40:48.193313 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerDied","Data":"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227"} Oct 11 10:40:48.193553 master-0 kubenswrapper[4790]: I1011 10:40:48.193372 4790 scope.go:117] "RemoveContainer" containerID="35dda1dfed46ef2979f5c931d6cb1625edfbd4b3165e5e094d4299e095ab99b2" Oct 11 10:40:48.194335 master-0 kubenswrapper[4790]: I1011 10:40:48.194252 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:40:48.194783 master-0 kubenswrapper[4790]: E1011 10:40:48.194744 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-0_openshift-etcd(a7e53a8977ce5fc5588aef94f91dcc24)\"" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: I1011 10:40:48.247664 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok 
Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:40:48.247756 master-1 
kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:40:48.247756 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:40:48.251617 master-1 kubenswrapper[4771]: I1011 10:40:48.247763 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: I1011 10:40:48.387590 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: 
I1011 10:40:48.387595 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: I1011 10:40:48.387669 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:48.388882 master-2 kubenswrapper[4776]: I1011 10:40:48.388308 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:49.203006 master-0 kubenswrapper[4790]: I1011 10:40:49.202919 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log" Oct 11 10:40:49.207380 master-0 kubenswrapper[4790]: I1011 10:40:49.207310 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:40:49.207653 master-0 kubenswrapper[4790]: E1011 10:40:49.207601 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-0_openshift-etcd(a7e53a8977ce5fc5588aef94f91dcc24)\"" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388561 4776 generic.go:334] "Generic (PLEG): container finished" podID="5473628e-94c8-4706-bb03-ff4836debe5f" 
containerID="bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" exitCode=0 Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388620 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerDied","Data":"bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0"} Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388687 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerDied","Data":"8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39"} Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388702 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39" Oct 11 10:40:49.390831 master-2 kubenswrapper[4776]: I1011 10:40:49.390775 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556116 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556191 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556294 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556360 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: 
\"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556409 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556474 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.557120 master-2 kubenswrapper[4776]: I1011 10:40:49.557052 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:40:49.557824 master-2 kubenswrapper[4776]: I1011 10:40:49.557779 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log" (OuterVolumeSpecName: "audit-log") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:40:49.558117 master-2 kubenswrapper[4776]: I1011 10:40:49.558046 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:40:49.560798 master-2 kubenswrapper[4776]: I1011 10:40:49.560732 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:40:49.560945 master-2 kubenswrapper[4776]: I1011 10:40:49.560884 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:40:49.561158 master-2 kubenswrapper[4776]: I1011 10:40:49.561117 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6" (OuterVolumeSpecName: "kube-api-access-q2vd6") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "kube-api-access-q2vd6". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:40:49.561544 master-2 kubenswrapper[4776]: I1011 10:40:49.561505 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658595 4776 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658657 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658686 4776 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658727 4776 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658745 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658755 4776 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658767 4776 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:40:50.024514 master-2 kubenswrapper[4776]: I1011 10:40:50.024340 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body=
Oct 11 10:40:50.024514 master-2 kubenswrapper[4776]: I1011 10:40:50.024470 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused"
Oct 11 10:40:50.078266 master-0 kubenswrapper[4790]: I1011 10:40:50.078186 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body=
Oct 11 10:40:50.078593 master-0 kubenswrapper[4790]: I1011 10:40:50.078286 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused"
Oct 11 10:40:50.395936 master-2 kubenswrapper[4776]: I1011 10:40:50.395709 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp"
Oct 11 10:40:50.446921 master-2 kubenswrapper[4776]: I1011 10:40:50.446864 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"]
Oct 11 10:40:50.450860 master-2 kubenswrapper[4776]: I1011 10:40:50.450807 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"]
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: I1011 10:40:51.144491 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:51.144578 master-1 kubenswrapper[4771]: I1011 10:40:51.144568 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:51.464487 master-2 kubenswrapper[4776]: I1011 10:40:51.464372 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:40:51.465782 master-2 kubenswrapper[4776]: I1011 10:40:51.464476 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:40:51.465782 master-2 kubenswrapper[4776]: I1011 10:40:51.464722 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2"
Oct 11 10:40:51.465980 master-2 kubenswrapper[4776]: I1011 10:40:51.465868 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:40:51.465980 master-2 kubenswrapper[4776]: I1011 10:40:51.465920 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:40:51.511049 master-0 kubenswrapper[4790]: I1011 10:40:51.510946 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-guard-master-0"]
Oct 11 10:40:51.511975 master-0 kubenswrapper[4790]: E1011 10:40:51.511187 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3934355-bb61-4316-b164-05294e12906a" containerName="installer"
Oct 11 10:40:51.511975 master-0 kubenswrapper[4790]: I1011 10:40:51.511209 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3934355-bb61-4316-b164-05294e12906a" containerName="installer"
Oct 11 10:40:51.511975 master-0 kubenswrapper[4790]: I1011 10:40:51.511310 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3934355-bb61-4316-b164-05294e12906a" containerName="installer"
Oct 11 10:40:51.511975 master-0 kubenswrapper[4790]: I1011 10:40:51.511887 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:51.515757 master-0 kubenswrapper[4790]: I1011 10:40:51.515629 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"default-dockercfg-rkxgf"
Oct 11 10:40:51.515945 master-0 kubenswrapper[4790]: I1011 10:40:51.515853 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt"
Oct 11 10:40:51.516057 master-0 kubenswrapper[4790]: I1011 10:40:51.515914 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Oct 11 10:40:51.528865 master-0 kubenswrapper[4790]: I1011 10:40:51.528800 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-0"]
Oct 11 10:40:51.612526 master-0 kubenswrapper[4790]: I1011 10:40:51.612431 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64gl\" (UniqueName: \"kubernetes.io/projected/c6436766-e7b0-471b-acbf-861280191521-kube-api-access-q64gl\") pod \"etcd-guard-master-0\" (UID: \"c6436766-e7b0-471b-acbf-861280191521\") " pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:51.714305 master-0 kubenswrapper[4790]: I1011 10:40:51.714156 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64gl\" (UniqueName: \"kubernetes.io/projected/c6436766-e7b0-471b-acbf-861280191521-kube-api-access-q64gl\") pod \"etcd-guard-master-0\" (UID: \"c6436766-e7b0-471b-acbf-861280191521\") " pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:51.736002 master-0 kubenswrapper[4790]: I1011 10:40:51.735886 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64gl\" (UniqueName: \"kubernetes.io/projected/c6436766-e7b0-471b-acbf-861280191521-kube-api-access-q64gl\") pod \"etcd-guard-master-0\" (UID: \"c6436766-e7b0-471b-acbf-861280191521\") " pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:51.831989 master-0 kubenswrapper[4790]: I1011 10:40:51.831886 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:52.076459 master-2 kubenswrapper[4776]: I1011 10:40:52.076352 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" path="/var/lib/kubelet/pods/5473628e-94c8-4706-bb03-ff4836debe5f/volumes"
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: I1011 10:40:52.255091 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:52.255191 master-1 kubenswrapper[4771]: I1011 10:40:52.255171 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:52.396040 master-0 kubenswrapper[4790]: I1011 10:40:52.395941 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-0"]
Oct 11 10:40:52.399752 master-0 kubenswrapper[4790]: W1011 10:40:52.399633 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6436766_e7b0_471b_acbf_861280191521.slice/crio-51c414a844b478d418e57b3713e6576b51b14b364314441dcee769fc9449f46a WatchSource:0}: Error finding container 51c414a844b478d418e57b3713e6576b51b14b364314441dcee769fc9449f46a: Status 404 returned error can't find the container with id 51c414a844b478d418e57b3713e6576b51b14b364314441dcee769fc9449f46a
Oct 11 10:40:52.581520 master-1 kubenswrapper[4771]: I1011 10:40:52.581441 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body=
Oct 11 10:40:52.581854 master-1 kubenswrapper[4771]: I1011 10:40:52.581541 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused"
Oct 11 10:40:53.181252 master-0 kubenswrapper[4790]: I1011 10:40:53.181127 4790 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:53.181925 master-0 kubenswrapper[4790]: I1011 10:40:53.181265 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:53.182677 master-0 kubenswrapper[4790]: I1011 10:40:53.182628 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227"
Oct 11 10:40:53.183254 master-0 kubenswrapper[4790]: E1011 10:40:53.183192 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-0_openshift-etcd(a7e53a8977ce5fc5588aef94f91dcc24)\"" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24"
Oct 11 10:40:53.232877 master-0 kubenswrapper[4790]: I1011 10:40:53.232770 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-0" event={"ID":"c6436766-e7b0-471b-acbf-861280191521","Type":"ContainerStarted","Data":"9e98490fd3666e68c05ba349bef300928b07e9009d6f846b655d69140196a8a3"}
Oct 11 10:40:53.232877 master-0 kubenswrapper[4790]: I1011 10:40:53.232882 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-0" event={"ID":"c6436766-e7b0-471b-acbf-861280191521","Type":"ContainerStarted","Data":"51c414a844b478d418e57b3713e6576b51b14b364314441dcee769fc9449f46a"}
Oct 11 10:40:53.233129 master-0 kubenswrapper[4790]: I1011 10:40:53.232956 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-0"
Oct 11 10:40:53.239211 master-1 kubenswrapper[4771]: I1011 10:40:53.238558 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 11 10:40:53.239211 master-1 kubenswrapper[4771]: I1011 10:40:53.238662 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 11 10:40:53.285097 master-0 kubenswrapper[4790]: I1011 10:40:53.284977 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-guard-master-0" podStartSLOduration=2.28495318 podStartE2EDuration="2.28495318s" podCreationTimestamp="2025-10-11 10:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:53.281702526 +0000 UTC m=+129.836162818" watchObservedRunningTime="2025-10-11 10:40:53.28495318 +0000 UTC m=+129.839413492"
Oct 11 10:40:54.966649 master-1 kubenswrapper[4771]: I1011 10:40:54.966584 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log"
Oct 11 10:40:54.967778 master-1 kubenswrapper[4771]: I1011 10:40:54.967667 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:40:54.972974 master-1 kubenswrapper[4771]: I1011 10:40:54.972917 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022"
Oct 11 10:40:55.030316 master-1 kubenswrapper[4771]: I1011 10:40:55.030181 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") "
Oct 11 10:40:55.030316 master-1 kubenswrapper[4771]: I1011 10:40:55.030306 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") "
Oct 11 10:40:55.030845 master-1 kubenswrapper[4771]: I1011 10:40:55.030397 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:55.030845 master-1 kubenswrapper[4771]: I1011 10:40:55.030450 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") "
Oct 11 10:40:55.030845 master-1 kubenswrapper[4771]: I1011 10:40:55.030501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:55.030845 master-1 kubenswrapper[4771]: I1011 10:40:55.030551 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:40:55.031320 master-1 kubenswrapper[4771]: I1011 10:40:55.030901 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:40:55.031320 master-1 kubenswrapper[4771]: I1011 10:40:55.030935 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:40:55.031320 master-1 kubenswrapper[4771]: I1011 10:40:55.030961 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:40:55.792628 master-1 kubenswrapper[4771]: I1011 10:40:55.792531 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log"
Oct 11 10:40:55.793910 master-1 kubenswrapper[4771]: I1011 10:40:55.793839 4771 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a" exitCode=0
Oct 11 10:40:55.794022 master-1 kubenswrapper[4771]: I1011 10:40:55.793960 4771 scope.go:117] "RemoveContainer" containerID="50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07"
Oct 11 10:40:55.794022 master-1 kubenswrapper[4771]: I1011 10:40:55.793992 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:40:55.800791 master-1 kubenswrapper[4771]: I1011 10:40:55.800717 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022"
Oct 11 10:40:55.815969 master-1 kubenswrapper[4771]: I1011 10:40:55.814351 4771 scope.go:117] "RemoveContainer" containerID="637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4"
Oct 11 10:40:55.831950 master-1 kubenswrapper[4771]: I1011 10:40:55.831788 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022"
Oct 11 10:40:55.838945 master-1 kubenswrapper[4771]: I1011 10:40:55.838893 4771 scope.go:117] "RemoveContainer" containerID="7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109"
Oct 11 10:40:55.858582 master-1 kubenswrapper[4771]: I1011 10:40:55.858525 4771 scope.go:117] "RemoveContainer" containerID="0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd"
Oct 11 10:40:55.878610 master-1 kubenswrapper[4771]: I1011 10:40:55.878566 4771 scope.go:117] "RemoveContainer" containerID="be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a"
Oct 11 10:40:55.901842 master-1 kubenswrapper[4771]: I1011 10:40:55.901785 4771 scope.go:117] "RemoveContainer" containerID="70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912"
Oct 11 10:40:55.927112 master-1 kubenswrapper[4771]: I1011 10:40:55.926950 4771 scope.go:117] "RemoveContainer" containerID="50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07"
Oct 11 10:40:55.927771 master-1 kubenswrapper[4771]: E1011 10:40:55.927701 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07\": container with ID starting with 50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07 not found: ID does not exist" containerID="50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07"
Oct 11 10:40:55.927835 master-1 kubenswrapper[4771]: I1011 10:40:55.927785 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07"} err="failed to get container status \"50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07\": rpc error: code = NotFound desc = could not find container \"50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07\": container with ID starting with 50a31e36a673a3d932b0eed3747b62b95f6a9f4bd14409954bbb8b619a64ca07 not found: ID does not exist"
Oct 11 10:40:55.927896 master-1 kubenswrapper[4771]: I1011 10:40:55.927836 4771 scope.go:117] "RemoveContainer" containerID="637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4"
Oct 11 10:40:55.928524 master-1 kubenswrapper[4771]: E1011 10:40:55.928444 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4\": container with ID starting with 637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4 not found: ID does not exist" containerID="637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4"
Oct 11 10:40:55.928599 master-1 kubenswrapper[4771]: I1011 10:40:55.928542 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4"} err="failed to get container status \"637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4\": rpc error: code = NotFound desc = could not find container \"637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4\": container with ID starting with 637d81cee36b54a2c10877824c0d4c8cd57b4ef94675651e76c7ca2c91addea4 not found: ID does not exist"
Oct 11 10:40:55.928653 master-1 kubenswrapper[4771]: I1011 10:40:55.928607 4771 scope.go:117] "RemoveContainer" containerID="7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109"
Oct 11 10:40:55.929157 master-1 kubenswrapper[4771]: E1011 10:40:55.929101 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109\": container with ID starting with 7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109 not found: ID does not exist" containerID="7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109"
Oct 11 10:40:55.929219 master-1 kubenswrapper[4771]: I1011 10:40:55.929155 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109"} err="failed to get container status \"7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109\": rpc error: code = NotFound desc = could not find container \"7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109\": container with ID starting with 7808e9129366cc3f545a8ddafc086a76064c7552891a59b63333320814e12109 not found: ID does not exist"
Oct 11 10:40:55.929219 master-1 kubenswrapper[4771]: I1011 10:40:55.929187 4771 scope.go:117] "RemoveContainer" containerID="0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd"
Oct 11 10:40:55.929834 master-1 kubenswrapper[4771]: E1011 10:40:55.929773 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd\": container with ID starting with 0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd not found: ID does not exist" containerID="0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd"
Oct 11 10:40:55.929903 master-1 kubenswrapper[4771]: I1011 10:40:55.929836 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd"} err="failed to get container status \"0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd\": rpc error: code = NotFound desc = could not find container \"0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd\": container with ID starting with 0e245a1193b14427c4e53af18fc7d7caad0bae63a50c8f3671564cc8eb9cc3dd not found: ID does not exist"
Oct 11 10:40:55.929903 master-1 kubenswrapper[4771]: I1011 10:40:55.929878 4771 scope.go:117] "RemoveContainer" containerID="be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a"
Oct 11 10:40:55.930503 master-1 kubenswrapper[4771]: E1011 10:40:55.930457 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a\": container with ID starting with be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a not found: ID does not exist" containerID="be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a"
Oct 11 10:40:55.930566 master-1 kubenswrapper[4771]: I1011 10:40:55.930504 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a"} err="failed to get container status \"be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a\": rpc error: code = NotFound desc = could not find container \"be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a\": container with ID starting with be8c3f5de1224f1ddb95dff091a6c317c3ddd56bd6bebec9f107f4b1c1bd098a not found: ID does not exist"
Oct 11 10:40:55.930566 master-1 kubenswrapper[4771]: I1011 10:40:55.930539 4771 scope.go:117] "RemoveContainer" containerID="70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912"
Oct 11 10:40:55.931050 master-1 kubenswrapper[4771]: E1011 10:40:55.930970 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912\": container with ID starting with 70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912 not found: ID does not exist" containerID="70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912"
Oct 11 10:40:55.931114 master-1 kubenswrapper[4771]: I1011 10:40:55.931051 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912"} err="failed to get container status \"70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912\": rpc error: code = NotFound desc = could not find container \"70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912\": container with ID starting with 70b3ac8371a68d1ef8731071faccdda868469d141980d26eb4114af177d58912 not found: ID does not exist"
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: I1011 10:40:56.144610 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:56.144709 master-1 kubenswrapper[4771]: I1011 10:40:56.144702 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:56.450295 master-1 kubenswrapper[4771]: I1011 10:40:56.450151 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39186c2ebd02622803bdbec6984de2a" path="/var/lib/kubelet/pods/e39186c2ebd02622803bdbec6984de2a/volumes"
Oct 11 10:40:56.464586 master-2 kubenswrapper[4776]: I1011 10:40:56.464508 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:40:56.465301 master-2 kubenswrapper[4776]: I1011 10:40:56.464587 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: I1011 10:40:57.257431 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:40:57.257535 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:40:57.259972 master-1 kubenswrapper[4771]: I1011 10:40:57.257538 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:40:58.182275 master-0 kubenswrapper[4790]: I1011 10:40:58.182132 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:58.182275 master-0 kubenswrapper[4790]: I1011 10:40:58.182255 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Oct 11 10:40:58.183600 master-0 kubenswrapper[4790]: I1011 10:40:58.183538 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227"
Oct 11 10:40:58.233676 master-0 kubenswrapper[4790]: I1011 10:40:58.233579 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:40:58.233944 master-0 kubenswrapper[4790]: I1011 10:40:58.233700 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\":
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:40:58.238010 master-1 kubenswrapper[4771]: I1011 10:40:58.237936 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:40:58.238396 master-1 kubenswrapper[4771]: I1011 10:40:58.238010 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:40:58.388175 master-2 kubenswrapper[4776]: I1011 10:40:58.388043 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:58.388175 master-2 kubenswrapper[4776]: I1011 10:40:58.388158 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:59.271872 master-0 kubenswrapper[4790]: I1011 10:40:59.271785 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log" Oct 11 10:40:59.275374 master-0 kubenswrapper[4790]: I1011 10:40:59.275309 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"a7e53a8977ce5fc5588aef94f91dcc24","Type":"ContainerStarted","Data":"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca"} Oct 11 10:40:59.324595 master-0 kubenswrapper[4790]: I1011 10:40:59.324469 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=22.324447468 podStartE2EDuration="22.324447468s" podCreationTimestamp="2025-10-11 10:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:59.320788004 +0000 UTC m=+135.875248336" watchObservedRunningTime="2025-10-11 10:40:59.324447468 +0000 UTC m=+135.878907770" Oct 11 10:41:00.023504 master-2 kubenswrapper[4776]: I1011 10:41:00.023435 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:41:00.024025 master-2 kubenswrapper[4776]: I1011 10:41:00.023505 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:41:00.079960 master-0 kubenswrapper[4790]: I1011 10:41:00.079818 4790 patch_prober.go:28] interesting pod/console-76f8bc4746-9rjdm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" start-of-body= Oct 11 10:41:00.079960 master-0 kubenswrapper[4790]: I1011 10:41:00.079938 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-9rjdm" 
podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" probeResult="failure" output="Get \"https://10.130.0.14:8443/health\": dial tcp 10.130.0.14:8443: connect: connection refused" Oct 11 10:41:00.437651 master-1 kubenswrapper[4771]: I1011 10:41:00.437553 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:41:00.459204 master-1 kubenswrapper[4771]: I1011 10:41:00.459170 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="233d76fa-d8e2-41eb-9272-6cdd0056b793" Oct 11 10:41:00.459328 master-1 kubenswrapper[4771]: I1011 10:41:00.459316 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="233d76fa-d8e2-41eb-9272-6cdd0056b793" Oct 11 10:41:00.477904 master-1 kubenswrapper[4771]: I1011 10:41:00.477860 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:41:00.516601 master-1 kubenswrapper[4771]: I1011 10:41:00.516536 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:41:00.520186 master-1 kubenswrapper[4771]: I1011 10:41:00.520141 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:41:00.539027 master-1 kubenswrapper[4771]: I1011 10:41:00.538952 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:41:00.544485 master-1 kubenswrapper[4771]: I1011 10:41:00.544414 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:41:00.568078 master-1 kubenswrapper[4771]: W1011 10:41:00.568005 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d61efaa0f96869cf2939026aad6022.slice/crio-61d36eba775f4bff620711b60052065dee1d2c15d7f0fa86d827b5f7e6631d7d WatchSource:0}: Error finding container 61d36eba775f4bff620711b60052065dee1d2c15d7f0fa86d827b5f7e6631d7d: Status 404 returned error can't find the container with id 61d36eba775f4bff620711b60052065dee1d2c15d7f0fa86d827b5f7e6631d7d Oct 11 10:41:00.689878 master-0 kubenswrapper[4790]: I1011 10:41:00.689773 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-0"] Oct 11 10:41:00.690881 master-0 kubenswrapper[4790]: I1011 10:41:00.689990 4790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:41:00.834303 master-1 kubenswrapper[4771]: I1011 10:41:00.834153 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"61d36eba775f4bff620711b60052065dee1d2c15d7f0fa86d827b5f7e6631d7d"} Oct 11 10:41:01.138320 master-1 kubenswrapper[4771]: I1011 10:41:01.138183 4771 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-z898b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.51:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.51:8443: connect: connection refused" start-of-body= Oct 11 10:41:01.138320 master-1 kubenswrapper[4771]: I1011 10:41:01.138278 4771 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.129.0.51:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.51:8443: connect: connection refused" Oct 11 10:41:01.464412 master-2 kubenswrapper[4776]: I1011 10:41:01.464365 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:01.464973 master-2 kubenswrapper[4776]: I1011 10:41:01.464432 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:01.565329 master-2 kubenswrapper[4776]: I1011 10:41:01.565228 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:01.565731 master-2 kubenswrapper[4776]: E1011 10:41:01.565651 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: I1011 10:41:01.565728 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: E1011 10:41:01.565758 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: I1011 10:41:01.565773 4776 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.566039 master-2 kubenswrapper[4776]: I1011 10:41:01.566005 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.566079 master-2 kubenswrapper[4776]: I1011 10:41:01.566051 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.566902 master-2 kubenswrapper[4776]: I1011 10:41:01.566866 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.570603 master-2 kubenswrapper[4776]: I1011 10:41:01.570518 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:41:01.579278 master-2 kubenswrapper[4776]: I1011 10:41:01.579213 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:01.747859 master-2 kubenswrapper[4776]: I1011 10:41:01.747641 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.747859 master-2 kubenswrapper[4776]: I1011 10:41:01.747748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" 
Oct 11 10:41:01.748206 master-2 kubenswrapper[4776]: I1011 10:41:01.748097 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.845158 master-1 kubenswrapper[4771]: I1011 10:41:01.843987 4771 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="546001aeab4a76f01af18f5f0a0232cc48a20c2025802d7d9983eb8c840e0866" exitCode=0 Oct 11 10:41:01.845158 master-1 kubenswrapper[4771]: I1011 10:41:01.844062 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerDied","Data":"546001aeab4a76f01af18f5f0a0232cc48a20c2025802d7d9983eb8c840e0866"} Oct 11 10:41:01.850729 master-2 kubenswrapper[4776]: I1011 10:41:01.850626 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850794 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.851114 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.972913 master-2 kubenswrapper[4776]: I1011 10:41:01.972776 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:02.186194 master-2 kubenswrapper[4776]: I1011 10:41:02.186118 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: I1011 10:41:02.262602 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:41:02.262903 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:41:02.263654 master-1 kubenswrapper[4771]: I1011 10:41:02.263087 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:41:02.580293 master-1 kubenswrapper[4771]: I1011 10:41:02.580205 4771 patch_prober.go:28] interesting pod/console-775ff6c4fc-csp4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" start-of-body= Oct 11 10:41:02.580293 master-1 kubenswrapper[4771]: I1011 10:41:02.580289 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" probeResult="failure" output="Get \"https://10.129.0.73:8443/health\": dial tcp 10.129.0.73:8443: connect: connection refused" Oct 11 10:41:02.611136 master-2 kubenswrapper[4776]: I1011 10:41:02.611091 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:02.854164 master-1 kubenswrapper[4771]: I1011 10:41:02.854095 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"55ecf6fefa862d92619ce534057ad20c836371d13f4c0d70468214b0bd6e3db4"} Oct 11 10:41:02.854164 master-1 kubenswrapper[4771]: I1011 10:41:02.854157 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" 
event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"7e5a3711f36461fe4ced62a6738267cdf151c6f22d750936a4256bced2e89c2a"} Oct 11 10:41:02.854164 master-1 kubenswrapper[4771]: I1011 10:41:02.854175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"d035b13d9431b1216e273c4ac7fb5eb87624d8740b70d29326082336302e3b46"} Oct 11 10:41:03.181127 master-0 kubenswrapper[4790]: I1011 10:41:03.180990 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Oct 11 10:41:03.234344 master-0 kubenswrapper[4790]: I1011 10:41:03.234250 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:03.234344 master-0 kubenswrapper[4790]: I1011 10:41:03.234325 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:03.427210 master-1 kubenswrapper[4771]: I1011 10:41:03.427147 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468227 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468313 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468342 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468381 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468465 4771 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crdvt\" (UniqueName: \"kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.468533 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert\") pod \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\" (UID: \"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40\") " Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.470177 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.474100 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.474469 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:03.475449 master-1 kubenswrapper[4771]: I1011 10:41:03.474796 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:03.478244 master-1 kubenswrapper[4771]: I1011 10:41:03.478189 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt" (OuterVolumeSpecName: "kube-api-access-crdvt") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "kube-api-access-crdvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:41:03.478320 master-1 kubenswrapper[4771]: I1011 10:41:03.478269 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:41:03.480633 master-1 kubenswrapper[4771]: I1011 10:41:03.480593 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:41:03.482274 master-1 kubenswrapper[4771]: I1011 10:41:03.482204 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" (UID: "44bb7164-0bee-4f90-8bf6-a2d73e1f3d40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:41:03.490035 master-2 kubenswrapper[4776]: I1011 10:41:03.489971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerStarted","Data":"9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f"}
Oct 11 10:41:03.490035 master-2 kubenswrapper[4776]: I1011 10:41:03.490027 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerStarted","Data":"8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9"}
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570539 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570755 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570772 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-policies\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570784 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570795 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crdvt\" (UniqueName: \"kubernetes.io/projected/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-kube-api-access-crdvt\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570809 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570820 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570829 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.570820 master-1 kubenswrapper[4771]: I1011 10:41:03.570838 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:03.571454 master-1 kubenswrapper[4771]: I1011 10:41:03.570936 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d7647696-42d9-4dd9-bc3b-a4d52a42cf9a-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-bqdlc\" (UID: \"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:41:03.608818 master-2 kubenswrapper[4776]: I1011 10:41:03.608745 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-2" podStartSLOduration=2.608727266 podStartE2EDuration="2.608727266s" podCreationTimestamp="2025-10-11 10:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:03.608293634 +0000 UTC m=+898.392720363" watchObservedRunningTime="2025-10-11 10:41:03.608727266 +0000 UTC m=+898.393153975"
Oct 11 10:41:03.664741 master-1 kubenswrapper[4771]: I1011 10:41:03.664635 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:41:03.672641 master-1 kubenswrapper[4771]: I1011 10:41:03.672551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:41:03.672932 master-1 kubenswrapper[4771]: I1011 10:41:03.672896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-tpzsm\" (UID: \"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:41:03.737547 master-1 kubenswrapper[4771]: I1011 10:41:03.737261 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/readyz\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Oct 11 10:41:03.738227 master-1 kubenswrapper[4771]: I1011 10:41:03.737902 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 11 10:41:03.879619 master-1 kubenswrapper[4771]: I1011 10:41:03.879563 4771 generic.go:334] "Generic (PLEG): container finished" podID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerID="db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217" exitCode=0
Oct 11 10:41:03.880076 master-1 kubenswrapper[4771]: I1011 10:41:03.879666 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" event={"ID":"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40","Type":"ContainerDied","Data":"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"}
Oct 11 10:41:03.880076 master-1 kubenswrapper[4771]: I1011 10:41:03.879695 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b" event={"ID":"44bb7164-0bee-4f90-8bf6-a2d73e1f3d40","Type":"ContainerDied","Data":"ef282371271fc7902dfe16d939904e98053b587f042204eef235e27cd9b5b8b6"}
Oct 11 10:41:03.880076 master-1 kubenswrapper[4771]: I1011 10:41:03.879716 4771 scope.go:117] "RemoveContainer" containerID="db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"
Oct 11 10:41:03.880076 master-1 kubenswrapper[4771]: I1011 10:41:03.879839 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"
Oct 11 10:41:03.886210 master-1 kubenswrapper[4771]: I1011 10:41:03.886134 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"a15e7539d2a0c42e8c6c8995bf98ff26ca0f322daf83394df48b4f13fc42d10b"}
Oct 11 10:41:03.886210 master-1 kubenswrapper[4771]: I1011 10:41:03.886211 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"452189c1a156cff2357db3338f99f86d41c76ed0f97b4459672ad6a8fe0dc5c7"}
Oct 11 10:41:03.886826 master-1 kubenswrapper[4771]: I1011 10:41:03.886797 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:41:03.921880 master-1 kubenswrapper[4771]: I1011 10:41:03.913798 4771 scope.go:117] "RemoveContainer" containerID="099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61"
Oct 11 10:41:03.946640 master-1 kubenswrapper[4771]: I1011 10:41:03.946563 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=3.946543781 podStartE2EDuration="3.946543781s" podCreationTimestamp="2025-10-11 10:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:03.927159198 +0000 UTC m=+895.901385659" watchObservedRunningTime="2025-10-11 10:41:03.946543781 +0000 UTC m=+895.920770222"
Oct 11 10:41:03.949172 master-1 kubenswrapper[4771]: I1011 10:41:03.949125 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"]
Oct 11 10:41:03.952151 master-1 kubenswrapper[4771]: I1011 10:41:03.951926 4771 scope.go:117] "RemoveContainer" containerID="db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"
Oct 11 10:41:03.952620 master-1 kubenswrapper[4771]: E1011 10:41:03.952588 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217\": container with ID starting with db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217 not found: ID does not exist" containerID="db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"
Oct 11 10:41:03.952691 master-1 kubenswrapper[4771]: I1011 10:41:03.952629 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217"} err="failed to get container status \"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217\": rpc error: code = NotFound desc = could not find container \"db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217\": container with ID starting with db0964db198d321448b29e5ac3377a039eb46842494e887c86878f66ad14d217 not found: ID does not exist"
Oct 11 10:41:03.952691 master-1 kubenswrapper[4771]: I1011 10:41:03.952658 4771 scope.go:117] "RemoveContainer" containerID="099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61"
Oct 11 10:41:03.953408 master-1 kubenswrapper[4771]: E1011 10:41:03.953335 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61\": container with ID starting with 099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61 not found: ID does not exist" containerID="099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61"
Oct 11 10:41:03.953408 master-1 kubenswrapper[4771]: I1011 10:41:03.953387 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61"} err="failed to get container status \"099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61\": rpc error: code = NotFound desc = could not find container \"099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61\": container with ID starting with 099a8b3dcef0438896afc75fcd82f68fe99e85fb11c77c0389001ba13a5e3c61 not found: ID does not exist"
Oct 11 10:41:03.953995 master-1 kubenswrapper[4771]: I1011 10:41:03.953935 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b"]
Oct 11 10:41:03.974381 master-1 kubenswrapper[4771]: I1011 10:41:03.970511 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:41:04.197101 master-1 kubenswrapper[4771]: I1011 10:41:04.197029 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"]
Oct 11 10:41:04.203878 master-1 kubenswrapper[4771]: W1011 10:41:04.203821 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7647696_42d9_4dd9_bc3b_a4d52a42cf9a.slice/crio-427eb40cfa0882eb2802d3c999b5c0efc782d64ced889beb36b9629f8b535d5d WatchSource:0}: Error finding container 427eb40cfa0882eb2802d3c999b5c0efc782d64ced889beb36b9629f8b535d5d: Status 404 returned error can't find the container with id 427eb40cfa0882eb2802d3c999b5c0efc782d64ced889beb36b9629f8b535d5d
Oct 11 10:41:04.450491 master-1 kubenswrapper[4771]: I1011 10:41:04.450435 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" path="/var/lib/kubelet/pods/44bb7164-0bee-4f90-8bf6-a2d73e1f3d40/volumes"
Oct 11 10:41:04.451379 master-1 kubenswrapper[4771]: I1011 10:41:04.451331 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"]
Oct 11 10:41:04.893199 master-1 kubenswrapper[4771]: I1011 10:41:04.893126 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" event={"ID":"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b","Type":"ContainerStarted","Data":"b26edef884aff3f6672e1cf391fedcd8f44db05f9d32b8b278db114403a7ea30"}
Oct 11 10:41:04.893199 master-1 kubenswrapper[4771]: I1011 10:41:04.893175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" event={"ID":"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b","Type":"ContainerStarted","Data":"6872f9dd722eb9def4b0c8368459a22a1b55a6810993877af216d31bdec99d96"}
Oct 11 10:41:04.894176 master-1 kubenswrapper[4771]: I1011 10:41:04.894136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" event={"ID":"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a","Type":"ContainerStarted","Data":"427eb40cfa0882eb2802d3c999b5c0efc782d64ced889beb36b9629f8b535d5d"}
Oct 11 10:41:05.539492 master-1 kubenswrapper[4771]: I1011 10:41:05.539441 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:41:05.539492 master-1 kubenswrapper[4771]: I1011 10:41:05.539493 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: I1011 10:41:05.547076 4771 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]etcd ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:41:05.547170 master-1 kubenswrapper[4771]: livez check failed
Oct 11 10:41:05.548416 master-1 kubenswrapper[4771]: I1011 10:41:05.547224 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:41:06.464882 master-2 kubenswrapper[4776]: I1011 10:41:06.464811 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:41:06.465482 master-2 kubenswrapper[4776]: I1011 10:41:06.464906 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:41:07.251499 master-1 kubenswrapper[4771]: I1011 10:41:07.251421 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body=
Oct 11 10:41:07.252046 master-1 kubenswrapper[4771]: I1011 10:41:07.251513 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused"
Oct 11 10:41:07.924041 master-1 kubenswrapper[4771]: I1011 10:41:07.923964 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" event={"ID":"6d66210c-3e1b-4f1f-87b7-6dfe4dd5423b","Type":"ContainerStarted","Data":"90dd61f5266758da211b0e98508c1a1d319122d1e8ed03c42c48e7345067cef4"}
Oct 11 10:41:07.924431 master-1 kubenswrapper[4771]: I1011 10:41:07.924165 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm"
Oct 11 10:41:07.927480 master-1 kubenswrapper[4771]: I1011 10:41:07.927402 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" event={"ID":"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a","Type":"ContainerStarted","Data":"ee7362ce753dcdd54615e3606ada5492960b78f2a0918c80fdb33aa93c30dbd6"}
Oct 11 10:41:07.927943 master-1 kubenswrapper[4771]: I1011 10:41:07.927495 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" event={"ID":"d7647696-42d9-4dd9-bc3b-a4d52a42cf9a","Type":"ContainerStarted","Data":"f86aabc69abdb933f1234a37c50d6fc06722446595529fab70389a6a64dd2e2f"}
Oct 11 10:41:07.927943 master-1 kubenswrapper[4771]: I1011 10:41:07.927600 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc"
Oct 11 10:41:07.957138 master-1 kubenswrapper[4771]: I1011 10:41:07.957016 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" podStartSLOduration=741.193524109 podStartE2EDuration="12m22.956992471s" podCreationTimestamp="2025-10-11 10:28:45 +0000 UTC" firstStartedPulling="2025-10-11 10:41:04.848863518 +0000 UTC m=+896.823089959" lastFinishedPulling="2025-10-11 10:41:06.61233188 +0000 UTC m=+898.586558321" observedRunningTime="2025-10-11 10:41:07.952262453 +0000 UTC m=+899.926488974" watchObservedRunningTime="2025-10-11 10:41:07.956992471 +0000 UTC m=+899.931218952"
Oct 11 10:41:08.181960 master-0 kubenswrapper[4790]: I1011 10:41:08.181629 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Oct 11 10:41:08.234738 master-0 kubenswrapper[4790]: I1011 10:41:08.234530 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:41:08.234738 master-0 kubenswrapper[4790]: I1011 10:41:08.234724 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:41:08.242279 master-1 kubenswrapper[4771]: I1011 10:41:08.242166 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 11 10:41:08.264442 master-1 kubenswrapper[4771]: I1011 10:41:08.264291 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" podStartSLOduration=740.855158064 podStartE2EDuration="12m23.264257682s" podCreationTimestamp="2025-10-11 10:28:45 +0000 UTC" firstStartedPulling="2025-10-11 10:41:04.207120895 +0000 UTC m=+896.181347356" lastFinishedPulling="2025-10-11 10:41:06.616220523 +0000 UTC m=+898.590446974" observedRunningTime="2025-10-11 10:41:08.004424738 +0000 UTC m=+899.978651239" watchObservedRunningTime="2025-10-11 10:41:08.264257682 +0000 UTC m=+900.238484153"
Oct 11 10:41:08.387710 master-2 kubenswrapper[4776]: I1011 10:41:08.387501 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body=
Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.387709 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused"
Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.387815 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.389223 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} pod="openshift-kube-controller-manager/kube-controller-manager-master-2" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.389486 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" gracePeriod=30
Oct 11 10:41:09.182143 master-0 kubenswrapper[4790]: I1011 10:41:09.182035 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:41:09.183257 master-0 kubenswrapper[4790]: I1011 10:41:09.182158 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:41:09.669874 master-0 kubenswrapper[4790]: I1011 10:41:09.669806 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Oct 11 10:41:09.689769 master-1 kubenswrapper[4771]: I1011 10:41:09.689678 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-1"
Oct 11 10:41:09.714494 master-0 kubenswrapper[4790]: I1011 10:41:09.714447 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Oct 11 10:41:09.737825 master-1 kubenswrapper[4771]: I1011 10:41:09.737696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-1"
Oct 11 10:41:09.984792 master-1 kubenswrapper[4771]: I1011 10:41:09.984599 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-1"
Oct 11 10:41:10.028487 master-2 kubenswrapper[4776]: I1011 10:41:10.028432 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:41:10.036497 master-2 kubenswrapper[4776]: I1011 10:41:10.036439 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76f8bc4746-5jp5k"
Oct 11 10:41:10.082942 master-0 kubenswrapper[4790]: I1011 10:41:10.082879 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76f8bc4746-9rjdm"
Oct 11 10:41:10.087892 master-0 kubenswrapper[4790]: I1011 10:41:10.087841 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76f8bc4746-9rjdm"
Oct 11 10:41:10.167379 master-1 kubenswrapper[4771]: I1011 10:41:10.167261 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"]
Oct 11 10:41:10.364201 master-0 kubenswrapper[4790]: I1011 10:41:10.364054 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Oct 11 10:41:10.548473 master-1 kubenswrapper[4771]: I1011 10:41:10.548398 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:41:10.553091 master-1 kubenswrapper[4771]: I1011 10:41:10.553060 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:41:10.568924 master-1 kubenswrapper[4771]: I1011 10:41:10.568862 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"]
Oct 11 10:41:10.569686 master-1 kubenswrapper[4771]: E1011 10:41:10.569645 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="fix-audit-permissions"
Oct 11 10:41:10.569686 master-1 kubenswrapper[4771]: I1011 10:41:10.569677 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="fix-audit-permissions"
Oct 11 10:41:10.569804 master-1 kubenswrapper[4771]: E1011 10:41:10.569715 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver"
Oct 11 10:41:10.569804 master-1 kubenswrapper[4771]: I1011 10:41:10.569727 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver"
Oct 11 10:41:10.569953 master-1 kubenswrapper[4771]: I1011 10:41:10.569919 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bb7164-0bee-4f90-8bf6-a2d73e1f3d40" containerName="oauth-apiserver"
Oct 11 10:41:10.571305 master-1 kubenswrapper[4771]: I1011 10:41:10.571259 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.574311 master-1 kubenswrapper[4771]: I1011 10:41:10.574246 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 11 10:41:10.574782 master-1 kubenswrapper[4771]: I1011 10:41:10.574715 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.574808 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.574865 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.575099 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.575155 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.575285 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.575790 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 11 10:41:10.575115 master-1 kubenswrapper[4771]: I1011 10:41:10.576094 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 11 10:41:10.592702 master-1 kubenswrapper[4771]: I1011 10:41:10.592631 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.592887 master-1 kubenswrapper[4771]: I1011 10:41:10.592726 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.592887 master-1 kubenswrapper[4771]: I1011 10:41:10.592781 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjn45\" (UniqueName: \"kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.592887 master-1 kubenswrapper[4771]: I1011 10:41:10.592818 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.592887 master-1 kubenswrapper[4771]: I1011 10:41:10.592885 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.593138 master-1 kubenswrapper[4771]: I1011 10:41:10.592914 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.593138 master-1 kubenswrapper[4771]: I1011 10:41:10.592950 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.593138 master-1 kubenswrapper[4771]: I1011 10:41:10.592980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26"
Oct 11 10:41:10.596571 master-1 kubenswrapper[4771]: I1011 10:41:10.596506 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"]
Oct 11 10:41:10.694759 master-1
kubenswrapper[4771]: I1011 10:41:10.694558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.694759 master-1 kubenswrapper[4771]: I1011 10:41:10.694667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.694759 master-1 kubenswrapper[4771]: I1011 10:41:10.694727 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.694775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.694821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjn45\" (UniqueName: \"kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " 
pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.694854 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.694942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.694982 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695514 master-1 kubenswrapper[4771]: I1011 10:41:10.695076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695750 master-1 kubenswrapper[4771]: I1011 10:41:10.695615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca\") pod \"apiserver-656768b4df-g4p26\" (UID: 
\"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.695796 master-1 kubenswrapper[4771]: I1011 10:41:10.695732 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.696108 master-1 kubenswrapper[4771]: I1011 10:41:10.696044 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.699305 master-1 kubenswrapper[4771]: I1011 10:41:10.699259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.700280 master-1 kubenswrapper[4771]: I1011 10:41:10.700222 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.706812 master-1 kubenswrapper[4771]: I1011 10:41:10.706760 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config\") pod 
\"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.744203 master-1 kubenswrapper[4771]: I1011 10:41:10.744129 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjn45\" (UniqueName: \"kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45\") pod \"apiserver-656768b4df-g4p26\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:10.899203 master-1 kubenswrapper[4771]: I1011 10:41:10.898898 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:11.453792 master-1 kubenswrapper[4771]: I1011 10:41:11.453703 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"] Oct 11 10:41:11.464910 master-2 kubenswrapper[4776]: I1011 10:41:11.464811 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:11.464910 master-2 kubenswrapper[4776]: I1011 10:41:11.464897 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:11.472412 master-1 kubenswrapper[4771]: W1011 10:41:11.472334 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c0cf305_ba21_45c0_a092_05214809da68.slice/crio-9a8773a82720172e1c708c6b8b379786c06f2193ace376125888c909cd115b04 WatchSource:0}: Error finding container 9a8773a82720172e1c708c6b8b379786c06f2193ace376125888c909cd115b04: Status 404 returned error can't find the container with id 9a8773a82720172e1c708c6b8b379786c06f2193ace376125888c909cd115b04 Oct 11 10:41:11.973623 master-1 kubenswrapper[4771]: I1011 10:41:11.973456 4771 generic.go:334] "Generic (PLEG): container finished" podID="4c0cf305-ba21-45c0-a092-05214809da68" containerID="0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22" exitCode=0 Oct 11 10:41:11.973623 master-1 kubenswrapper[4771]: I1011 10:41:11.973564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" event={"ID":"4c0cf305-ba21-45c0-a092-05214809da68","Type":"ContainerDied","Data":"0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22"} Oct 11 10:41:11.974149 master-1 kubenswrapper[4771]: I1011 10:41:11.973641 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" event={"ID":"4c0cf305-ba21-45c0-a092-05214809da68","Type":"ContainerStarted","Data":"9a8773a82720172e1c708c6b8b379786c06f2193ace376125888c909cd115b04"} Oct 11 10:41:12.251411 master-1 kubenswrapper[4771]: I1011 10:41:12.251314 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:12.251595 master-1 kubenswrapper[4771]: I1011 10:41:12.251423 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:12.983092 master-1 kubenswrapper[4771]: I1011 10:41:12.982960 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" event={"ID":"4c0cf305-ba21-45c0-a092-05214809da68","Type":"ContainerStarted","Data":"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948"} Oct 11 10:41:13.033975 master-1 kubenswrapper[4771]: I1011 10:41:13.033865 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podStartSLOduration=62.033836215 podStartE2EDuration="1m2.033836215s" podCreationTimestamp="2025-10-11 10:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:13.028280354 +0000 UTC m=+905.002506815" watchObservedRunningTime="2025-10-11 10:41:13.033836215 +0000 UTC m=+905.008062676" Oct 11 10:41:13.236672 master-0 kubenswrapper[4790]: I1011 10:41:13.235983 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:13.236672 master-0 kubenswrapper[4790]: I1011 10:41:13.236112 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:13.669731 master-1 kubenswrapper[4771]: I1011 10:41:13.669652 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc" Oct 11 10:41:13.976451 master-1 kubenswrapper[4771]: I1011 10:41:13.976272 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm" Oct 11 10:41:15.335659 master-2 kubenswrapper[4776]: I1011 10:41:15.335588 4776 scope.go:117] "RemoveContainer" containerID="bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" Oct 11 10:41:15.900510 master-1 kubenswrapper[4771]: I1011 10:41:15.900423 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:15.901389 master-1 kubenswrapper[4771]: I1011 10:41:15.900542 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:15.912103 master-1 kubenswrapper[4771]: I1011 10:41:15.912038 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:16.014836 master-1 kubenswrapper[4771]: I1011 10:41:16.014744 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:16.465267 master-2 kubenswrapper[4776]: I1011 10:41:16.465167 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:16.466106 master-2 kubenswrapper[4776]: I1011 10:41:16.465277 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" 
containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:17.251262 master-1 kubenswrapper[4771]: I1011 10:41:17.251130 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:17.251262 master-1 kubenswrapper[4771]: I1011 10:41:17.251234 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:18.236951 master-0 kubenswrapper[4790]: I1011 10:41:18.236827 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:18.237605 master-0 kubenswrapper[4790]: I1011 10:41:18.237001 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:19.182781 master-0 kubenswrapper[4790]: I1011 10:41:19.182577 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:19.182781 master-0 kubenswrapper[4790]: I1011 10:41:19.182772 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:20.545126 master-1 kubenswrapper[4771]: I1011 10:41:20.545070 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:41:21.107714 master-2 kubenswrapper[4776]: I1011 10:41:21.107640 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:21.464923 master-2 kubenswrapper[4776]: I1011 10:41:21.464847 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:21.464923 master-2 kubenswrapper[4776]: I1011 10:41:21.464912 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:22.250825 master-1 kubenswrapper[4771]: I1011 10:41:22.250755 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" 
start-of-body= Oct 11 10:41:22.251767 master-1 kubenswrapper[4771]: I1011 10:41:22.251662 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:23.238668 master-0 kubenswrapper[4790]: I1011 10:41:23.238542 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:23.238668 master-0 kubenswrapper[4790]: I1011 10:41:23.238639 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:26.464599 master-2 kubenswrapper[4776]: I1011 10:41:26.464520 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:26.465356 master-2 kubenswrapper[4776]: I1011 10:41:26.464626 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:27.250987 
master-1 kubenswrapper[4771]: I1011 10:41:27.250902 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:27.250987 master-1 kubenswrapper[4771]: I1011 10:41:27.250981 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:28.239779 master-0 kubenswrapper[4790]: I1011 10:41:28.239627 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:28.240975 master-0 kubenswrapper[4790]: I1011 10:41:28.239795 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:29.182931 master-0 kubenswrapper[4790]: I1011 10:41:29.182838 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:41:29.183466 master-0 kubenswrapper[4790]: I1011 10:41:29.182978 4790 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:41:29.799109 master-0 kubenswrapper[4790]: I1011 10:41:29.799021 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-0" Oct 11 10:41:30.181444 master-0 kubenswrapper[4790]: I1011 10:41:30.181206 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Oct 11 10:41:31.529865 master-2 kubenswrapper[4776]: I1011 10:41:31.529779 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:41:32.251007 master-1 kubenswrapper[4771]: I1011 10:41:32.250908 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:32.251848 master-1 kubenswrapper[4771]: I1011 10:41:32.251031 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:35.203924 master-1 kubenswrapper[4771]: I1011 10:41:35.203830 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-775ff6c4fc-csp4z" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console" 
containerID="cri-o://10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5" gracePeriod=15 Oct 11 10:41:35.675005 master-2 kubenswrapper[4776]: I1011 10:41:35.674913 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:35.676004 master-2 kubenswrapper[4776]: I1011 10:41:35.675970 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: E1011 10:41:35.676198 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: I1011 10:41:35.676216 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: E1011 10:41:35.676230 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676236 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: E1011 10:41:35.676246 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676252 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: E1011 10:41:35.676262 4776 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676268 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676392 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676404 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676417 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676426 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller"
Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676436 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer"
Oct 11 10:41:35.676957 master-2 kubenswrapper[4776]: E1011 10:41:35.676532 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.676957 master-2 kubenswrapper[4776]: I1011 10:41:35.676541 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager"
Oct 11 10:41:35.768142 master-1 kubenswrapper[4771]: I1011 10:41:35.768109 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-775ff6c4fc-csp4z_de7aa64b-afab-4b3a-b56d-81c324e7a8cb/console/0.log"
Oct 11 10:41:35.768257 master-1 kubenswrapper[4771]: I1011 10:41:35.768194 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-775ff6c4fc-csp4z"
Oct 11 10:41:35.818769 master-1 kubenswrapper[4771]: I1011 10:41:35.815158 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"]
Oct 11 10:41:35.818769 master-1 kubenswrapper[4771]: E1011 10:41:35.815569 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console"
Oct 11 10:41:35.818769 master-1 kubenswrapper[4771]: I1011 10:41:35.815594 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console"
Oct 11 10:41:35.818769 master-1 kubenswrapper[4771]: I1011 10:41:35.815809 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerName="console"
Oct 11 10:41:35.818769 master-1 kubenswrapper[4771]: I1011 10:41:35.816534 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.843700 master-1 kubenswrapper[4771]: I1011 10:41:35.839562 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"]
Oct 11 10:41:35.859160 master-2 kubenswrapper[4776]: I1011 10:41:35.859080 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:35.859428 master-2 kubenswrapper[4776]: I1011 10:41:35.859262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:35.900856 master-1 kubenswrapper[4771]: I1011 10:41:35.900777 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmzlm\" (UniqueName: \"kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901107 master-1 kubenswrapper[4771]: I1011 10:41:35.900942 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901107 master-1 kubenswrapper[4771]: I1011 10:41:35.900984 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901107 master-1 kubenswrapper[4771]: I1011 10:41:35.901011 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901107 master-1 kubenswrapper[4771]: I1011 10:41:35.901060 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901107 master-1 kubenswrapper[4771]: I1011 10:41:35.901090 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert\") pod \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\" (UID: \"de7aa64b-afab-4b3a-b56d-81c324e7a8cb\") "
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901416 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901490 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpp7h\" (UniqueName: \"kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901523 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.901580 master-1 kubenswrapper[4771]: I1011 10:41:35.901557 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.903190 master-1 kubenswrapper[4771]: I1011 10:41:35.901596 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.903190 master-1 kubenswrapper[4771]: I1011 10:41:35.901636 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:35.903190 master-1 kubenswrapper[4771]: I1011 10:41:35.902493 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:35.903190 master-1 kubenswrapper[4771]: I1011 10:41:35.902798 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca" (OuterVolumeSpecName: "service-ca") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:35.903190 master-1 kubenswrapper[4771]: I1011 10:41:35.902812 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:35.904256 master-1 kubenswrapper[4771]: I1011 10:41:35.904115 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:41:35.906215 master-1 kubenswrapper[4771]: I1011 10:41:35.906153 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm" (OuterVolumeSpecName: "kube-api-access-mmzlm") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "kube-api-access-mmzlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:41:35.906698 master-1 kubenswrapper[4771]: I1011 10:41:35.906604 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config" (OuterVolumeSpecName: "console-config") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:41:35.906879 master-1 kubenswrapper[4771]: I1011 10:41:35.906709 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "de7aa64b-afab-4b3a-b56d-81c324e7a8cb" (UID: "de7aa64b-afab-4b3a-b56d-81c324e7a8cb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960591 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:35.960958 master-2 kubenswrapper[4776]: I1011 10:41:35.960564 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpp7h\" (UniqueName: \"kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003703 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003726 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003810 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003875 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003891 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.003901 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-oauth-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.004056 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-oauth-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.005987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006142 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006153 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-service-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006232 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-console-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006259 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmzlm\" (UniqueName: \"kubernetes.io/projected/de7aa64b-afab-4b3a-b56d-81c324e7a8cb-kube-api-access-mmzlm\") on node \"master-1\" DevicePath \"\""
Oct 11 10:41:36.008124 master-1 kubenswrapper[4771]: I1011 10:41:36.006577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.010674 master-1 kubenswrapper[4771]: I1011 10:41:36.010615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.010948 master-1 kubenswrapper[4771]: I1011 10:41:36.010875 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.040817 master-1 kubenswrapper[4771]: I1011 10:41:36.040713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpp7h\" (UniqueName: \"kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h\") pod \"console-5b846b7bb4-xmv6l\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") " pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.065858 master-2 kubenswrapper[4776]: I1011 10:41:36.065811 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1"
Oct 11 10:41:36.145450 master-1 kubenswrapper[4771]: I1011 10:41:36.145335 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:41:36.187872 master-1 kubenswrapper[4771]: I1011 10:41:36.187799 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-775ff6c4fc-csp4z_de7aa64b-afab-4b3a-b56d-81c324e7a8cb/console/0.log"
Oct 11 10:41:36.188041 master-1 kubenswrapper[4771]: I1011 10:41:36.187887 4771 generic.go:334] "Generic (PLEG): container finished" podID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" containerID="10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5" exitCode=2
Oct 11 10:41:36.188041 master-1 kubenswrapper[4771]: I1011 10:41:36.187934 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-775ff6c4fc-csp4z" event={"ID":"de7aa64b-afab-4b3a-b56d-81c324e7a8cb","Type":"ContainerDied","Data":"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"}
Oct 11 10:41:36.188041 master-1 kubenswrapper[4771]: I1011 10:41:36.187984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-775ff6c4fc-csp4z" event={"ID":"de7aa64b-afab-4b3a-b56d-81c324e7a8cb","Type":"ContainerDied","Data":"dbb5133e318020821233bd4743645ca9f974f8d4348733f58f43c17203dfa102"}
Oct 11 10:41:36.188041 master-1 kubenswrapper[4771]: I1011 10:41:36.187997 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-775ff6c4fc-csp4z"
Oct 11 10:41:36.188491 master-1 kubenswrapper[4771]: I1011 10:41:36.188015 4771 scope.go:117] "RemoveContainer" containerID="10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"
Oct 11 10:41:36.220411 master-1 kubenswrapper[4771]: I1011 10:41:36.219768 4771 scope.go:117] "RemoveContainer" containerID="10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"
Oct 11 10:41:36.221064 master-1 kubenswrapper[4771]: E1011 10:41:36.220969 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5\": container with ID starting with 10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5 not found: ID does not exist" containerID="10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"
Oct 11 10:41:36.221064 master-1 kubenswrapper[4771]: I1011 10:41:36.221004 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5"} err="failed to get container status \"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5\": rpc error: code = NotFound desc = could not find container \"10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5\": container with ID starting with 10775f41088b2fc502cb2185cba3870f7995bc3ef1e5d846bebb3b393b7337f5 not found: ID does not exist"
Oct 11 10:41:36.248066 master-1 kubenswrapper[4771]: I1011 10:41:36.248004 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"]
Oct 11 10:41:36.261260 master-1 kubenswrapper[4771]: I1011 10:41:36.261148 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-775ff6c4fc-csp4z"]
Oct 11 10:41:36.458576 master-1 kubenswrapper[4771]: I1011 10:41:36.458494 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7aa64b-afab-4b3a-b56d-81c324e7a8cb" path="/var/lib/kubelet/pods/de7aa64b-afab-4b3a-b56d-81c324e7a8cb/volumes"
Oct 11 10:41:36.659720 master-1 kubenswrapper[4771]: I1011 10:41:36.659674 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"]
Oct 11 10:41:36.670102 master-1 kubenswrapper[4771]: W1011 10:41:36.670034 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65b0165_5747_48c9_9179_86f19861dd68.slice/crio-2e652727629cf31e7de5014abdf61de5e97f13fd0cbfe170fa06452ef6ed0070 WatchSource:0}: Error finding container 2e652727629cf31e7de5014abdf61de5e97f13fd0cbfe170fa06452ef6ed0070: Status 404 returned error can't find the container with id 2e652727629cf31e7de5014abdf61de5e97f13fd0cbfe170fa06452ef6ed0070
Oct 11 10:41:36.733402 master-2 kubenswrapper[4776]: I1011 10:41:36.732998 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerID="9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f" exitCode=0
Oct 11 10:41:36.734184 master-2 kubenswrapper[4776]: I1011 10:41:36.734141 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerDied","Data":"9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f"}
Oct 11 10:41:37.199468 master-1 kubenswrapper[4771]: I1011 10:41:37.199328 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-xmv6l" event={"ID":"a65b0165-5747-48c9-9179-86f19861dd68","Type":"ContainerStarted","Data":"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"}
Oct 11 10:41:37.200080 master-1 kubenswrapper[4771]: I1011 10:41:37.199491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-xmv6l" event={"ID":"a65b0165-5747-48c9-9179-86f19861dd68","Type":"ContainerStarted","Data":"2e652727629cf31e7de5014abdf61de5e97f13fd0cbfe170fa06452ef6ed0070"}
Oct 11 10:41:37.252459 master-1 kubenswrapper[4771]: I1011 10:41:37.252389 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body=
Oct 11 10:41:37.253333 master-1 kubenswrapper[4771]: I1011 10:41:37.253279 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused"
Oct 11 10:41:38.098530 master-2 kubenswrapper[4776]: I1011 10:41:38.098475 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2"
Oct 11 10:41:38.195976 master-0 kubenswrapper[4790]: I1011 10:41:38.195905 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Oct 11 10:41:38.209574 master-0 kubenswrapper[4790]: I1011 10:41:38.209524 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Oct 11 10:41:38.297269 master-2 kubenswrapper[4776]: I1011 10:41:38.297163 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") "
Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297317 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") "
Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297457 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") "
Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297457 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:41:38.297858 master-2 kubenswrapper[4776]: I1011 10:41:38.297717 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:41:38.298015 master-2 kubenswrapper[4776]: I1011 10:41:38.297986 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:41:38.298064 master-2 kubenswrapper[4776]: I1011 10:41:38.298014 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") on node \"master-2\" DevicePath \"\""
Oct 11 10:41:38.378881 master-2 kubenswrapper[4776]: I1011 10:41:38.378831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:41:38.403527 master-2 kubenswrapper[4776]: I1011 10:41:38.400038 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") on node \"master-2\" DevicePath \"\""
Oct 11 10:41:38.745907 master-2 kubenswrapper[4776]: I1011 10:41:38.745859 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log"
Oct 11 10:41:38.747764 master-2 kubenswrapper[4776]: I1011 10:41:38.747710 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/0.log"
Oct 11 10:41:38.747764 master-2 kubenswrapper[4776]: I1011 10:41:38.747748 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" exitCode=137
Oct 11 10:41:38.747863 master-2 kubenswrapper[4776]: I1011 10:41:38.747809 4776 scope.go:117] "RemoveContainer" containerID="f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"
Oct 11 10:41:38.749769 master-2 kubenswrapper[4776]: I1011 10:41:38.749738 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerDied","Data":"8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9"}
Oct 11 10:41:38.749769 master-2 kubenswrapper[4776]: I1011 10:41:38.749765 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9"
Oct 11 10:41:38.749885 master-2 kubenswrapper[4776]: I1011 10:41:38.749785 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2"
Oct 11 10:41:39.760664 master-2 kubenswrapper[4776]: I1011 10:41:39.760555 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log"
Oct 11 10:41:39.762338 master-2 kubenswrapper[4776]: I1011 10:41:39.762265 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" containerID="cri-o://8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" gracePeriod=30
Oct 11 10:41:39.762894 master-2 kubenswrapper[4776]: I1011 10:41:39.762391 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" gracePeriod=30
Oct 11 10:41:39.762894 master-2 kubenswrapper[4776]: I1011 10:41:39.762404 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" gracePeriod=30
Oct 11 10:41:39.763121 master-2 kubenswrapper[4776]: I1011 10:41:39.762428 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" gracePeriod=30
Oct 11 10:41:39.786288 master-2 kubenswrapper[4776]: I1011 10:41:39.773085 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1"
Oct 11 10:41:39.949808 master-2 kubenswrapper[4776]: I1011 10:41:39.949749 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/2.log"
Oct 11 10:41:39.950554 master-2 kubenswrapper[4776]: I1011 10:41:39.950524 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log"
Oct 11 10:41:39.951504 master-2 kubenswrapper[4776]: I1011 10:41:39.951479 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager-cert-syncer/0.log"
Oct 11 10:41:39.952235 master-2 kubenswrapper[4776]: I1011 10:41:39.952196 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Oct 11 10:41:39.957580 master-2 kubenswrapper[4776]: I1011 10:41:39.957522 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1"
Oct 11 10:41:40.022884 master-2 kubenswrapper[4776]: I1011 10:41:40.022747 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"2dd82f838b5636582534da82a3996ea6\" (UID: \"2dd82f838b5636582534da82a3996ea6\") "
Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.022866 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2dd82f838b5636582534da82a3996ea6" (UID: "2dd82f838b5636582534da82a3996ea6"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.022936 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"2dd82f838b5636582534da82a3996ea6\" (UID: \"2dd82f838b5636582534da82a3996ea6\") "
Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.023015 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2dd82f838b5636582534da82a3996ea6" (UID: "2dd82f838b5636582534da82a3996ea6"). InnerVolumeSpecName "cert-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:41:40.023511 master-2 kubenswrapper[4776]: I1011 10:41:40.023473 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:40.023511 master-2 kubenswrapper[4776]: I1011 10:41:40.023507 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:40.065547 master-2 kubenswrapper[4776]: I1011 10:41:40.065472 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd82f838b5636582534da82a3996ea6" path="/var/lib/kubelet/pods/2dd82f838b5636582534da82a3996ea6/volumes" Oct 11 10:41:40.770448 master-2 kubenswrapper[4776]: I1011 10:41:40.770406 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/2.log" Oct 11 10:41:40.771088 master-2 kubenswrapper[4776]: I1011 10:41:40.771061 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log" Oct 11 10:41:40.772260 master-2 kubenswrapper[4776]: I1011 10:41:40.772240 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager-cert-syncer/0.log" Oct 11 10:41:40.772661 master-2 kubenswrapper[4776]: I1011 10:41:40.772638 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" exitCode=2 Oct 11 10:41:40.772758 master-2 kubenswrapper[4776]: 
I1011 10:41:40.772744 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" exitCode=0 Oct 11 10:41:40.772828 master-2 kubenswrapper[4776]: I1011 10:41:40.772717 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.772872 master-2 kubenswrapper[4776]: I1011 10:41:40.772706 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:40.772914 master-2 kubenswrapper[4776]: I1011 10:41:40.772819 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" exitCode=2 Oct 11 10:41:40.772985 master-2 kubenswrapper[4776]: I1011 10:41:40.772963 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" exitCode=0 Oct 11 10:41:40.779570 master-2 kubenswrapper[4776]: I1011 10:41:40.779533 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1" Oct 11 10:41:40.788083 master-2 kubenswrapper[4776]: I1011 10:41:40.788068 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.806496 master-2 kubenswrapper[4776]: I1011 10:41:40.806282 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.823516 master-2 kubenswrapper[4776]: I1011 10:41:40.823493 4776 scope.go:117] "RemoveContainer" 
containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.841052 master-2 kubenswrapper[4776]: I1011 10:41:40.841005 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.859950 master-2 kubenswrapper[4776]: I1011 10:41:40.859905 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.860395 master-2 kubenswrapper[4776]: E1011 10:41:40.860354 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.860451 master-2 kubenswrapper[4776]: I1011 10:41:40.860400 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.860451 master-2 kubenswrapper[4776]: I1011 10:41:40.860430 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.861000 master-2 kubenswrapper[4776]: E1011 10:41:40.860971 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.861070 master-2 kubenswrapper[4776]: I1011 10:41:40.861005 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.861070 master-2 kubenswrapper[4776]: I1011 10:41:40.861026 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.861392 master-2 kubenswrapper[4776]: E1011 10:41:40.861365 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.861495 master-2 kubenswrapper[4776]: I1011 10:41:40.861460 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container 
\"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.862240 master-2 kubenswrapper[4776]: I1011 10:41:40.862224 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.862955 master-2 kubenswrapper[4776]: E1011 10:41:40.862935 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.863044 master-2 kubenswrapper[4776]: I1011 10:41:40.863028 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.863127 master-2 kubenswrapper[4776]: I1011 10:41:40.863111 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.863552 master-2 kubenswrapper[4776]: E1011 10:41:40.863525 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 
8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.863613 master-2 kubenswrapper[4776]: I1011 10:41:40.863555 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.863613 master-2 kubenswrapper[4776]: I1011 10:41:40.863571 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.863892 master-2 kubenswrapper[4776]: I1011 10:41:40.863873 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.863978 master-2 kubenswrapper[4776]: I1011 10:41:40.863965 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.864305 master-2 kubenswrapper[4776]: I1011 10:41:40.864280 4776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.864305 master-2 kubenswrapper[4776]: I1011 10:41:40.864297 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.864571 master-2 kubenswrapper[4776]: I1011 10:41:40.864530 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.864571 master-2 kubenswrapper[4776]: I1011 10:41:40.864564 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.864908 master-2 kubenswrapper[4776]: I1011 10:41:40.864880 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 
02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.864908 master-2 kubenswrapper[4776]: I1011 10:41:40.864903 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.865259 master-2 kubenswrapper[4776]: I1011 10:41:40.865233 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.865338 master-2 kubenswrapper[4776]: I1011 10:41:40.865326 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.865710 master-2 kubenswrapper[4776]: I1011 10:41:40.865653 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.865710 master-2 kubenswrapper[4776]: I1011 10:41:40.865704 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.866038 master-2 kubenswrapper[4776]: I1011 10:41:40.866009 4776 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.866038 master-2 kubenswrapper[4776]: I1011 10:41:40.866029 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.866445 master-2 kubenswrapper[4776]: I1011 10:41:40.866424 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.866445 master-2 kubenswrapper[4776]: I1011 10:41:40.866441 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.866723 master-2 kubenswrapper[4776]: I1011 10:41:40.866704 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 
02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.866793 master-2 kubenswrapper[4776]: I1011 10:41:40.866780 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.867129 master-2 kubenswrapper[4776]: I1011 10:41:40.867078 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.867129 master-2 kubenswrapper[4776]: I1011 10:41:40.867107 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.867553 master-2 kubenswrapper[4776]: I1011 10:41:40.867510 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.867553 master-2 kubenswrapper[4776]: I1011 10:41:40.867531 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.867747 master-2 kubenswrapper[4776]: I1011 10:41:40.867717 4776 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.867747 master-2 kubenswrapper[4776]: I1011 10:41:40.867735 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.868081 master-2 kubenswrapper[4776]: I1011 10:41:40.868041 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.868081 master-2 kubenswrapper[4776]: I1011 10:41:40.868068 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.868308 master-2 kubenswrapper[4776]: I1011 10:41:40.868288 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 
02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.868524 master-2 kubenswrapper[4776]: I1011 10:41:40.868512 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.868863 master-2 kubenswrapper[4776]: I1011 10:41:40.868822 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:41.465252 master-2 kubenswrapper[4776]: I1011 10:41:41.465171 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:41.465595 master-2 kubenswrapper[4776]: I1011 10:41:41.465287 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:42.252972 master-1 kubenswrapper[4771]: I1011 10:41:42.252855 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": 
dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:42.252972 master-1 kubenswrapper[4771]: I1011 10:41:42.252946 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:42.581086 master-1 kubenswrapper[4771]: I1011 10:41:42.580960 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b846b7bb4-xmv6l" podStartSLOduration=21.580927478 podStartE2EDuration="21.580927478s" podCreationTimestamp="2025-10-11 10:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:37.229689597 +0000 UTC m=+929.203916128" watchObservedRunningTime="2025-10-11 10:41:42.580927478 +0000 UTC m=+934.555153929" Oct 11 10:41:42.581884 master-1 kubenswrapper[4771]: I1011 10:41:42.581846 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"] Oct 11 10:41:42.582313 master-1 kubenswrapper[4771]: I1011 10:41:42.582262 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" containerID="cri-o://0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948" gracePeriod=120 Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: I1011 10:41:45.908466 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:41:45.908586 
master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:41:45.908586 master-1 kubenswrapper[4771]: I1011 10:41:45.908577 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:41:46.146172 master-1 kubenswrapper[4771]: I1011 10:41:46.146078 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b846b7bb4-xmv6l" Oct 11 10:41:46.146172 master-1 kubenswrapper[4771]: I1011 10:41:46.146162 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b846b7bb4-xmv6l" Oct 11 10:41:46.154009 master-1 kubenswrapper[4771]: I1011 10:41:46.153954 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b846b7bb4-xmv6l" Oct 11 10:41:46.158059 master-2 kubenswrapper[4776]: I1011 10:41:46.157921 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" containerID="cri-o://ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" gracePeriod=15 Oct 11 10:41:46.272194 master-1 kubenswrapper[4771]: I1011 10:41:46.272011 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b846b7bb4-xmv6l" Oct 11 10:41:46.394085 master-0 kubenswrapper[4790]: I1011 10:41:46.393968 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"] Oct 11 10:41:46.464709 master-2 kubenswrapper[4776]: I1011 10:41:46.464398 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:46.464709 master-2 kubenswrapper[4776]: I1011 10:41:46.464462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:46.700087 master-2 kubenswrapper[4776]: I1011 10:41:46.699982 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-5jp5k_9c72970e-d35b-4f28-8291-e3ed3683c59c/console/0.log" Oct 11 10:41:46.700087 master-2 kubenswrapper[4776]: I1011 10:41:46.700048 4776 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:46.757904 master-2 kubenswrapper[4776]: I1011 10:41:46.757696 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:46.758124 master-2 kubenswrapper[4776]: E1011 10:41:46.758094 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758124 master-2 kubenswrapper[4776]: I1011 10:41:46.758111 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: E1011 10:41:46.758131 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: I1011 10:41:46.758139 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: E1011 10:41:46.758150 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: I1011 10:41:46.758159 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758398 master-2 kubenswrapper[4776]: I1011 10:41:46.758370 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758398 master-2 kubenswrapper[4776]: I1011 10:41:46.758396 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758465 master-2 kubenswrapper[4776]: 
I1011 10:41:46.758410 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.759060 master-2 kubenswrapper[4776]: I1011 10:41:46.759029 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.827101 master-2 kubenswrapper[4776]: I1011 10:41:46.827037 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-5jp5k_9c72970e-d35b-4f28-8291-e3ed3683c59c/console/0.log" Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827192 4776 generic.go:334] "Generic (PLEG): container finished" podID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" exitCode=2 Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827219 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827254 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerDied","Data":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerDied","Data":"3717643475eebdbec50aa27932ca525c2e2f047c2a23862ba4394759fc5478d9"} Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827357 4776 scope.go:117] "RemoveContainer" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.828541 master-2 kubenswrapper[4776]: I1011 10:41:46.828472 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.828541 master-2 kubenswrapper[4776]: I1011 10:41:46.828538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828624 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828665 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828723 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828749 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828816 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829048 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829163 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829196 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829220 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829264 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829363 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config" (OuterVolumeSpecName: "console-config") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829496 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.831260 master-2 kubenswrapper[4776]: I1011 10:41:46.831118 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca" (OuterVolumeSpecName: "service-ca") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.832169 master-2 kubenswrapper[4776]: I1011 10:41:46.832106 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:41:46.833709 master-2 kubenswrapper[4776]: I1011 10:41:46.833598 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86" (OuterVolumeSpecName: "kube-api-access-7nf86") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "kube-api-access-7nf86". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:41:46.834392 master-2 kubenswrapper[4776]: I1011 10:41:46.834347 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:41:46.892693 master-2 kubenswrapper[4776]: I1011 10:41:46.892607 4776 scope.go:117] "RemoveContainer" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.893586 master-2 kubenswrapper[4776]: E1011 10:41:46.893508 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": container with ID starting with ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f not found: ID does not exist" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.893656 master-2 kubenswrapper[4776]: I1011 10:41:46.893603 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} err="failed to get container status 
\"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": rpc error: code = NotFound desc = could not find container \"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": container with ID starting with ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f not found: ID does not exist" Oct 11 10:41:46.930950 master-2 kubenswrapper[4776]: I1011 10:41:46.930878 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.930950 master-2 kubenswrapper[4776]: I1011 10:41:46.930949 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.930978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931009 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: 
I1011 10:41:46.931027 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931067 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931174 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931186 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931197 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931212 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931225 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931236 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931248 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931934 master-2 kubenswrapper[4776]: I1011 10:41:46.931879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.932909 master-2 kubenswrapper[4776]: I1011 10:41:46.932844 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: 
\"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.933287 master-2 kubenswrapper[4776]: I1011 10:41:46.933249 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.934624 master-2 kubenswrapper[4776]: I1011 10:41:46.934547 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.934778 master-2 kubenswrapper[4776]: I1011 10:41:46.934741 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.938941 master-2 kubenswrapper[4776]: I1011 10:41:46.938886 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.959724 master-2 kubenswrapper[4776]: I1011 10:41:46.959544 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod 
\"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:47.016137 master-2 kubenswrapper[4776]: I1011 10:41:47.016056 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:47.058093 master-2 kubenswrapper[4776]: I1011 10:41:47.057983 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.078267 master-2 kubenswrapper[4776]: I1011 10:41:47.077790 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:47.083292 master-2 kubenswrapper[4776]: I1011 10:41:47.083246 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="bbf811db-ddc6-4cfb-9181-057546f4c7bd" Oct 11 10:41:47.083292 master-2 kubenswrapper[4776]: I1011 10:41:47.083281 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="bbf811db-ddc6-4cfb-9181-057546f4c7bd" Oct 11 10:41:47.250932 master-1 kubenswrapper[4771]: I1011 10:41:47.250813 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:47.250932 master-1 kubenswrapper[4771]: I1011 10:41:47.250903 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 
10.129.0.53:8443: connect: connection refused" Oct 11 10:41:47.274998 master-2 kubenswrapper[4776]: I1011 10:41:47.274906 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.390287 master-2 kubenswrapper[4776]: I1011 10:41:47.390238 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.416250 master-2 kubenswrapper[4776]: I1011 10:41:47.416208 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:47.433125 master-2 kubenswrapper[4776]: I1011 10:41:47.433031 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.492410 master-2 kubenswrapper[4776]: I1011 10:41:47.492263 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:47.519654 master-2 kubenswrapper[4776]: I1011 10:41:47.518052 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.530392 master-2 kubenswrapper[4776]: W1011 10:41:47.530334 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0a2e987_f2d6_410a_966a_bd82ab791c00.slice/crio-43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa WatchSource:0}: Error finding container 43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa: Status 404 returned error can't find the container with id 43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa Oct 11 10:41:47.531343 master-2 kubenswrapper[4776]: I1011 10:41:47.531318 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:47.542154 master-2 kubenswrapper[4776]: I1011 10:41:47.542120 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.546262 master-2 kubenswrapper[4776]: W1011 10:41:47.546229 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a316e4240b2f9bcd91a14c93331da1.slice/crio-e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002 WatchSource:0}: Error finding container e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002: Status 404 returned error can't find the container with id e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002 Oct 11 10:41:47.840366 master-2 kubenswrapper[4776]: I1011 10:41:47.840309 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"9189b059d2886230b84e1fc6455d591be19246958f0bbf9d6b5c50b947a7be8d"} Oct 11 10:41:47.840614 master-2 kubenswrapper[4776]: I1011 
10:41:47.840368 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002"} Oct 11 10:41:47.843583 master-2 kubenswrapper[4776]: I1011 10:41:47.843550 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerStarted","Data":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"} Oct 11 10:41:47.843709 master-2 kubenswrapper[4776]: I1011 10:41:47.843695 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerStarted","Data":"43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa"} Oct 11 10:41:47.870021 master-2 kubenswrapper[4776]: I1011 10:41:47.869933 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b846b7bb4-7q7ph" podStartSLOduration=26.86991216 podStartE2EDuration="26.86991216s" podCreationTimestamp="2025-10-11 10:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:47.868130793 +0000 UTC m=+942.652557502" watchObservedRunningTime="2025-10-11 10:41:47.86991216 +0000 UTC m=+942.654338869" Oct 11 10:41:48.079777 master-2 kubenswrapper[4776]: I1011 10:41:48.076234 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" path="/var/lib/kubelet/pods/9c72970e-d35b-4f28-8291-e3ed3683c59c/volumes" Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860239 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"f06c4e8acf494fe24eccc1ce43df9eb229509ce5cd888092b73e7eba38862d46"} Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860350 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"82e9c03cc0b9f224b8b66ac568bd942c0785a5327a13aa88e776428fd4d3d837"} Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860376 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"9678e04476c4e6953c7acc1060e1d800851772e39435db54daec2daa1356b64e"} Oct 11 10:41:48.902694 master-2 kubenswrapper[4776]: I1011 10:41:48.902562 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=1.902541726 podStartE2EDuration="1.902541726s" podCreationTimestamp="2025-10-11 10:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:48.899184776 +0000 UTC m=+943.683611485" watchObservedRunningTime="2025-10-11 10:41:48.902541726 +0000 UTC m=+943.686968435" Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: I1011 10:41:50.906741 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 
10:41:50.906815 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:41:50.906815 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:41:50.908122 master-1 kubenswrapper[4771]: I1011 10:41:50.906823 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:41:52.251168 master-1 kubenswrapper[4771]: I1011 10:41:52.251069 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:52.251168 master-1 kubenswrapper[4771]: I1011 10:41:52.251122 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" 
podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: I1011 10:41:55.905798 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:41:55.905876 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:41:55.907545 master-1 kubenswrapper[4771]: I1011 10:41:55.907487 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" 
containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:41:55.907875 master-1 kubenswrapper[4771]: I1011 10:41:55.907843 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:41:57.078789 master-2 kubenswrapper[4776]: I1011 10:41:57.078719 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:57.078789 master-2 kubenswrapper[4776]: I1011 10:41:57.078797 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:57.081473 master-2 kubenswrapper[4776]: I1011 10:41:57.081419 4776 patch_prober.go:28] interesting pod/console-5b846b7bb4-7q7ph container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 11 10:41:57.081561 master-2 kubenswrapper[4776]: I1011 10:41:57.081482 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b846b7bb4-7q7ph" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 11 10:41:57.251651 master-1 kubenswrapper[4771]: I1011 10:41:57.251572 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:41:57.252341 master-1 kubenswrapper[4771]: I1011 10:41:57.251653 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" 
podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:41:57.519004 master-2 kubenswrapper[4776]: I1011 10:41:57.518875 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519004 master-2 kubenswrapper[4776]: I1011 10:41:57.518996 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519270 master-2 kubenswrapper[4776]: I1011 10:41:57.519024 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519270 master-2 kubenswrapper[4776]: I1011 10:41:57.519045 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.524822 master-2 kubenswrapper[4776]: I1011 10:41:57.524766 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.525914 master-2 kubenswrapper[4776]: I1011 10:41:57.525840 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.935350 master-2 kubenswrapper[4776]: I1011 10:41:57.935291 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:58.943844 master-2 kubenswrapper[4776]: I1011 10:41:58.943800 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: I1011 10:42:00.908750 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:42:00.908849 master-1 kubenswrapper[4771]: I1011 10:42:00.908847 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:01.272264 master-1 kubenswrapper[4771]: I1011 10:42:01.270571 4771 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:01.274649 master-1 kubenswrapper[4771]: I1011 10:42:01.274611 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.279955 master-1 kubenswrapper[4771]: I1011 10:42:01.279340 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:42:01.285206 master-1 kubenswrapper[4771]: I1011 10:42:01.285155 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:01.318143 master-1 kubenswrapper[4771]: I1011 10:42:01.318069 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.318633 master-1 kubenswrapper[4771]: I1011 10:42:01.318589 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.318898 master-1 kubenswrapper[4771]: I1011 10:42:01.318864 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.420961 master-1 kubenswrapper[4771]: I1011 10:42:01.420915 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.421204 master-1 kubenswrapper[4771]: I1011 10:42:01.421186 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.421329 master-1 kubenswrapper[4771]: I1011 10:42:01.421316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.421523 master-1 kubenswrapper[4771]: I1011 10:42:01.421483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.421582 master-1 kubenswrapper[4771]: I1011 10:42:01.421508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.446042 master-1 kubenswrapper[4771]: I1011 10:42:01.445976 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access\") pod \"installer-9-master-1\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:01.605348 master-1 kubenswrapper[4771]: I1011 10:42:01.605272 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:02.012551 master-1 kubenswrapper[4771]: I1011 10:42:02.012413 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:02.251641 master-1 kubenswrapper[4771]: I1011 10:42:02.251587 4771 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-g8x5z container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" start-of-body= Oct 11 10:42:02.251794 master-1 kubenswrapper[4771]: I1011 10:42:02.251653 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.129.0.53:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.53:8443: connect: connection refused" Oct 11 10:42:02.383625 master-1 kubenswrapper[4771]: I1011 10:42:02.381813 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"1da01b89-3c1e-4f11-bcc6-65a56654021f","Type":"ContainerStarted","Data":"9692635e40b2a711a263503ff5795f641f5480d42c4c64a94f91d9bd4aff98f6"} Oct 11 10:42:03.394345 master-1 kubenswrapper[4771]: I1011 10:42:03.393969 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" 
event={"ID":"1da01b89-3c1e-4f11-bcc6-65a56654021f","Type":"ContainerStarted","Data":"1a1e8546ece3b9b09f96eb38ce98e4e2f7676e9d011955a8c3b8f572088b6cdb"} Oct 11 10:42:03.422264 master-1 kubenswrapper[4771]: I1011 10:42:03.422192 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-9-master-1" podStartSLOduration=2.422175243 podStartE2EDuration="2.422175243s" podCreationTimestamp="2025-10-11 10:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:03.415331635 +0000 UTC m=+955.389558106" watchObservedRunningTime="2025-10-11 10:42:03.422175243 +0000 UTC m=+955.396401694" Oct 11 10:42:04.487369 master-0 kubenswrapper[4790]: I1011 10:42:04.487256 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Oct 11 10:42:04.488386 master-0 kubenswrapper[4790]: I1011 10:42:04.488220 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.491733 master-0 kubenswrapper[4790]: I1011 10:42:04.491616 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:42:04.492703 master-0 kubenswrapper[4790]: I1011 10:42:04.492595 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:42:04.505555 master-0 kubenswrapper[4790]: I1011 10:42:04.505492 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Oct 11 10:42:04.678490 master-0 kubenswrapper[4790]: I1011 10:42:04.678392 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.678490 master-0 kubenswrapper[4790]: I1011 10:42:04.678508 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.678917 master-0 kubenswrapper[4790]: I1011 10:42:04.678548 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.779628 master-0 
kubenswrapper[4790]: I1011 10:42:04.779408 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.779628 master-0 kubenswrapper[4790]: I1011 10:42:04.779482 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.779628 master-0 kubenswrapper[4790]: I1011 10:42:04.779540 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.779628 master-0 kubenswrapper[4790]: I1011 10:42:04.779630 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.780289 master-0 kubenswrapper[4790]: I1011 10:42:04.779973 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.804239 master-0 
kubenswrapper[4790]: I1011 10:42:04.804148 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access\") pod \"installer-6-master-0\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:04.815258 master-0 kubenswrapper[4790]: I1011 10:42:04.815189 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:05.333850 master-0 kubenswrapper[4790]: W1011 10:42:05.333754 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d2957c2_bc3c_4399_b508_37a1a7689108.slice/crio-93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356 WatchSource:0}: Error finding container 93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356: Status 404 returned error can't find the container with id 93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356 Oct 11 10:42:05.406937 master-0 kubenswrapper[4790]: I1011 10:42:05.406828 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Oct 11 10:42:05.634596 master-0 kubenswrapper[4790]: I1011 10:42:05.634439 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"3d2957c2-bc3c-4399-b508-37a1a7689108","Type":"ContainerStarted","Data":"93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356"} Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: I1011 10:42:05.905126 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: 
[+]log ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:42:05.905218 master-1 kubenswrapper[4771]: I1011 10:42:05.905216 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:06.417952 master-1 kubenswrapper[4771]: I1011 10:42:06.417871 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerID="2ccd5ea4ca8c2b32e04ef7419d2c1c1ac0971dd1b18e1a37cd16058b70e5a98c" exitCode=0 Oct 11 10:42:06.418170 master-1 kubenswrapper[4771]: I1011 10:42:06.417941 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" 
event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerDied","Data":"2ccd5ea4ca8c2b32e04ef7419d2c1c1ac0971dd1b18e1a37cd16058b70e5a98c"} Oct 11 10:42:07.084184 master-2 kubenswrapper[4776]: I1011 10:42:07.084042 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:42:07.088621 master-2 kubenswrapper[4776]: I1011 10:42:07.088584 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:42:07.119064 master-1 kubenswrapper[4771]: I1011 10:42:07.119000 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:42:07.208838 master-1 kubenswrapper[4771]: I1011 10:42:07.208728 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-8865994fd-g2fnh"] Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: E1011 10:42:07.209062 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver-check-endpoints" Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: I1011 10:42:07.209086 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver-check-endpoints" Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: E1011 10:42:07.209120 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="fix-audit-permissions" Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: I1011 10:42:07.209134 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="fix-audit-permissions" Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: E1011 10:42:07.209156 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" Oct 11 10:42:07.209327 master-1 kubenswrapper[4771]: I1011 10:42:07.209172 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.209337 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.209403 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver" Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.209619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.209660 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" containerName="openshift-apiserver-check-endpoints" Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.210331 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config" (OuterVolumeSpecName: "config") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.210514 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.210647 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.210742 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.211334 master-1 kubenswrapper[4771]: I1011 10:42:07.211140 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.211901 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212455 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212519 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212656 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212708 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212746 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 
kubenswrapper[4771]: I1011 10:42:07.212785 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212815 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78cw\" (UniqueName: \"kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw\") pod \"a2bf529d-094c-4406-8ce6-890cf8c0b840\" (UID: \"a2bf529d-094c-4406-8ce6-890cf8c0b840\") " Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212806 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.212828 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213279 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit" (OuterVolumeSpecName: "audit") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213447 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213680 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-image-import-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213698 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213711 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213721 4771 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-node-pullsecrets\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213730 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bf529d-094c-4406-8ce6-890cf8c0b840-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213741 
4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.213939 master-1 kubenswrapper[4771]: I1011 10:42:07.213750 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bf529d-094c-4406-8ce6-890cf8c0b840-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.215618 master-1 kubenswrapper[4771]: I1011 10:42:07.214402 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9" Oct 11 10:42:07.215618 master-1 kubenswrapper[4771]: I1011 10:42:07.215477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:07.217591 master-1 kubenswrapper[4771]: I1011 10:42:07.217316 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw" (OuterVolumeSpecName: "kube-api-access-b78cw") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "kube-api-access-b78cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:42:07.219938 master-1 kubenswrapper[4771]: I1011 10:42:07.218832 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:07.223738 master-1 kubenswrapper[4771]: I1011 10:42:07.223670 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-g2fnh"] Oct 11 10:42:07.226063 master-1 kubenswrapper[4771]: I1011 10:42:07.225923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "a2bf529d-094c-4406-8ce6-890cf8c0b840" (UID: "a2bf529d-094c-4406-8ce6-890cf8c0b840"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:07.315183 master-1 kubenswrapper[4771]: I1011 10:42:07.315133 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit-dir\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.315517 master-1 kubenswrapper[4771]: I1011 10:42:07.315495 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-serving-cert\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.315664 master-1 kubenswrapper[4771]: I1011 10:42:07.315642 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-trusted-ca-bundle\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.316453 master-1 
kubenswrapper[4771]: I1011 10:42:07.316434 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-serving-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.316615 master-1 kubenswrapper[4771]: I1011 10:42:07.316597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-encryption-config\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.316837 master-1 kubenswrapper[4771]: I1011 10:42:07.316816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-image-import-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.316969 master-1 kubenswrapper[4771]: I1011 10:42:07.316949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-client\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.317113 master-1 kubenswrapper[4771]: I1011 10:42:07.317093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-node-pullsecrets\") pod \"apiserver-8865994fd-g2fnh\" (UID: 
\"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.317252 master-1 kubenswrapper[4771]: I1011 10:42:07.317236 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.317390 master-1 kubenswrapper[4771]: I1011 10:42:07.317372 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzqwt\" (UniqueName: \"kubernetes.io/projected/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-kube-api-access-nzqwt\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.317520 master-1 kubenswrapper[4771]: I1011 10:42:07.317501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-config\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.317675 master-1 kubenswrapper[4771]: I1011 10:42:07.317641 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.318107 master-1 kubenswrapper[4771]: I1011 10:42:07.318093 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.318256 master-1 kubenswrapper[4771]: I1011 10:42:07.318240 4771 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b78cw\" (UniqueName: \"kubernetes.io/projected/a2bf529d-094c-4406-8ce6-890cf8c0b840-kube-api-access-b78cw\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.318399 master-1 kubenswrapper[4771]: I1011 10:42:07.318386 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bf529d-094c-4406-8ce6-890cf8c0b840-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:07.420543 master-1 kubenswrapper[4771]: I1011 10:42:07.419314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420543 master-1 kubenswrapper[4771]: I1011 10:42:07.420514 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420783 master-1 kubenswrapper[4771]: I1011 10:42:07.420538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzqwt\" (UniqueName: \"kubernetes.io/projected/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-kube-api-access-nzqwt\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420783 master-1 kubenswrapper[4771]: I1011 10:42:07.420588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-config\") pod \"apiserver-8865994fd-g2fnh\" (UID: 
\"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420783 master-1 kubenswrapper[4771]: I1011 10:42:07.420648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit-dir\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420783 master-1 kubenswrapper[4771]: I1011 10:42:07.420703 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-serving-cert\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420783 master-1 kubenswrapper[4771]: I1011 10:42:07.420751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-trusted-ca-bundle\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420994 master-1 kubenswrapper[4771]: I1011 10:42:07.420792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-serving-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420994 master-1 kubenswrapper[4771]: I1011 10:42:07.420833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-encryption-config\") pod 
\"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420994 master-1 kubenswrapper[4771]: I1011 10:42:07.420913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-client\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.420994 master-1 kubenswrapper[4771]: I1011 10:42:07.420937 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-image-import-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.421157 master-1 kubenswrapper[4771]: I1011 10:42:07.420995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-node-pullsecrets\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.421157 master-1 kubenswrapper[4771]: I1011 10:42:07.421128 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-node-pullsecrets\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.421595 master-1 kubenswrapper[4771]: I1011 10:42:07.421563 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-config\") pod 
\"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.421813 master-1 kubenswrapper[4771]: I1011 10:42:07.421777 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-serving-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.421874 master-1 kubenswrapper[4771]: I1011 10:42:07.421811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-audit-dir\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.422267 master-1 kubenswrapper[4771]: I1011 10:42:07.422238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-image-import-ca\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.423677 master-1 kubenswrapper[4771]: I1011 10:42:07.423619 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-trusted-ca-bundle\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.425001 master-1 kubenswrapper[4771]: I1011 10:42:07.424960 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-encryption-config\") pod 
\"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.427825 master-1 kubenswrapper[4771]: I1011 10:42:07.427786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-etcd-client\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.428147 master-1 kubenswrapper[4771]: I1011 10:42:07.428097 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" event={"ID":"a2bf529d-094c-4406-8ce6-890cf8c0b840","Type":"ContainerDied","Data":"e398827cf779d365dfc4e6c2443dd2f776caa9a8ba75c41d00aafc513ef28957"} Oct 11 10:42:07.428214 master-1 kubenswrapper[4771]: I1011 10:42:07.428163 4771 scope.go:117] "RemoveContainer" containerID="a0772db7a40ce6f228f65f235a6668a5f2f1781a4f227000cf9ad01206d856f2" Oct 11 10:42:07.428214 master-1 kubenswrapper[4771]: I1011 10:42:07.428199 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-serving-cert\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.428404 master-1 kubenswrapper[4771]: I1011 10:42:07.428348 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-g8x5z" Oct 11 10:42:07.438716 master-1 kubenswrapper[4771]: I1011 10:42:07.438687 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzqwt\" (UniqueName: \"kubernetes.io/projected/6c9318b1-5be5-4254-8eb6-7cf411c02eb8-kube-api-access-nzqwt\") pod \"apiserver-8865994fd-g2fnh\" (UID: \"6c9318b1-5be5-4254-8eb6-7cf411c02eb8\") " pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:07.486254 master-1 kubenswrapper[4771]: I1011 10:42:07.486216 4771 scope.go:117] "RemoveContainer" containerID="2ccd5ea4ca8c2b32e04ef7419d2c1c1ac0971dd1b18e1a37cd16058b70e5a98c" Oct 11 10:42:07.501774 master-1 kubenswrapper[4771]: I1011 10:42:07.501742 4771 scope.go:117] "RemoveContainer" containerID="5a44ec551f4491e724d147c13cc98b993a3968bac1f8f715ba1d91a8129c8004" Oct 11 10:42:07.505209 master-1 kubenswrapper[4771]: I1011 10:42:07.505183 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"] Oct 11 10:42:07.513677 master-1 kubenswrapper[4771]: I1011 10:42:07.513640 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-g8x5z"] Oct 11 10:42:07.550214 master-1 kubenswrapper[4771]: I1011 10:42:07.550171 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" Oct 11 10:42:08.016515 master-1 kubenswrapper[4771]: I1011 10:42:08.015912 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-g2fnh"] Oct 11 10:42:08.025995 master-1 kubenswrapper[4771]: W1011 10:42:08.025945 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c9318b1_5be5_4254_8eb6_7cf411c02eb8.slice/crio-ac98991a19fe40f0e85bff03a611c5b88f0d91835e7f5d79eaa631518389ccd6 WatchSource:0}: Error finding container ac98991a19fe40f0e85bff03a611c5b88f0d91835e7f5d79eaa631518389ccd6: Status 404 returned error can't find the container with id ac98991a19fe40f0e85bff03a611c5b88f0d91835e7f5d79eaa631518389ccd6 Oct 11 10:42:08.320577 master-0 kubenswrapper[4790]: I1011 10:42:08.320324 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Oct 11 10:42:08.321306 master-0 kubenswrapper[4790]: I1011 10:42:08.321133 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.324297 master-0 kubenswrapper[4790]: I1011 10:42:08.324120 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:42:08.324482 master-0 kubenswrapper[4790]: I1011 10:42:08.324454 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:42:08.332140 master-0 kubenswrapper[4790]: I1011 10:42:08.332087 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Oct 11 10:42:08.437642 master-1 kubenswrapper[4771]: I1011 10:42:08.437529 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c9318b1-5be5-4254-8eb6-7cf411c02eb8" containerID="f8c9b4d02496959e8fb124becb7d2c60f0137dcae9fa68835aac05f51b1df52b" exitCode=0 Oct 11 10:42:08.441498 master-0 kubenswrapper[4790]: I1011 10:42:08.441463 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.441694 master-0 kubenswrapper[4790]: I1011 10:42:08.441565 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.441788 master-0 kubenswrapper[4790]: I1011 10:42:08.441743 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.453452 master-1 kubenswrapper[4771]: I1011 10:42:08.453387 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2bf529d-094c-4406-8ce6-890cf8c0b840" path="/var/lib/kubelet/pods/a2bf529d-094c-4406-8ce6-890cf8c0b840/volumes" Oct 11 10:42:08.454721 master-1 kubenswrapper[4771]: I1011 10:42:08.454655 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" event={"ID":"6c9318b1-5be5-4254-8eb6-7cf411c02eb8","Type":"ContainerDied","Data":"f8c9b4d02496959e8fb124becb7d2c60f0137dcae9fa68835aac05f51b1df52b"} Oct 11 10:42:08.454721 master-1 kubenswrapper[4771]: I1011 10:42:08.454714 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" event={"ID":"6c9318b1-5be5-4254-8eb6-7cf411c02eb8","Type":"ContainerStarted","Data":"ac98991a19fe40f0e85bff03a611c5b88f0d91835e7f5d79eaa631518389ccd6"} Oct 11 10:42:08.543295 master-0 kubenswrapper[4790]: I1011 10:42:08.543080 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.543295 master-0 kubenswrapper[4790]: I1011 10:42:08.543269 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.543846 master-0 kubenswrapper[4790]: I1011 10:42:08.543314 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.543846 master-0 kubenswrapper[4790]: I1011 10:42:08.543349 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.543846 master-0 kubenswrapper[4790]: I1011 10:42:08.543266 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.567800 master-0 kubenswrapper[4790]: I1011 10:42:08.566675 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.661186 master-0 kubenswrapper[4790]: I1011 10:42:08.660973 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"3d2957c2-bc3c-4399-b508-37a1a7689108","Type":"ContainerStarted","Data":"2d6f8b55cb9d1ca99486dd5352b34512a263c62c8f4e3533fc79b7394d55c8e0"} Oct 11 10:42:08.691333 master-0 kubenswrapper[4790]: I1011 10:42:08.691196 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:08.691570 master-0 kubenswrapper[4790]: I1011 10:42:08.691450 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-0" podStartSLOduration=1.9387448539999999 podStartE2EDuration="4.691422536s" podCreationTimestamp="2025-10-11 10:42:04 +0000 UTC" firstStartedPulling="2025-10-11 10:42:05.337185532 +0000 UTC m=+201.891645854" lastFinishedPulling="2025-10-11 10:42:08.089863244 +0000 UTC m=+204.644323536" observedRunningTime="2025-10-11 10:42:08.688511588 +0000 UTC m=+205.242971910" watchObservedRunningTime="2025-10-11 10:42:08.691422536 +0000 UTC m=+205.245882838" Oct 11 10:42:09.289080 master-0 kubenswrapper[4790]: W1011 10:42:09.288983 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb5b44d0e_0afa_47db_a215_114b99006a12.slice/crio-b2f3fc67df160d3adfc50304aaf54cdfff608931c2d0a54d0ec7908bd3846999 WatchSource:0}: Error finding container b2f3fc67df160d3adfc50304aaf54cdfff608931c2d0a54d0ec7908bd3846999: Status 404 returned error can't find the container with id b2f3fc67df160d3adfc50304aaf54cdfff608931c2d0a54d0ec7908bd3846999 Oct 11 10:42:09.308174 master-1 kubenswrapper[4771]: I1011 10:42:09.308063 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:09.308484 master-1 kubenswrapper[4771]: I1011 10:42:09.308328 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-9-master-1" podUID="1da01b89-3c1e-4f11-bcc6-65a56654021f" containerName="installer" containerID="cri-o://1a1e8546ece3b9b09f96eb38ce98e4e2f7676e9d011955a8c3b8f572088b6cdb" gracePeriod=30 Oct 11 10:42:09.309792 master-0 kubenswrapper[4790]: I1011 10:42:09.309681 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Oct 11 10:42:09.451000 
master-1 kubenswrapper[4771]: I1011 10:42:09.450939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" event={"ID":"6c9318b1-5be5-4254-8eb6-7cf411c02eb8","Type":"ContainerStarted","Data":"aa5cf014d215a1dcd8c5faa8044bde8a201a3720fd928755ca335ca36d17fdf2"} Oct 11 10:42:09.451000 master-1 kubenswrapper[4771]: I1011 10:42:09.450995 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" event={"ID":"6c9318b1-5be5-4254-8eb6-7cf411c02eb8","Type":"ContainerStarted","Data":"3d85f294797215e27e00a1deb2e17dd5dd10acdf5ffc31590c560bc071b3ebc4"} Oct 11 10:42:09.485751 master-1 kubenswrapper[4771]: I1011 10:42:09.485663 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-8865994fd-g2fnh" podStartSLOduration=21.485644522 podStartE2EDuration="21.485644522s" podCreationTimestamp="2025-10-11 10:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:09.47831047 +0000 UTC m=+961.452536911" watchObservedRunningTime="2025-10-11 10:42:09.485644522 +0000 UTC m=+961.459870963" Oct 11 10:42:09.668144 master-0 kubenswrapper[4790]: I1011 10:42:09.668015 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5b44d0e-0afa-47db-a215-114b99006a12","Type":"ContainerStarted","Data":"4be921d32cc962b169c97845895b27ff57f7dd21aaaefe6e16cf013e922ab5ec"} Oct 11 10:42:09.668144 master-0 kubenswrapper[4790]: I1011 10:42:09.668086 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5b44d0e-0afa-47db-a215-114b99006a12","Type":"ContainerStarted","Data":"b2f3fc67df160d3adfc50304aaf54cdfff608931c2d0a54d0ec7908bd3846999"} Oct 11 10:42:09.719554 master-0 kubenswrapper[4790]: I1011 10:42:09.719457 4790 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.7194096509999999 podStartE2EDuration="1.719409651s" podCreationTimestamp="2025-10-11 10:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:09.716325778 +0000 UTC m=+206.270786070" watchObservedRunningTime="2025-10-11 10:42:09.719409651 +0000 UTC m=+206.273869963"
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: I1011 10:42:10.906069 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:42:10.906200 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:42:10.907509 master-1 kubenswrapper[4771]: I1011 10:42:10.906911 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:42:11.434990 master-0 kubenswrapper[4790]: I1011 10:42:11.434868 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76f8bc4746-9rjdm" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console" containerID="cri-o://199044355c2ca69bb6ac6dacb80814ae2bdf5a58183730d9a6202cd23bded675" gracePeriod=15
Oct 11 10:42:11.680095 master-0 kubenswrapper[4790]: I1011 10:42:11.679999 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-9rjdm_ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db/console/0.log"
Oct 11 10:42:11.680346 master-0 kubenswrapper[4790]: I1011 10:42:11.680099 4790 generic.go:334] "Generic (PLEG): container finished" podID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerID="199044355c2ca69bb6ac6dacb80814ae2bdf5a58183730d9a6202cd23bded675" exitCode=2
Oct 11 10:42:11.680346 master-0 kubenswrapper[4790]: I1011 10:42:11.680155 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-9rjdm" event={"ID":"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db","Type":"ContainerDied","Data":"199044355c2ca69bb6ac6dacb80814ae2bdf5a58183730d9a6202cd23bded675"}
Oct 11 10:42:11.908871 master-0 kubenswrapper[4790]: I1011 10:42:11.908787 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-9rjdm_ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db/console/0.log"
Oct 11 10:42:11.908989 master-0 kubenswrapper[4790]: I1011 10:42:11.908921 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-9rjdm"
Oct 11 10:42:12.087364 master-0 kubenswrapper[4790]: I1011 10:42:12.087223 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.087364 master-0 kubenswrapper[4790]: I1011 10:42:12.087316 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.087364 master-0 kubenswrapper[4790]: I1011 10:42:12.087354 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ljnp\" (UniqueName: \"kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.087364 master-0 kubenswrapper[4790]: I1011 10:42:12.087386 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.088048 master-0 kubenswrapper[4790]: I1011 10:42:12.087409 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.088048 master-0 kubenswrapper[4790]: I1011 10:42:12.087450 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.088048 master-0 kubenswrapper[4790]: I1011 10:42:12.087523 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config\") pod \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\" (UID: \"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db\") "
Oct 11 10:42:12.088048 master-0 kubenswrapper[4790]: I1011 10:42:12.087960 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca" (OuterVolumeSpecName: "service-ca") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:42:12.088786 master-0 kubenswrapper[4790]: I1011 10:42:12.088680 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config" (OuterVolumeSpecName: "console-config") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:42:12.088928 master-0 kubenswrapper[4790]: I1011 10:42:12.088770 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:42:12.089013 master-0 kubenswrapper[4790]: I1011 10:42:12.088903 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:42:12.092178 master-0 kubenswrapper[4790]: I1011 10:42:12.092102 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:42:12.094291 master-0 kubenswrapper[4790]: I1011 10:42:12.094230 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp" (OuterVolumeSpecName: "kube-api-access-7ljnp") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "kube-api-access-7ljnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:42:12.094291 master-0 kubenswrapper[4790]: I1011 10:42:12.094219 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" (UID: "ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:42:12.188818 master-0 kubenswrapper[4790]: I1011 10:42:12.188620 4790 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.188818 master-0 kubenswrapper[4790]: I1011 10:42:12.188756 4790 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-service-ca\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.188818 master-0 kubenswrapper[4790]: I1011 10:42:12.188787 4790 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.188818 master-0 kubenswrapper[4790]: I1011 10:42:12.188820 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ljnp\" (UniqueName: \"kubernetes.io/projected/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-kube-api-access-7ljnp\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.189200 master-0 kubenswrapper[4790]: I1011 10:42:12.188846 4790 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.189200 master-0 kubenswrapper[4790]: I1011 10:42:12.188866 4790 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.189200 master-0 kubenswrapper[4790]: I1011 10:42:12.188884 4790 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Oct 11 10:42:12.467546 master-1 kubenswrapper[4771]: I1011 10:42:12.467481 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-10-master-1"]
Oct 11 10:42:12.471075 master-1 kubenswrapper[4771]: I1011 10:42:12.471005 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.486460 master-1 kubenswrapper[4771]: I1011 10:42:12.485915 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-1"]
Oct 11 10:42:12.550989 master-1 kubenswrapper[4771]: I1011 10:42:12.550886 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-8865994fd-g2fnh"
Oct 11 10:42:12.550989 master-1 kubenswrapper[4771]: I1011 10:42:12.550984 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8865994fd-g2fnh"
Oct 11 10:42:12.561062 master-1 kubenswrapper[4771]: I1011 10:42:12.560981 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8865994fd-g2fnh"
Oct 11 10:42:12.603723 master-1 kubenswrapper[4771]: I1011 10:42:12.603671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.604078 master-1 kubenswrapper[4771]: I1011 10:42:12.604050 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.604256 master-1 kubenswrapper[4771]: I1011 10:42:12.604235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.689277 master-0 kubenswrapper[4790]: I1011 10:42:12.689197 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-9rjdm_ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db/console/0.log"
Oct 11 10:42:12.689277 master-0 kubenswrapper[4790]: I1011 10:42:12.689266 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-9rjdm" event={"ID":"ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db","Type":"ContainerDied","Data":"f61d3e1404aecae93207fe8056d63854d603a520f23437c58f022239f1436a77"}
Oct 11 10:42:12.690307 master-0 kubenswrapper[4790]: I1011 10:42:12.689338 4790 scope.go:117] "RemoveContainer" containerID="199044355c2ca69bb6ac6dacb80814ae2bdf5a58183730d9a6202cd23bded675"
Oct 11 10:42:12.690307 master-0 kubenswrapper[4790]: I1011 10:42:12.689439 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-9rjdm"
Oct 11 10:42:12.706453 master-1 kubenswrapper[4771]: I1011 10:42:12.706331 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.706718 master-1 kubenswrapper[4771]: I1011 10:42:12.706485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.706718 master-1 kubenswrapper[4771]: I1011 10:42:12.706670 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.706941 master-1 kubenswrapper[4771]: I1011 10:42:12.706886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.707076 master-1 kubenswrapper[4771]: I1011 10:42:12.707024 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.722795 master-0 kubenswrapper[4790]: I1011 10:42:12.722686 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"]
Oct 11 10:42:12.737228 master-0 kubenswrapper[4790]: I1011 10:42:12.737136 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76f8bc4746-9rjdm"]
Oct 11 10:42:12.740090 master-1 kubenswrapper[4771]: I1011 10:42:12.739964 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access\") pod \"installer-10-master-1\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:12.785832 master-1 kubenswrapper[4771]: I1011 10:42:12.785754 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-1"
Oct 11 10:42:13.322680 master-1 kubenswrapper[4771]: I1011 10:42:13.322580 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-1"]
Oct 11 10:42:13.483567 master-1 kubenswrapper[4771]: I1011 10:42:13.483492 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"8d7775e5-5c08-4eef-84bf-8995a11eb190","Type":"ContainerStarted","Data":"16353a00dd4456281d0e795316b26f0bbce37de72b33a0538c00a1e5b2391471"}
Oct 11 10:42:13.490194 master-1 kubenswrapper[4771]: I1011 10:42:13.490155 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-8865994fd-g2fnh"
Oct 11 10:42:13.633603 master-0 kubenswrapper[4790]: I1011 10:42:13.633501 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"]
Oct 11 10:42:13.634210 master-0 kubenswrapper[4790]: I1011 10:42:13.634133 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" containerID="cri-o://29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a" gracePeriod=120
Oct 11 10:42:13.634306 master-0 kubenswrapper[4790]: I1011 10:42:13.634186 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef" gracePeriod=120
Oct 11 10:42:14.301081 master-0 kubenswrapper[4790]: I1011 10:42:14.301010 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" path="/var/lib/kubelet/pods/ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db/volumes"
Oct 11 10:42:14.335423 master-0 kubenswrapper[4790]: I1011 10:42:14.335350 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Oct 11 10:42:14.336046 master-0 kubenswrapper[4790]: I1011 10:42:14.335928 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="b5b44d0e-0afa-47db-a215-114b99006a12" containerName="installer" containerID="cri-o://4be921d32cc962b169c97845895b27ff57f7dd21aaaefe6e16cf013e922ab5ec" gracePeriod=30
Oct 11 10:42:14.490164 master-1 kubenswrapper[4771]: I1011 10:42:14.490052 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"8d7775e5-5c08-4eef-84bf-8995a11eb190","Type":"ContainerStarted","Data":"ea6046ea85f7a0fce021fb5f4d0cfe1454a1393bcf7a0d41b1a58c6b303f5dca"}
Oct 11 10:42:14.520912 master-1 kubenswrapper[4771]: I1011 10:42:14.520810 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-10-master-1" podStartSLOduration=2.520788535 podStartE2EDuration="2.520788535s" podCreationTimestamp="2025-10-11 10:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:14.515893473 +0000 UTC m=+966.490119934" watchObservedRunningTime="2025-10-11 10:42:14.520788535 +0000 UTC m=+966.495014996"
Oct 11 10:42:14.708622 master-0 kubenswrapper[4790]: I1011 10:42:14.708445 4790 generic.go:334] "Generic (PLEG): container finished" podID="099ca022-6e9c-4604-b517-d90713dd6a44" containerID="dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef" exitCode=0
Oct 11 10:42:14.708622 master-0 kubenswrapper[4790]: I1011 10:42:14.708538 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerDied","Data":"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"}
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: I1011 10:42:15.075973 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:42:15.076101 master-0 kubenswrapper[4790]: I1011 10:42:15.076092 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: I1011 10:42:15.908892 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:42:15.909001 master-1 kubenswrapper[4771]: I1011 10:42:15.908984 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:42:19.718766 master-0 kubenswrapper[4790]: I1011 10:42:19.718636 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Oct 11 10:42:19.719583 master-0 kubenswrapper[4790]: E1011 10:42:19.718981 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console"
Oct 11 10:42:19.719583 master-0 kubenswrapper[4790]: I1011 10:42:19.719012 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console"
Oct 11 10:42:19.719583 master-0 kubenswrapper[4790]: I1011 10:42:19.719172 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7a5bf0-d3df-49f7-bd97-a7b9425fe9db" containerName="console"
Oct 11 10:42:19.720280 master-0 kubenswrapper[4790]: I1011 10:42:19.720052 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.735804 master-0 kubenswrapper[4790]: I1011 10:42:19.735729 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Oct 11 10:42:19.796145 master-0 kubenswrapper[4790]: I1011 10:42:19.796001 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.796145 master-0 kubenswrapper[4790]: I1011 10:42:19.796150 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.796438 master-0 kubenswrapper[4790]: I1011 10:42:19.796191 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.897480 master-0 kubenswrapper[4790]: I1011 10:42:19.897371 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.897845 master-0 kubenswrapper[4790]: I1011 10:42:19.897519 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.897845 master-0 kubenswrapper[4790]: I1011 10:42:19.897592 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.897845 master-0 kubenswrapper[4790]: I1011 10:42:19.897752 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.898018 master-0 kubenswrapper[4790]: I1011 10:42:19.897893 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:19.924507 master-0 kubenswrapper[4790]: I1011 10:42:19.924212 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:20.045233 master-0 kubenswrapper[4790]: I1011 10:42:20.045018 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: I1011 10:42:20.068397 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:42:20.068607 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:42:20.069287 master-0 kubenswrapper[4790]: I1011 10:42:20.068650 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:42:20.282966 master-0 kubenswrapper[4790]: I1011 10:42:20.282875 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Oct 11 10:42:20.752920 master-0 kubenswrapper[4790]: I1011 10:42:20.752849 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e7063ccc-c150-41d0-9285-8a8ca00aa417","Type":"ContainerStarted","Data":"194353fb7acfeb121812b2d62c7722c179dced595ba3e814ace7d8070862578b"}
Oct 11 10:42:20.754041 master-0 kubenswrapper[4790]: I1011 10:42:20.754003 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e7063ccc-c150-41d0-9285-8a8ca00aa417","Type":"ContainerStarted","Data":"df3c31d752a92f830ac660f11dc711746fedff638b520cb70b6e043fe897e4d1"}
Oct 11 10:42:20.781106 master-0 kubenswrapper[4790]: I1011 10:42:20.780983 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=1.780951299 podStartE2EDuration="1.780951299s" podCreationTimestamp="2025-10-11 10:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:20.776851076 +0000 UTC m=+217.331311428" watchObservedRunningTime="2025-10-11 10:42:20.780951299 +0000 UTC m=+217.335411631"
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: I1011 10:42:20.908614 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]etcd excluded: ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:42:20.908746 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:42:20.910859 master-1 kubenswrapper[4771]: I1011 10:42:20.908746 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: I1011 10:42:25.069874 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: 
[+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:25.070019 master-0 kubenswrapper[4790]: I1011 10:42:25.070013 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:25.072142 master-0 kubenswrapper[4790]: I1011 10:42:25.070204 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: I1011 10:42:25.906775 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 
10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:42:25.906869 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:42:25.908519 master-1 kubenswrapper[4771]: I1011 10:42:25.906874 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: I1011 10:42:30.066368 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:30.066477 master-0 
kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:30.066477 master-0 kubenswrapper[4790]: I1011 10:42:30.066476 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: I1011 10:42:30.907974 4771 patch_prober.go:28] interesting pod/apiserver-656768b4df-g4p26 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]etcd excluded: ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]etcd-readiness excluded: ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:30.908060 master-1 
kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:42:30.908060 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:42:30.910171 master-1 kubenswrapper[4771]: I1011 10:42:30.908150 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:33.639282 master-1 kubenswrapper[4771]: I1011 10:42:33.639205 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-9-master-1_1da01b89-3c1e-4f11-bcc6-65a56654021f/installer/0.log" Oct 11 10:42:33.640050 master-1 kubenswrapper[4771]: I1011 10:42:33.639438 4771 generic.go:334] "Generic (PLEG): container finished" podID="1da01b89-3c1e-4f11-bcc6-65a56654021f" containerID="1a1e8546ece3b9b09f96eb38ce98e4e2f7676e9d011955a8c3b8f572088b6cdb" exitCode=1 Oct 11 10:42:33.640050 master-1 kubenswrapper[4771]: I1011 10:42:33.639520 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"1da01b89-3c1e-4f11-bcc6-65a56654021f","Type":"ContainerDied","Data":"1a1e8546ece3b9b09f96eb38ce98e4e2f7676e9d011955a8c3b8f572088b6cdb"} Oct 11 10:42:34.048890 master-1 kubenswrapper[4771]: I1011 10:42:34.047900 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_installer-9-master-1_1da01b89-3c1e-4f11-bcc6-65a56654021f/installer/0.log" Oct 11 10:42:34.048890 master-1 kubenswrapper[4771]: I1011 10:42:34.048081 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:34.075542 master-1 kubenswrapper[4771]: I1011 10:42:34.075482 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access\") pod \"1da01b89-3c1e-4f11-bcc6-65a56654021f\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " Oct 11 10:42:34.075854 master-1 kubenswrapper[4771]: I1011 10:42:34.075821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir\") pod \"1da01b89-3c1e-4f11-bcc6-65a56654021f\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " Oct 11 10:42:34.075942 master-1 kubenswrapper[4771]: I1011 10:42:34.075872 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock\") pod \"1da01b89-3c1e-4f11-bcc6-65a56654021f\" (UID: \"1da01b89-3c1e-4f11-bcc6-65a56654021f\") " Oct 11 10:42:34.076129 master-1 kubenswrapper[4771]: I1011 10:42:34.076066 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1da01b89-3c1e-4f11-bcc6-65a56654021f" (UID: "1da01b89-3c1e-4f11-bcc6-65a56654021f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:34.076380 master-1 kubenswrapper[4771]: I1011 10:42:34.076249 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock" (OuterVolumeSpecName: "var-lock") pod "1da01b89-3c1e-4f11-bcc6-65a56654021f" (UID: "1da01b89-3c1e-4f11-bcc6-65a56654021f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:34.077797 master-1 kubenswrapper[4771]: I1011 10:42:34.077150 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.077797 master-1 kubenswrapper[4771]: I1011 10:42:34.077255 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da01b89-3c1e-4f11-bcc6-65a56654021f-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.088327 master-1 kubenswrapper[4771]: I1011 10:42:34.088221 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1da01b89-3c1e-4f11-bcc6-65a56654021f" (UID: "1da01b89-3c1e-4f11-bcc6-65a56654021f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:42:34.158001 master-1 kubenswrapper[4771]: I1011 10:42:34.157348 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:42:34.178948 master-1 kubenswrapper[4771]: I1011 10:42:34.178872 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da01b89-3c1e-4f11-bcc6-65a56654021f-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.280065 master-1 kubenswrapper[4771]: I1011 10:42:34.279970 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280086 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280151 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjn45\" (UniqueName: \"kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280201 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280278 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280319 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.280415 master-1 kubenswrapper[4771]: I1011 10:42:34.280379 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies\") pod \"4c0cf305-ba21-45c0-a092-05214809da68\" (UID: \"4c0cf305-ba21-45c0-a092-05214809da68\") " Oct 11 10:42:34.281773 master-1 kubenswrapper[4771]: I1011 10:42:34.280534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:34.282320 master-1 kubenswrapper[4771]: I1011 10:42:34.282241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:34.282320 master-1 kubenswrapper[4771]: I1011 10:42:34.282268 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:34.283732 master-1 kubenswrapper[4771]: I1011 10:42:34.283668 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.283732 master-1 kubenswrapper[4771]: I1011 10:42:34.283721 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c0cf305-ba21-45c0-a092-05214809da68-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.283953 master-1 kubenswrapper[4771]: I1011 10:42:34.283749 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-audit-policies\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.285708 master-1 kubenswrapper[4771]: I1011 10:42:34.285613 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:34.286174 master-1 kubenswrapper[4771]: I1011 10:42:34.286109 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:34.286819 master-1 kubenswrapper[4771]: I1011 10:42:34.286757 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:42:34.288166 master-1 kubenswrapper[4771]: I1011 10:42:34.288093 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:42:34.293507 master-1 kubenswrapper[4771]: I1011 10:42:34.290578 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45" (OuterVolumeSpecName: "kube-api-access-rjn45") pod "4c0cf305-ba21-45c0-a092-05214809da68" (UID: "4c0cf305-ba21-45c0-a092-05214809da68"). InnerVolumeSpecName "kube-api-access-rjn45". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:42:34.385512 master-1 kubenswrapper[4771]: I1011 10:42:34.385419 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.385512 master-1 kubenswrapper[4771]: I1011 10:42:34.385488 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjn45\" (UniqueName: \"kubernetes.io/projected/4c0cf305-ba21-45c0-a092-05214809da68-kube-api-access-rjn45\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.385512 master-1 kubenswrapper[4771]: I1011 10:42:34.385510 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.385512 master-1 kubenswrapper[4771]: I1011 10:42:34.385530 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c0cf305-ba21-45c0-a092-05214809da68-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.385998 master-1 kubenswrapper[4771]: I1011 10:42:34.385549 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c0cf305-ba21-45c0-a092-05214809da68-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 11 10:42:34.654443 master-1 
kubenswrapper[4771]: I1011 10:42:34.654340 4771 generic.go:334] "Generic (PLEG): container finished" podID="4c0cf305-ba21-45c0-a092-05214809da68" containerID="0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948" exitCode=0 Oct 11 10:42:34.655222 master-1 kubenswrapper[4771]: I1011 10:42:34.654427 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" event={"ID":"4c0cf305-ba21-45c0-a092-05214809da68","Type":"ContainerDied","Data":"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948"} Oct 11 10:42:34.655222 master-1 kubenswrapper[4771]: I1011 10:42:34.654532 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" Oct 11 10:42:34.655222 master-1 kubenswrapper[4771]: I1011 10:42:34.654559 4771 scope.go:117] "RemoveContainer" containerID="0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948" Oct 11 10:42:34.655222 master-1 kubenswrapper[4771]: I1011 10:42:34.654538 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-g4p26" event={"ID":"4c0cf305-ba21-45c0-a092-05214809da68","Type":"ContainerDied","Data":"9a8773a82720172e1c708c6b8b379786c06f2193ace376125888c909cd115b04"} Oct 11 10:42:34.661163 master-1 kubenswrapper[4771]: I1011 10:42:34.661100 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-9-master-1_1da01b89-3c1e-4f11-bcc6-65a56654021f/installer/0.log" Oct 11 10:42:34.661299 master-1 kubenswrapper[4771]: I1011 10:42:34.661189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"1da01b89-3c1e-4f11-bcc6-65a56654021f","Type":"ContainerDied","Data":"9692635e40b2a711a263503ff5795f641f5480d42c4c64a94f91d9bd4aff98f6"} Oct 11 10:42:34.661724 master-1 kubenswrapper[4771]: I1011 10:42:34.661593 4771 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 11 10:42:34.681728 master-1 kubenswrapper[4771]: I1011 10:42:34.681658 4771 scope.go:117] "RemoveContainer" containerID="0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22" Oct 11 10:42:34.695119 master-1 kubenswrapper[4771]: I1011 10:42:34.695066 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"] Oct 11 10:42:34.711075 master-1 kubenswrapper[4771]: I1011 10:42:34.711005 4771 scope.go:117] "RemoveContainer" containerID="0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948" Oct 11 10:42:34.711851 master-1 kubenswrapper[4771]: E1011 10:42:34.711655 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948\": container with ID starting with 0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948 not found: ID does not exist" containerID="0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948" Oct 11 10:42:34.711851 master-1 kubenswrapper[4771]: I1011 10:42:34.711710 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948"} err="failed to get container status \"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948\": rpc error: code = NotFound desc = could not find container \"0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948\": container with ID starting with 0d35eb346e00db345ba38192641eb8af77030ecba70c39b9071f558249b90948 not found: ID does not exist" Oct 11 10:42:34.711851 master-1 kubenswrapper[4771]: I1011 10:42:34.711742 4771 scope.go:117] "RemoveContainer" containerID="0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22" Oct 11 10:42:34.712899 master-1 kubenswrapper[4771]: E1011 10:42:34.712536 4771 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22\": container with ID starting with 0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22 not found: ID does not exist" containerID="0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22" Oct 11 10:42:34.712899 master-1 kubenswrapper[4771]: I1011 10:42:34.712617 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22"} err="failed to get container status \"0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22\": rpc error: code = NotFound desc = could not find container \"0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22\": container with ID starting with 0e1c25b025232015baea7f43e82a2fb07a8814a3aa56b9f5440885f746e96d22 not found: ID does not exist" Oct 11 10:42:34.712899 master-1 kubenswrapper[4771]: I1011 10:42:34.712671 4771 scope.go:117] "RemoveContainer" containerID="1a1e8546ece3b9b09f96eb38ce98e4e2f7676e9d011955a8c3b8f572088b6cdb" Oct 11 10:42:34.719248 master-1 kubenswrapper[4771]: I1011 10:42:34.719167 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-g4p26"] Oct 11 10:42:34.732407 master-1 kubenswrapper[4771]: I1011 10:42:34.732329 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:34.738179 master-1 kubenswrapper[4771]: I1011 10:42:34.737169 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: I1011 10:42:35.071268 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:35.071412 master-0 kubenswrapper[4790]: I1011 10:42:35.071397 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" 
podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:36.449812 master-1 kubenswrapper[4771]: I1011 10:42:36.449698 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da01b89-3c1e-4f11-bcc6-65a56654021f" path="/var/lib/kubelet/pods/1da01b89-3c1e-4f11-bcc6-65a56654021f/volumes" Oct 11 10:42:36.451042 master-1 kubenswrapper[4771]: I1011 10:42:36.450978 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0cf305-ba21-45c0-a092-05214809da68" path="/var/lib/kubelet/pods/4c0cf305-ba21-45c0-a092-05214809da68/volumes" Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: I1011 10:42:40.064812 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:40.064989 master-0 kubenswrapper[4790]: I1011 10:42:40.064920 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:40.870983 master-0 kubenswrapper[4790]: I1011 10:42:40.870909 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_b5b44d0e-0afa-47db-a215-114b99006a12/installer/0.log" Oct 11 10:42:40.870983 master-0 kubenswrapper[4790]: I1011 10:42:40.870976 4790 generic.go:334] "Generic (PLEG): container finished" podID="b5b44d0e-0afa-47db-a215-114b99006a12" containerID="4be921d32cc962b169c97845895b27ff57f7dd21aaaefe6e16cf013e922ab5ec" exitCode=1 Oct 11 10:42:40.871309 master-0 kubenswrapper[4790]: I1011 10:42:40.871016 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5b44d0e-0afa-47db-a215-114b99006a12","Type":"ContainerDied","Data":"4be921d32cc962b169c97845895b27ff57f7dd21aaaefe6e16cf013e922ab5ec"} Oct 11 10:42:40.932361 master-0 
kubenswrapper[4790]: I1011 10:42:40.932280 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_b5b44d0e-0afa-47db-a215-114b99006a12/installer/0.log" Oct 11 10:42:40.932361 master-0 kubenswrapper[4790]: I1011 10:42:40.932368 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:41.075200 master-0 kubenswrapper[4790]: I1011 10:42:41.075030 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir\") pod \"b5b44d0e-0afa-47db-a215-114b99006a12\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " Oct 11 10:42:41.075200 master-0 kubenswrapper[4790]: I1011 10:42:41.075188 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access\") pod \"b5b44d0e-0afa-47db-a215-114b99006a12\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " Oct 11 10:42:41.076196 master-0 kubenswrapper[4790]: I1011 10:42:41.075267 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5b44d0e-0afa-47db-a215-114b99006a12" (UID: "b5b44d0e-0afa-47db-a215-114b99006a12"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:41.076196 master-0 kubenswrapper[4790]: I1011 10:42:41.075312 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock\") pod \"b5b44d0e-0afa-47db-a215-114b99006a12\" (UID: \"b5b44d0e-0afa-47db-a215-114b99006a12\") " Oct 11 10:42:41.076196 master-0 kubenswrapper[4790]: I1011 10:42:41.075500 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock" (OuterVolumeSpecName: "var-lock") pod "b5b44d0e-0afa-47db-a215-114b99006a12" (UID: "b5b44d0e-0afa-47db-a215-114b99006a12"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:41.076196 master-0 kubenswrapper[4790]: I1011 10:42:41.075770 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:41.076196 master-0 kubenswrapper[4790]: I1011 10:42:41.075813 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5b44d0e-0afa-47db-a215-114b99006a12-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:41.078039 master-0 kubenswrapper[4790]: I1011 10:42:41.077966 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5b44d0e-0afa-47db-a215-114b99006a12" (UID: "b5b44d0e-0afa-47db-a215-114b99006a12"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:42:41.176681 master-0 kubenswrapper[4790]: I1011 10:42:41.176523 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b44d0e-0afa-47db-a215-114b99006a12-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:41.573518 master-0 kubenswrapper[4790]: I1011 10:42:41.573410 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Oct 11 10:42:41.573959 master-0 kubenswrapper[4790]: E1011 10:42:41.573606 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b44d0e-0afa-47db-a215-114b99006a12" containerName="installer" Oct 11 10:42:41.573959 master-0 kubenswrapper[4790]: I1011 10:42:41.573622 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b44d0e-0afa-47db-a215-114b99006a12" containerName="installer" Oct 11 10:42:41.573959 master-0 kubenswrapper[4790]: I1011 10:42:41.573691 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b44d0e-0afa-47db-a215-114b99006a12" containerName="installer" Oct 11 10:42:41.574508 master-0 kubenswrapper[4790]: I1011 10:42:41.574469 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.608072 master-0 kubenswrapper[4790]: I1011 10:42:41.608010 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Oct 11 10:42:41.682886 master-0 kubenswrapper[4790]: I1011 10:42:41.682803 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.683154 master-0 kubenswrapper[4790]: I1011 10:42:41.682901 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.783903 master-0 kubenswrapper[4790]: I1011 10:42:41.783843 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.783994 master-0 kubenswrapper[4790]: I1011 10:42:41.783948 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 
10:42:41.784030 master-0 kubenswrapper[4790]: I1011 10:42:41.783969 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.784106 master-0 kubenswrapper[4790]: I1011 10:42:41.784005 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/578baefcb284ee2ba6604eeb80ded917-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"578baefcb284ee2ba6604eeb80ded917\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.879253 master-0 kubenswrapper[4790]: I1011 10:42:41.879067 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_b5b44d0e-0afa-47db-a215-114b99006a12/installer/0.log" Oct 11 10:42:41.879253 master-0 kubenswrapper[4790]: I1011 10:42:41.879246 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Oct 11 10:42:41.879550 master-0 kubenswrapper[4790]: I1011 10:42:41.879253 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5b44d0e-0afa-47db-a215-114b99006a12","Type":"ContainerDied","Data":"b2f3fc67df160d3adfc50304aaf54cdfff608931c2d0a54d0ec7908bd3846999"} Oct 11 10:42:41.879550 master-0 kubenswrapper[4790]: I1011 10:42:41.879386 4790 scope.go:117] "RemoveContainer" containerID="4be921d32cc962b169c97845895b27ff57f7dd21aaaefe6e16cf013e922ab5ec" Oct 11 10:42:41.881336 master-0 kubenswrapper[4790]: I1011 10:42:41.881302 4790 generic.go:334] "Generic (PLEG): container finished" podID="3d2957c2-bc3c-4399-b508-37a1a7689108" containerID="2d6f8b55cb9d1ca99486dd5352b34512a263c62c8f4e3533fc79b7394d55c8e0" exitCode=0 Oct 11 10:42:41.881414 master-0 kubenswrapper[4790]: I1011 10:42:41.881346 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"3d2957c2-bc3c-4399-b508-37a1a7689108","Type":"ContainerDied","Data":"2d6f8b55cb9d1ca99486dd5352b34512a263c62c8f4e3533fc79b7394d55c8e0"} Oct 11 10:42:41.906345 master-0 kubenswrapper[4790]: I1011 10:42:41.906277 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:41.925229 master-0 kubenswrapper[4790]: I1011 10:42:41.925162 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Oct 11 10:42:41.933252 master-0 kubenswrapper[4790]: I1011 10:42:41.933204 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Oct 11 10:42:42.300082 master-0 kubenswrapper[4790]: I1011 10:42:42.299962 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b44d0e-0afa-47db-a215-114b99006a12" path="/var/lib/kubelet/pods/b5b44d0e-0afa-47db-a215-114b99006a12/volumes" Oct 11 10:42:42.889819 master-0 kubenswrapper[4790]: I1011 10:42:42.889691 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"578baefcb284ee2ba6604eeb80ded917","Type":"ContainerStarted","Data":"7ea37e7e34cad88e82c46b5464822e5877d8a824b39f1c0da3fb1426b367c1f3"} Oct 11 10:42:42.889819 master-0 kubenswrapper[4790]: I1011 10:42:42.889807 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"578baefcb284ee2ba6604eeb80ded917","Type":"ContainerStarted","Data":"b8775b7a9049a31b31656c7e341d2282e59cafc06361cc9afd7caf5eb1efcbec"} Oct 11 10:42:43.212405 master-0 kubenswrapper[4790]: I1011 10:42:43.212350 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:43.402978 master-0 kubenswrapper[4790]: I1011 10:42:43.402874 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock\") pod \"3d2957c2-bc3c-4399-b508-37a1a7689108\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " Oct 11 10:42:43.403642 master-0 kubenswrapper[4790]: I1011 10:42:43.403031 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access\") pod \"3d2957c2-bc3c-4399-b508-37a1a7689108\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " Oct 11 10:42:43.403642 master-0 kubenswrapper[4790]: I1011 10:42:43.403073 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir\") pod \"3d2957c2-bc3c-4399-b508-37a1a7689108\" (UID: \"3d2957c2-bc3c-4399-b508-37a1a7689108\") " Oct 11 10:42:43.403642 master-0 kubenswrapper[4790]: I1011 10:42:43.403220 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d2957c2-bc3c-4399-b508-37a1a7689108" (UID: "3d2957c2-bc3c-4399-b508-37a1a7689108"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:43.403642 master-0 kubenswrapper[4790]: I1011 10:42:43.403373 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d2957c2-bc3c-4399-b508-37a1a7689108" (UID: "3d2957c2-bc3c-4399-b508-37a1a7689108"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:42:43.403839 master-0 kubenswrapper[4790]: I1011 10:42:43.403652 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:43.403839 master-0 kubenswrapper[4790]: I1011 10:42:43.403697 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d2957c2-bc3c-4399-b508-37a1a7689108-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:43.407204 master-0 kubenswrapper[4790]: I1011 10:42:43.407109 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d2957c2-bc3c-4399-b508-37a1a7689108" (UID: "3d2957c2-bc3c-4399-b508-37a1a7689108"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:42:43.505751 master-0 kubenswrapper[4790]: I1011 10:42:43.505480 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d2957c2-bc3c-4399-b508-37a1a7689108-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:42:43.901462 master-0 kubenswrapper[4790]: I1011 10:42:43.901388 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"3d2957c2-bc3c-4399-b508-37a1a7689108","Type":"ContainerDied","Data":"93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356"} Oct 11 10:42:43.901462 master-0 kubenswrapper[4790]: I1011 10:42:43.901456 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93e88f1fafa68d434978386be2c8ac316fefc0276f3cdc7d47fedf2f6e452356" Oct 11 10:42:43.901794 master-0 kubenswrapper[4790]: I1011 10:42:43.901613 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Oct 11 10:42:44.639506 master-1 kubenswrapper[4771]: I1011 10:42:44.639436 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll"] Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: E1011 10:42:44.639769 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da01b89-3c1e-4f11-bcc6-65a56654021f" containerName="installer" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: I1011 10:42:44.639792 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da01b89-3c1e-4f11-bcc6-65a56654021f" containerName="installer" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: E1011 10:42:44.639817 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="fix-audit-permissions" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: I1011 10:42:44.639831 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="fix-audit-permissions" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: E1011 10:42:44.639860 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: I1011 10:42:44.639874 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: I1011 10:42:44.640066 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da01b89-3c1e-4f11-bcc6-65a56654021f" containerName="installer" Oct 11 10:42:44.640259 master-1 kubenswrapper[4771]: I1011 10:42:44.640092 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0cf305-ba21-45c0-a092-05214809da68" containerName="oauth-apiserver" Oct 11 10:42:44.641309 master-1 
kubenswrapper[4771]: I1011 10:42:44.641273 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.644982 master-1 kubenswrapper[4771]: I1011 10:42:44.644927 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:42:44.645345 master-1 kubenswrapper[4771]: I1011 10:42:44.645163 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr" Oct 11 10:42:44.646161 master-1 kubenswrapper[4771]: I1011 10:42:44.646123 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:42:44.646269 master-1 kubenswrapper[4771]: I1011 10:42:44.646229 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:42:44.646907 master-1 kubenswrapper[4771]: I1011 10:42:44.646472 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:42:44.646907 master-1 kubenswrapper[4771]: I1011 10:42:44.646558 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:42:44.647179 master-1 kubenswrapper[4771]: I1011 10:42:44.647000 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:42:44.647233 master-1 kubenswrapper[4771]: I1011 10:42:44.647197 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:42:44.647449 master-1 kubenswrapper[4771]: I1011 10:42:44.646848 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:42:44.700045 master-1 kubenswrapper[4771]: I1011 10:42:44.662222 4771 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll"] Oct 11 10:42:44.747043 master-1 kubenswrapper[4771]: I1011 10:42:44.746988 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szs58\" (UniqueName: \"kubernetes.io/projected/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-kube-api-access-szs58\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747043 master-1 kubenswrapper[4771]: I1011 10:42:44.747050 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-client\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747318 master-1 kubenswrapper[4771]: I1011 10:42:44.747094 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-policies\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747318 master-1 kubenswrapper[4771]: I1011 10:42:44.747115 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-encryption-config\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747318 master-1 kubenswrapper[4771]: I1011 10:42:44.747147 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-dir\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747318 master-1 kubenswrapper[4771]: I1011 10:42:44.747267 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-serving-cert\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747521 master-1 kubenswrapper[4771]: I1011 10:42:44.747322 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.747634 master-1 kubenswrapper[4771]: I1011 10:42:44.747574 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.849915 master-1 kubenswrapper[4771]: I1011 10:42:44.849829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-client\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " 
pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850153 master-1 kubenswrapper[4771]: I1011 10:42:44.849924 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-policies\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850153 master-1 kubenswrapper[4771]: I1011 10:42:44.849969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-encryption-config\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850153 master-1 kubenswrapper[4771]: I1011 10:42:44.850040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-dir\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850153 master-1 kubenswrapper[4771]: I1011 10:42:44.850091 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-serving-cert\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850330 master-1 kubenswrapper[4771]: I1011 10:42:44.850147 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-serving-ca\") pod 
\"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850330 master-1 kubenswrapper[4771]: I1011 10:42:44.850203 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850330 master-1 kubenswrapper[4771]: I1011 10:42:44.850216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-dir\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.850330 master-1 kubenswrapper[4771]: I1011 10:42:44.850274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szs58\" (UniqueName: \"kubernetes.io/projected/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-kube-api-access-szs58\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.851521 master-1 kubenswrapper[4771]: I1011 10:42:44.851473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.851628 master-1 kubenswrapper[4771]: I1011 10:42:44.851578 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.851816 master-1 kubenswrapper[4771]: I1011 10:42:44.851758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-audit-policies\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.853791 master-1 kubenswrapper[4771]: I1011 10:42:44.853745 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-etcd-client\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.854747 master-1 kubenswrapper[4771]: I1011 10:42:44.854692 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-serving-cert\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.855094 master-1 kubenswrapper[4771]: I1011 10:42:44.855047 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-encryption-config\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.868253 master-1 kubenswrapper[4771]: I1011 10:42:44.868107 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-szs58\" (UniqueName: \"kubernetes.io/projected/1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d-kube-api-access-szs58\") pod \"apiserver-68f4c55ff4-mmqll\" (UID: \"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:44.908356 master-0 kubenswrapper[4790]: I1011 10:42:44.908264 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"578baefcb284ee2ba6604eeb80ded917","Type":"ContainerStarted","Data":"2be0c8bedda4d85d18238774ea882f88e570b3cd1131b154fffcc12ee22bff8d"} Oct 11 10:42:44.908356 master-0 kubenswrapper[4790]: I1011 10:42:44.908349 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"578baefcb284ee2ba6604eeb80ded917","Type":"ContainerStarted","Data":"3fea5f6bf2cbad1b08cfa0cc896012dcfddaab4355ed829164961df59c1af634"} Oct 11 10:42:45.006912 master-1 kubenswrapper[4771]: I1011 10:42:45.006775 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: I1011 10:42:45.067077 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping 
ok Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:45.067186 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:45.068964 master-0 kubenswrapper[4790]: I1011 10:42:45.067205 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:45.508250 master-1 kubenswrapper[4771]: I1011 10:42:45.508176 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll"] Oct 11 10:42:45.518822 master-1 kubenswrapper[4771]: W1011 10:42:45.514848 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a8be6e8_cddf_46d6_a1e4_f0bdc0ae7f6d.slice/crio-97084740e6a91e89121a51817a01bece0e852c2fe34a9c54110a075f2cc1678b WatchSource:0}: Error finding container 97084740e6a91e89121a51817a01bece0e852c2fe34a9c54110a075f2cc1678b: Status 404 returned error can't find the container with id 97084740e6a91e89121a51817a01bece0e852c2fe34a9c54110a075f2cc1678b Oct 11 10:42:45.752955 master-1 kubenswrapper[4771]: I1011 10:42:45.751375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" event={"ID":"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d","Type":"ContainerStarted","Data":"97084740e6a91e89121a51817a01bece0e852c2fe34a9c54110a075f2cc1678b"} Oct 11 10:42:45.919430 master-0 kubenswrapper[4790]: I1011 10:42:45.919334 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"578baefcb284ee2ba6604eeb80ded917","Type":"ContainerStarted","Data":"24ac76c410545bd348677bcc077f677b7d9cf32627171ba56638faa2777c4159"} Oct 11 10:42:45.949280 master-0 kubenswrapper[4790]: I1011 
10:42:45.949181 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=4.949155214 podStartE2EDuration="4.949155214s" podCreationTimestamp="2025-10-11 10:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:45.94699393 +0000 UTC m=+242.501454262" watchObservedRunningTime="2025-10-11 10:42:45.949155214 +0000 UTC m=+242.503615536" Oct 11 10:42:46.762442 master-1 kubenswrapper[4771]: I1011 10:42:46.762335 4771 generic.go:334] "Generic (PLEG): container finished" podID="1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d" containerID="20de4b85fbe0e93509b158b1b1095dc9c627cebb3415ba32b94fcbca5b12e499" exitCode=0 Oct 11 10:42:46.762442 master-1 kubenswrapper[4771]: I1011 10:42:46.762435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" event={"ID":"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d","Type":"ContainerDied","Data":"20de4b85fbe0e93509b158b1b1095dc9c627cebb3415ba32b94fcbca5b12e499"} Oct 11 10:42:47.771811 master-1 kubenswrapper[4771]: I1011 10:42:47.771751 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" event={"ID":"1a8be6e8-cddf-46d6-a1e4-f0bdc0ae7f6d","Type":"ContainerStarted","Data":"418a6b1668322b851f02d6a982854c54b624c024df481ef6044849a381c3452a"} Oct 11 10:42:47.802946 master-1 kubenswrapper[4771]: I1011 10:42:47.802879 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" podStartSLOduration=65.802860431 podStartE2EDuration="1m5.802860431s" podCreationTimestamp="2025-10-11 10:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:47.800333029 +0000 UTC 
m=+999.774559460" watchObservedRunningTime="2025-10-11 10:42:47.802860431 +0000 UTC m=+999.777086872" Oct 11 10:42:47.945524 master-0 kubenswrapper[4790]: I1011 10:42:47.945398 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-0"] Oct 11 10:42:47.946548 master-0 kubenswrapper[4790]: E1011 10:42:47.945674 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2957c2-bc3c-4399-b508-37a1a7689108" containerName="installer" Oct 11 10:42:47.946548 master-0 kubenswrapper[4790]: I1011 10:42:47.945738 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2957c2-bc3c-4399-b508-37a1a7689108" containerName="installer" Oct 11 10:42:47.946548 master-0 kubenswrapper[4790]: I1011 10:42:47.945887 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2957c2-bc3c-4399-b508-37a1a7689108" containerName="installer" Oct 11 10:42:47.946834 master-0 kubenswrapper[4790]: I1011 10:42:47.946666 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:47.949871 master-0 kubenswrapper[4790]: I1011 10:42:47.949745 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:42:47.949871 master-0 kubenswrapper[4790]: I1011 10:42:47.949849 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 11 10:42:47.950324 master-0 kubenswrapper[4790]: I1011 10:42:47.949849 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"default-dockercfg-rp764" Oct 11 10:42:47.959878 master-0 kubenswrapper[4790]: I1011 10:42:47.959341 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-0"] Oct 11 10:42:47.971207 master-0 kubenswrapper[4790]: I1011 10:42:47.971127 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5c6v\" (UniqueName: \"kubernetes.io/projected/8f0d8196-2e0b-479b-ba9a-3e65cb92e046-kube-api-access-v5c6v\") pod \"kube-controller-manager-guard-master-0\" (UID: \"8f0d8196-2e0b-479b-ba9a-3e65cb92e046\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:48.072751 master-0 kubenswrapper[4790]: I1011 10:42:48.072625 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5c6v\" (UniqueName: \"kubernetes.io/projected/8f0d8196-2e0b-479b-ba9a-3e65cb92e046-kube-api-access-v5c6v\") pod \"kube-controller-manager-guard-master-0\" (UID: \"8f0d8196-2e0b-479b-ba9a-3e65cb92e046\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:48.098353 master-0 kubenswrapper[4790]: I1011 10:42:48.098207 4790 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-v5c6v\" (UniqueName: \"kubernetes.io/projected/8f0d8196-2e0b-479b-ba9a-3e65cb92e046-kube-api-access-v5c6v\") pod \"kube-controller-manager-guard-master-0\" (UID: \"8f0d8196-2e0b-479b-ba9a-3e65cb92e046\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:48.265296 master-0 kubenswrapper[4790]: I1011 10:42:48.265073 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:48.874586 master-0 kubenswrapper[4790]: I1011 10:42:48.874486 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-0"] Oct 11 10:42:48.944404 master-0 kubenswrapper[4790]: I1011 10:42:48.944340 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" event={"ID":"8f0d8196-2e0b-479b-ba9a-3e65cb92e046","Type":"ContainerStarted","Data":"2bc3521fea3b11a800c4cf500fec36b79a21e18921be0bc8a392dace7631f227"} Oct 11 10:42:49.955471 master-0 kubenswrapper[4790]: I1011 10:42:49.955337 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" event={"ID":"8f0d8196-2e0b-479b-ba9a-3e65cb92e046","Type":"ContainerStarted","Data":"26eb134a4d3360d9de34a2d04a9c99747fb6edc563c05fdf173883431831033b"} Oct 11 10:42:49.956613 master-0 kubenswrapper[4790]: I1011 10:42:49.955898 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:49.962937 master-0 kubenswrapper[4790]: I1011 10:42:49.962854 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" Oct 11 10:42:49.982440 master-0 kubenswrapper[4790]: I1011 
10:42:49.982240 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-0" podStartSLOduration=2.982198491 podStartE2EDuration="2.982198491s" podCreationTimestamp="2025-10-11 10:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:42:49.979919802 +0000 UTC m=+246.534380184" watchObservedRunningTime="2025-10-11 10:42:49.982198491 +0000 UTC m=+246.536658813" Oct 11 10:42:50.007716 master-1 kubenswrapper[4771]: I1011 10:42:50.007571 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:50.007716 master-1 kubenswrapper[4771]: I1011 10:42:50.007721 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:50.017769 master-1 kubenswrapper[4771]: I1011 10:42:50.017693 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: I1011 10:42:50.063692 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 
11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:42:50.063788 master-0 kubenswrapper[4790]: I1011 10:42:50.063791 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:50.807872 master-1 kubenswrapper[4771]: I1011 10:42:50.807792 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll" Oct 11 10:42:50.899992 master-2 kubenswrapper[4776]: I1011 10:42:50.899877 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 
10:42:50.900714 master-2 kubenswrapper[4776]: I1011 10:42:50.900114 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" containerID="cri-o://0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" gracePeriod=120 Oct 11 10:42:51.907191 master-0 kubenswrapper[4790]: I1011 10:42:51.907111 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:51.908339 master-0 kubenswrapper[4790]: I1011 10:42:51.907998 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:51.908339 master-0 kubenswrapper[4790]: I1011 10:42:51.908068 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:51.908339 master-0 kubenswrapper[4790]: I1011 10:42:51.908099 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:51.913354 master-0 kubenswrapper[4790]: I1011 10:42:51.913281 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:51.915986 master-0 kubenswrapper[4790]: I1011 10:42:51.915931 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:52.979754 master-0 kubenswrapper[4790]: I1011 10:42:52.979613 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: I1011 
10:42:55.067694 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:42:55.067773 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:42:55.067773 master-0 
kubenswrapper[4790]: readyz check failed Oct 11 10:42:55.068854 master-0 kubenswrapper[4790]: I1011 10:42:55.068817 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: I1011 10:42:55.890397 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:42:55.891513 master-2 kubenswrapper[4776]: I1011 10:42:55.890487 4776 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:42:58.347909 master-0 kubenswrapper[4790]: I1011 10:42:58.347697 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-0"] Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: I1011 10:43:00.072444 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:43:00.072577 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:43:00.074301 master-0 kubenswrapper[4790]: I1011 10:43:00.072584 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: I1011 10:43:00.893579 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: I1011 10:43:00.893644 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:01.921561 master-0 kubenswrapper[4790]: I1011 10:43:01.921483 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Oct 11 10:43:05.025663 master-1 kubenswrapper[4771]: I1011 10:43:05.025563 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.025700 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: E1011 10:43:05.026048 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-ensure-env-vars" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026071 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-ensure-env-vars" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: E1011 10:43:05.026090 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 11 10:43:05.026393 master-1 
kubenswrapper[4771]: I1011 10:43:05.026103 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: E1011 10:43:05.026120 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-resources-copy" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026135 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-resources-copy" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: E1011 10:43:05.026154 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026166 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: E1011 10:43:05.026181 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026145 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" containerID="cri-o://2f39d1ed6551318e8799ea55ecdfbfe51ea2b9b7b26411631664f953b1d0e296" gracePeriod=30 Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026253 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" containerID="cri-o://1b08bbe8a016cc9703a454b83b5ccaac8367e55a0f3e2612f07c89255c5b066b" gracePeriod=30 Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026197 4771 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 11 10:43:05.026393 master-1 kubenswrapper[4771]: I1011 10:43:05.026382 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" containerID="cri-o://49bf7adabb62db980d637017833ab23f35546844d31309e50b509a3be2303a67" gracePeriod=30 Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026324 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" containerID="cri-o://84bbf7ab3fb66f6d01d7500d037317a4cb49a3eae4199b8937858e7e953c7fd3" gracePeriod=30 Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: E1011 10:43:05.026449 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="setup" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026479 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="setup" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: E1011 10:43:05.026514 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026522 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: E1011 10:43:05.026540 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026547 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 11 
10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026279 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" containerID="cri-o://ecbb0613c992785c9403e057fc0c874ad563e770ca35f25a2b4b2f7341f1c10c" gracePeriod=30 Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026871 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026889 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026898 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026907 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 11 10:43:05.026929 master-1 kubenswrapper[4771]: I1011 10:43:05.026917 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 11 10:43:05.063928 master-1 kubenswrapper[4771]: I1011 10:43:05.063889 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.064109 master-1 kubenswrapper[4771]: I1011 10:43:05.064091 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.064231 master-1 kubenswrapper[4771]: I1011 10:43:05.064214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.064237 master-0 kubenswrapper[4790]: I1011 10:43:05.064133 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:05.064347 master-1 kubenswrapper[4771]: I1011 10:43:05.064329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.064503 master-1 kubenswrapper[4771]: I1011 10:43:05.064485 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.064617 master-1 kubenswrapper[4771]: I1011 10:43:05.064600 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod 
\"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.065155 master-0 kubenswrapper[4790]: I1011 10:43:05.064522 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:05.166636 master-1 kubenswrapper[4771]: I1011 10:43:05.166524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.166636 master-1 kubenswrapper[4771]: I1011 10:43:05.166623 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.166897 master-1 kubenswrapper[4771]: I1011 10:43:05.166655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.166897 master-1 kubenswrapper[4771]: I1011 10:43:05.166691 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 
10:43:05.166897 master-1 kubenswrapper[4771]: I1011 10:43:05.166714 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.166897 master-1 kubenswrapper[4771]: I1011 10:43:05.166779 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.166897 master-1 kubenswrapper[4771]: I1011 10:43:05.166882 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.167197 master-1 kubenswrapper[4771]: I1011 10:43:05.166930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.167197 master-1 kubenswrapper[4771]: I1011 10:43:05.166962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.167197 master-1 kubenswrapper[4771]: I1011 10:43:05.166984 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.167197 master-1 kubenswrapper[4771]: I1011 10:43:05.167007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.167197 master-1 kubenswrapper[4771]: I1011 10:43:05.167027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 11 10:43:05.507165 master-1 kubenswrapper[4771]: I1011 10:43:05.507039 4771 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:05.507165 master-1 kubenswrapper[4771]: I1011 10:43:05.507133 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: I1011 10:43:05.889880 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: 
[+]etcd excluded: ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:05.891153 master-2 kubenswrapper[4776]: I1011 10:43:05.889957 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:05.891153 master-2 kubenswrapper[4776]: I1011 10:43:05.890048 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:43:05.955332 master-1 kubenswrapper[4771]: I1011 10:43:05.955256 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log" Oct 11 10:43:05.956999 master-1 kubenswrapper[4771]: I1011 10:43:05.956947 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log" Oct 11 10:43:05.960097 master-1 kubenswrapper[4771]: I1011 10:43:05.960042 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="2f39d1ed6551318e8799ea55ecdfbfe51ea2b9b7b26411631664f953b1d0e296" exitCode=2 Oct 11 10:43:05.960097 master-1 kubenswrapper[4771]: I1011 10:43:05.960091 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="84bbf7ab3fb66f6d01d7500d037317a4cb49a3eae4199b8937858e7e953c7fd3" exitCode=0 Oct 11 10:43:05.960286 master-1 kubenswrapper[4771]: I1011 10:43:05.960111 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="ecbb0613c992785c9403e057fc0c874ad563e770ca35f25a2b4b2f7341f1c10c" exitCode=2 Oct 11 10:43:05.962198 master-1 kubenswrapper[4771]: I1011 10:43:05.962139 4771 generic.go:334] "Generic (PLEG): container finished" podID="8d7775e5-5c08-4eef-84bf-8995a11eb190" containerID="ea6046ea85f7a0fce021fb5f4d0cfe1454a1393bcf7a0d41b1a58c6b303f5dca" exitCode=0 Oct 11 10:43:05.962320 master-1 kubenswrapper[4771]: I1011 10:43:05.962200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"8d7775e5-5c08-4eef-84bf-8995a11eb190","Type":"ContainerDied","Data":"ea6046ea85f7a0fce021fb5f4d0cfe1454a1393bcf7a0d41b1a58c6b303f5dca"} Oct 11 10:43:05.973479 master-1 kubenswrapper[4771]: I1011 10:43:05.973341 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" Oct 11 10:43:07.444728 master-1 kubenswrapper[4771]: I1011 10:43:07.444663 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-1" Oct 11 10:43:07.600190 master-1 kubenswrapper[4771]: I1011 10:43:07.599908 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock\") pod \"8d7775e5-5c08-4eef-84bf-8995a11eb190\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " Oct 11 10:43:07.600190 master-1 kubenswrapper[4771]: I1011 10:43:07.599995 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir\") pod \"8d7775e5-5c08-4eef-84bf-8995a11eb190\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " Oct 11 10:43:07.600190 master-1 kubenswrapper[4771]: I1011 10:43:07.600062 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock" (OuterVolumeSpecName: "var-lock") pod "8d7775e5-5c08-4eef-84bf-8995a11eb190" (UID: "8d7775e5-5c08-4eef-84bf-8995a11eb190"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:07.600190 master-1 kubenswrapper[4771]: I1011 10:43:07.600135 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d7775e5-5c08-4eef-84bf-8995a11eb190" (UID: "8d7775e5-5c08-4eef-84bf-8995a11eb190"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:07.600660 master-1 kubenswrapper[4771]: I1011 10:43:07.600216 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access\") pod \"8d7775e5-5c08-4eef-84bf-8995a11eb190\" (UID: \"8d7775e5-5c08-4eef-84bf-8995a11eb190\") " Oct 11 10:43:07.600660 master-1 kubenswrapper[4771]: I1011 10:43:07.600599 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:07.600660 master-1 kubenswrapper[4771]: I1011 10:43:07.600621 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d7775e5-5c08-4eef-84bf-8995a11eb190-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:07.605548 master-1 kubenswrapper[4771]: I1011 10:43:07.605497 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d7775e5-5c08-4eef-84bf-8995a11eb190" (UID: "8d7775e5-5c08-4eef-84bf-8995a11eb190"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:43:07.701724 master-1 kubenswrapper[4771]: I1011 10:43:07.701595 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7775e5-5c08-4eef-84bf-8995a11eb190-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:07.980469 master-1 kubenswrapper[4771]: I1011 10:43:07.980225 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"8d7775e5-5c08-4eef-84bf-8995a11eb190","Type":"ContainerDied","Data":"16353a00dd4456281d0e795316b26f0bbce37de72b33a0538c00a1e5b2391471"} Oct 11 10:43:07.980469 master-1 kubenswrapper[4771]: I1011 10:43:07.980322 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16353a00dd4456281d0e795316b26f0bbce37de72b33a0538c00a1e5b2391471" Oct 11 10:43:07.980469 master-1 kubenswrapper[4771]: I1011 10:43:07.980386 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-1" Oct 11 10:43:09.629231 master-1 kubenswrapper[4771]: I1011 10:43:09.629082 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:09.630173 master-1 kubenswrapper[4771]: I1011 10:43:09.629241 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:10.059212 master-0 kubenswrapper[4790]: I1011 10:43:10.059090 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:10.059212 master-0 kubenswrapper[4790]: I1011 10:43:10.059197 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:10.752733 master-1 kubenswrapper[4771]: I1011 10:43:10.752588 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 11 10:43:10.754027 master-1 kubenswrapper[4771]: E1011 10:43:10.753928 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7775e5-5c08-4eef-84bf-8995a11eb190" containerName="installer" Oct 
11 10:43:10.754027 master-1 kubenswrapper[4771]: I1011 10:43:10.754008 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7775e5-5c08-4eef-84bf-8995a11eb190" containerName="installer" Oct 11 10:43:10.754249 master-1 kubenswrapper[4771]: I1011 10:43:10.754171 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7775e5-5c08-4eef-84bf-8995a11eb190" containerName="installer" Oct 11 10:43:10.755090 master-1 kubenswrapper[4771]: I1011 10:43:10.755034 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:10.759896 master-1 kubenswrapper[4771]: I1011 10:43:10.759794 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:43:10.766123 master-1 kubenswrapper[4771]: I1011 10:43:10.766052 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: I1011 10:43:10.894846 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: 
[+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: I1011 10:43:10.894924 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:10.958070 master-1 kubenswrapper[4771]: I1011 10:43:10.957946 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:10.958070 master-1 kubenswrapper[4771]: I1011 10:43:10.958065 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:10.958486 master-1 kubenswrapper[4771]: I1011 10:43:10.958102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.059668 master-1 kubenswrapper[4771]: I1011 10:43:11.059559 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.059914 master-1 kubenswrapper[4771]: I1011 10:43:11.059716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.059914 master-1 kubenswrapper[4771]: I1011 10:43:11.059754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.060154 master-1 kubenswrapper[4771]: I1011 10:43:11.059919 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.060154 master-1 kubenswrapper[4771]: I1011 10:43:11.059928 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.097808 master-1 kubenswrapper[4771]: I1011 10:43:11.097716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access\") pod \"installer-6-master-1\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.386404 master-1 kubenswrapper[4771]: I1011 10:43:11.386193 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:11.857424 master-1 kubenswrapper[4771]: I1011 10:43:11.857340 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 11 10:43:11.866929 master-1 kubenswrapper[4771]: W1011 10:43:11.866793 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod78689bdc_0258_45eb_8e5b_253911c61c79.slice/crio-56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090 WatchSource:0}: Error finding container 56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090: Status 404 returned error can't find the container with id 56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090 Oct 11 10:43:12.023692 master-1 kubenswrapper[4771]: I1011 10:43:12.023593 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"78689bdc-0258-45eb-8e5b-253911c61c79","Type":"ContainerStarted","Data":"56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090"} Oct 11 10:43:13.034666 master-1 kubenswrapper[4771]: I1011 10:43:13.034488 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"78689bdc-0258-45eb-8e5b-253911c61c79","Type":"ContainerStarted","Data":"2d907b9a8cd0470d88178cfea01b0abf30291128bc9c158e361b094caee83ec4"} Oct 11 10:43:13.059468 master-1 kubenswrapper[4771]: I1011 10:43:13.059292 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-1" podStartSLOduration=3.059263036 podStartE2EDuration="3.059263036s" podCreationTimestamp="2025-10-11 10:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:13.05730207 +0000 UTC m=+1025.031528601" watchObservedRunningTime="2025-10-11 10:43:13.059263036 +0000 UTC m=+1025.033489517" Oct 11 10:43:14.628300 master-1 kubenswrapper[4771]: I1011 10:43:14.628229 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:14.628971 master-1 kubenswrapper[4771]: I1011 10:43:14.628328 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:15.059569 master-0 kubenswrapper[4790]: I1011 10:43:15.059415 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:15.059569 master-0 kubenswrapper[4790]: I1011 10:43:15.059551 
4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: I1011 10:43:15.890902 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: I1011 10:43:15.890989 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:17.769359 master-2 kubenswrapper[4776]: I1011 10:43:17.769301 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:43:18.933895 master-0 kubenswrapper[4790]: I1011 10:43:18.933791 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Oct 11 10:43:18.936099 master-0 kubenswrapper[4790]: I1011 10:43:18.936042 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:18.981925 master-0 kubenswrapper[4790]: I1011 10:43:18.981810 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Oct 11 10:43:18.984672 master-0 kubenswrapper[4790]: I1011 10:43:18.984582 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:18.984848 master-0 kubenswrapper[4790]: I1011 10:43:18.984786 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:18.984940 master-0 kubenswrapper[4790]: I1011 10:43:18.984845 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.086657 master-0 kubenswrapper[4790]: I1011 10:43:19.086519 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.086657 master-0 kubenswrapper[4790]: I1011 10:43:19.086604 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.086657 master-0 kubenswrapper[4790]: I1011 10:43:19.086652 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.087238 master-0 kubenswrapper[4790]: I1011 10:43:19.086802 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.087238 master-0 kubenswrapper[4790]: I1011 10:43:19.086833 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.087238 master-0 kubenswrapper[4790]: I1011 10:43:19.086834 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/08bb0ac7b01a53ae0dcb90ce8b66efa1-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"08bb0ac7b01a53ae0dcb90ce8b66efa1\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.127879 master-0 kubenswrapper[4790]: I1011 10:43:19.127754 4790 generic.go:334] "Generic (PLEG): container finished" podID="e7063ccc-c150-41d0-9285-8a8ca00aa417" containerID="194353fb7acfeb121812b2d62c7722c179dced595ba3e814ace7d8070862578b" exitCode=0 Oct 11 10:43:19.127879 master-0 kubenswrapper[4790]: I1011 10:43:19.127838 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e7063ccc-c150-41d0-9285-8a8ca00aa417","Type":"ContainerDied","Data":"194353fb7acfeb121812b2d62c7722c179dced595ba3e814ace7d8070862578b"} Oct 11 10:43:19.275640 master-0 kubenswrapper[4790]: I1011 10:43:19.275446 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:19.628744 master-1 kubenswrapper[4771]: I1011 10:43:19.628595 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:19.628744 master-1 kubenswrapper[4771]: I1011 10:43:19.628730 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:19.630247 master-1 kubenswrapper[4771]: I1011 10:43:19.628866 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:43:19.630247 master-1 kubenswrapper[4771]: I1011 10:43:19.629830 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:19.630247 master-1 kubenswrapper[4771]: I1011 10:43:19.629878 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:20.059744 master-0 kubenswrapper[4790]: I1011 10:43:20.059567 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial 
tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:20.060882 master-0 kubenswrapper[4790]: I1011 10:43:20.059748 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:20.137389 master-0 kubenswrapper[4790]: I1011 10:43:20.137286 4790 generic.go:334] "Generic (PLEG): container finished" podID="08bb0ac7b01a53ae0dcb90ce8b66efa1" containerID="f212a76b747e114411a9d00eac6144e357bffebadcfd5266386a67eb7633032b" exitCode=0 Oct 11 10:43:20.137389 master-0 kubenswrapper[4790]: I1011 10:43:20.137366 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerDied","Data":"f212a76b747e114411a9d00eac6144e357bffebadcfd5266386a67eb7633032b"} Oct 11 10:43:20.137781 master-0 kubenswrapper[4790]: I1011 10:43:20.137423 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"165ad903b2032716ae9b5ae764f82587859a8868b43799d916b6208f93295787"} Oct 11 10:43:20.521966 master-0 kubenswrapper[4790]: I1011 10:43:20.521906 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Oct 11 10:43:20.606869 master-0 kubenswrapper[4790]: I1011 10:43:20.606820 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access\") pod \"e7063ccc-c150-41d0-9285-8a8ca00aa417\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " Oct 11 10:43:20.607021 master-0 kubenswrapper[4790]: I1011 10:43:20.606897 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir\") pod \"e7063ccc-c150-41d0-9285-8a8ca00aa417\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " Oct 11 10:43:20.607021 master-0 kubenswrapper[4790]: I1011 10:43:20.606935 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock\") pod \"e7063ccc-c150-41d0-9285-8a8ca00aa417\" (UID: \"e7063ccc-c150-41d0-9285-8a8ca00aa417\") " Oct 11 10:43:20.607234 master-0 kubenswrapper[4790]: I1011 10:43:20.607147 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7063ccc-c150-41d0-9285-8a8ca00aa417" (UID: "e7063ccc-c150-41d0-9285-8a8ca00aa417"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:20.607356 master-0 kubenswrapper[4790]: I1011 10:43:20.607195 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock" (OuterVolumeSpecName: "var-lock") pod "e7063ccc-c150-41d0-9285-8a8ca00aa417" (UID: "e7063ccc-c150-41d0-9285-8a8ca00aa417"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:20.611137 master-0 kubenswrapper[4790]: I1011 10:43:20.611058 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7063ccc-c150-41d0-9285-8a8ca00aa417" (UID: "e7063ccc-c150-41d0-9285-8a8ca00aa417"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:43:20.709234 master-0 kubenswrapper[4790]: I1011 10:43:20.709116 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7063ccc-c150-41d0-9285-8a8ca00aa417-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:43:20.709234 master-0 kubenswrapper[4790]: I1011 10:43:20.709176 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:43:20.709234 master-0 kubenswrapper[4790]: I1011 10:43:20.709190 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7063ccc-c150-41d0-9285-8a8ca00aa417-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: I1011 10:43:20.891763 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 
11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:20.892897 master-2 kubenswrapper[4776]: I1011 10:43:20.891879 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:21.150139 master-0 kubenswrapper[4790]: I1011 10:43:21.150071 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e7063ccc-c150-41d0-9285-8a8ca00aa417","Type":"ContainerDied","Data":"df3c31d752a92f830ac660f11dc711746fedff638b520cb70b6e043fe897e4d1"} Oct 11 10:43:21.150576 master-0 kubenswrapper[4790]: I1011 10:43:21.150141 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Oct 11 10:43:21.150576 master-0 kubenswrapper[4790]: I1011 10:43:21.150149 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3c31d752a92f830ac660f11dc711746fedff638b520cb70b6e043fe897e4d1" Oct 11 10:43:21.160695 master-0 kubenswrapper[4790]: I1011 10:43:21.159935 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"ef053c2ec24777c6f73bf24c58ce6d81648a2919f61739a5cde9038627df1879"} Oct 11 10:43:21.160695 master-0 kubenswrapper[4790]: I1011 10:43:21.159992 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"489c4661078f82b6a0b68d83fed1c53684d72b0df178edbd87252ff524a895b3"} Oct 11 10:43:21.160695 master-0 kubenswrapper[4790]: I1011 10:43:21.160002 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"df2872816af4c38af6c470c634071a8b5944f8a31faf49fe4569a9da208e13f7"} Oct 11 10:43:21.160695 master-0 kubenswrapper[4790]: I1011 10:43:21.160012 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"b09ec9f78f9897a605241691754a162993368392c8020db0290ac43a9862d1f3"} Oct 11 10:43:22.167498 master-0 kubenswrapper[4790]: I1011 10:43:22.167446 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"08bb0ac7b01a53ae0dcb90ce8b66efa1","Type":"ContainerStarted","Data":"67efb8bfe5a6f1b758ca157bfcef80c59b85907f88572dd02d40afe6e9896027"} Oct 11 10:43:22.168356 
master-0 kubenswrapper[4790]: I1011 10:43:22.168340 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:22.196639 master-0 kubenswrapper[4790]: I1011 10:43:22.196524 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=4.196501143 podStartE2EDuration="4.196501143s" podCreationTimestamp="2025-10-11 10:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:22.192667417 +0000 UTC m=+278.747127709" watchObservedRunningTime="2025-10-11 10:43:22.196501143 +0000 UTC m=+278.750961435" Oct 11 10:43:24.276360 master-0 kubenswrapper[4790]: I1011 10:43:24.276283 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:24.276360 master-0 kubenswrapper[4790]: I1011 10:43:24.276349 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:24.284871 master-0 kubenswrapper[4790]: I1011 10:43:24.284810 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:24.628883 master-1 kubenswrapper[4771]: I1011 10:43:24.628773 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:24.629806 master-1 kubenswrapper[4771]: I1011 10:43:24.628911 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get 
\"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:25.058895 master-0 kubenswrapper[4790]: I1011 10:43:25.058827 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:25.059174 master-0 kubenswrapper[4790]: I1011 10:43:25.058894 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:25.192553 master-0 kubenswrapper[4790]: I1011 10:43:25.192430 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: I1011 10:43:25.891288 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: 
[+]poststarthook/max-in-flight-filter ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:25.892364 master-2 kubenswrapper[4776]: I1011 10:43:25.892335 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:29.629024 master-1 kubenswrapper[4771]: I1011 10:43:29.628934 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:29.629842 master-1 kubenswrapper[4771]: I1011 10:43:29.629020 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:29.990273 master-0 kubenswrapper[4790]: I1011 10:43:29.990057 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-0"] Oct 11 10:43:29.990892 master-0 kubenswrapper[4790]: E1011 10:43:29.990366 4790 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e7063ccc-c150-41d0-9285-8a8ca00aa417" containerName="installer" Oct 11 10:43:29.990892 master-0 kubenswrapper[4790]: I1011 10:43:29.990390 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7063ccc-c150-41d0-9285-8a8ca00aa417" containerName="installer" Oct 11 10:43:29.990892 master-0 kubenswrapper[4790]: I1011 10:43:29.990532 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7063ccc-c150-41d0-9285-8a8ca00aa417" containerName="installer" Oct 11 10:43:29.991289 master-0 kubenswrapper[4790]: I1011 10:43:29.991254 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:29.993797 master-0 kubenswrapper[4790]: I1011 10:43:29.993745 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"default-dockercfg-hlr4b" Oct 11 10:43:29.995182 master-0 kubenswrapper[4790]: I1011 10:43:29.994949 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 11 10:43:29.995556 master-0 kubenswrapper[4790]: I1011 10:43:29.995465 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:43:30.014865 master-0 kubenswrapper[4790]: I1011 10:43:30.009521 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-0"] Oct 11 10:43:30.055919 master-0 kubenswrapper[4790]: I1011 10:43:30.055833 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vct\" (UniqueName: \"kubernetes.io/projected/acfb978c-45a9-4081-9d1e-3751eea1b483-kube-api-access-v2vct\") pod \"kube-apiserver-guard-master-0\" (UID: \"acfb978c-45a9-4081-9d1e-3751eea1b483\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:30.059293 master-0 kubenswrapper[4790]: I1011 
10:43:30.059195 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:30.059422 master-0 kubenswrapper[4790]: I1011 10:43:30.059341 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:30.157083 master-0 kubenswrapper[4790]: I1011 10:43:30.156970 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2vct\" (UniqueName: \"kubernetes.io/projected/acfb978c-45a9-4081-9d1e-3751eea1b483-kube-api-access-v2vct\") pod \"kube-apiserver-guard-master-0\" (UID: \"acfb978c-45a9-4081-9d1e-3751eea1b483\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:30.183843 master-0 kubenswrapper[4790]: I1011 10:43:30.183751 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2vct\" (UniqueName: \"kubernetes.io/projected/acfb978c-45a9-4081-9d1e-3751eea1b483-kube-api-access-v2vct\") pod \"kube-apiserver-guard-master-0\" (UID: \"acfb978c-45a9-4081-9d1e-3751eea1b483\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:30.322297 master-0 kubenswrapper[4790]: I1011 10:43:30.322175 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:30.811428 master-0 kubenswrapper[4790]: I1011 10:43:30.811149 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-0"] Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: I1011 10:43:30.892652 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:30.894238 master-2 kubenswrapper[4776]: I1011 10:43:30.892846 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" 
podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:31.220137 master-0 kubenswrapper[4790]: I1011 10:43:31.220043 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" event={"ID":"acfb978c-45a9-4081-9d1e-3751eea1b483","Type":"ContainerStarted","Data":"44f0928c4192874c201c618b86ce60c31a0620f8ec403ce90ee4b0b25f138ba0"} Oct 11 10:43:31.220137 master-0 kubenswrapper[4790]: I1011 10:43:31.220114 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" event={"ID":"acfb978c-45a9-4081-9d1e-3751eea1b483","Type":"ContainerStarted","Data":"ac6ea93523ef5806f84360870a8fcd92a3cfc634675b21ddfa7edd0303dd1afc"} Oct 11 10:43:31.221407 master-0 kubenswrapper[4790]: I1011 10:43:31.220512 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:31.228829 master-0 kubenswrapper[4790]: I1011 10:43:31.228702 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" Oct 11 10:43:31.248526 master-0 kubenswrapper[4790]: I1011 10:43:31.248363 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-guard-master-0" podStartSLOduration=2.248318007 podStartE2EDuration="2.248318007s" podCreationTimestamp="2025-10-11 10:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:31.241653061 +0000 UTC m=+287.796113433" watchObservedRunningTime="2025-10-11 10:43:31.248318007 +0000 UTC m=+287.802778339" Oct 11 10:43:34.628816 master-1 kubenswrapper[4771]: I1011 10:43:34.628699 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard 
namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 11 10:43:34.628816 master-1 kubenswrapper[4771]: I1011 10:43:34.628786 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 11 10:43:35.059329 master-0 kubenswrapper[4790]: I1011 10:43:35.059190 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body=
Oct 11 10:43:35.059329 master-0 kubenswrapper[4790]: I1011 10:43:35.059318 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused"
Oct 11 10:43:35.203222 master-1 kubenswrapper[4771]: I1011 10:43:35.203152 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 11 10:43:35.204161 master-1 kubenswrapper[4771]: I1011 10:43:35.204124 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 11 10:43:35.204856 master-1 kubenswrapper[4771]: I1011 10:43:35.204825 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 11 10:43:35.205445 master-1 kubenswrapper[4771]: I1011 10:43:35.205385 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 11 10:43:35.206822 master-1 kubenswrapper[4771]: I1011 10:43:35.206770 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="1b08bbe8a016cc9703a454b83b5ccaac8367e55a0f3e2612f07c89255c5b066b" exitCode=137
Oct 11 10:43:35.206822 master-1 kubenswrapper[4771]: I1011 10:43:35.206802 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="49bf7adabb62db980d637017833ab23f35546844d31309e50b509a3be2303a67" exitCode=137
Oct 11 10:43:35.616022 master-1 kubenswrapper[4771]: I1011 10:43:35.615960 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 11 10:43:35.617259 master-1 kubenswrapper[4771]: I1011 10:43:35.617229 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 11 10:43:35.618970 master-1 kubenswrapper[4771]: I1011 10:43:35.618936 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 11 10:43:35.619481 master-1 kubenswrapper[4771]: I1011 10:43:35.619453 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 11 10:43:35.620764 master-1 kubenswrapper[4771]: I1011 10:43:35.620733 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 11 10:43:35.625599 master-1 kubenswrapper[4771]: I1011 10:43:35.625531 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014"
Oct 11 10:43:35.681798 master-1 kubenswrapper[4771]: I1011 10:43:35.681745 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681808 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681830 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681851 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681887 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681884 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir" (OuterVolumeSpecName: "log-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681937 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681914 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681953 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir" (OuterVolumeSpecName: "data-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.682014 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.682056 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682129 master-1 kubenswrapper[4771]: I1011 10:43:35.681974 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:35.682654 master-1 kubenswrapper[4771]: I1011 10:43:35.682606 4771 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.682693 master-1 kubenswrapper[4771]: I1011 10:43:35.682658 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.682693 master-1 kubenswrapper[4771]: I1011 10:43:35.682686 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.682755 master-1 kubenswrapper[4771]: I1011 10:43:35.682717 4771 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.682755 master-1 kubenswrapper[4771]: I1011 10:43:35.682745 4771 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.682816 master-1 kubenswrapper[4771]: I1011 10:43:35.682768 4771 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") on node \"master-1\" DevicePath \"\""
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: I1011 10:43:35.890855 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: I1011 10:43:35.890917 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:43:36.219164 master-1 kubenswrapper[4771]: I1011 10:43:36.219065 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 11 10:43:36.221214 master-1 kubenswrapper[4771]: I1011 10:43:36.221119 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 11 10:43:36.222498 master-1 kubenswrapper[4771]: I1011 10:43:36.222445 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 11 10:43:36.223311 master-1 kubenswrapper[4771]: I1011 10:43:36.223250 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 11 10:43:36.225776 master-1 kubenswrapper[4771]: I1011 10:43:36.225723 4771 scope.go:117] "RemoveContainer" containerID="2f39d1ed6551318e8799ea55ecdfbfe51ea2b9b7b26411631664f953b1d0e296"
Oct 11 10:43:36.225984 master-1 kubenswrapper[4771]: I1011 10:43:36.225847 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 11 10:43:36.234671 master-1 kubenswrapper[4771]: I1011 10:43:36.234479 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014"
Oct 11 10:43:36.260526 master-1 kubenswrapper[4771]: I1011 10:43:36.260469 4771 scope.go:117] "RemoveContainer" containerID="84bbf7ab3fb66f6d01d7500d037317a4cb49a3eae4199b8937858e7e953c7fd3"
Oct 11 10:43:36.268623 master-1 kubenswrapper[4771]: I1011 10:43:36.268559 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014"
Oct 11 10:43:36.289664 master-1 kubenswrapper[4771]: I1011 10:43:36.289602 4771 scope.go:117] "RemoveContainer" containerID="ecbb0613c992785c9403e057fc0c874ad563e770ca35f25a2b4b2f7341f1c10c"
Oct 11 10:43:36.310244 master-1 kubenswrapper[4771]: I1011 10:43:36.310183 4771 scope.go:117] "RemoveContainer" containerID="1b08bbe8a016cc9703a454b83b5ccaac8367e55a0f3e2612f07c89255c5b066b"
Oct 11 10:43:36.334087 master-1 kubenswrapper[4771]: I1011 10:43:36.334032 4771 scope.go:117] "RemoveContainer" containerID="49bf7adabb62db980d637017833ab23f35546844d31309e50b509a3be2303a67"
Oct 11 10:43:36.349420 master-1 kubenswrapper[4771]: I1011 10:43:36.349340 4771 scope.go:117] "RemoveContainer" containerID="0d2abececcc3750380edf401f993d45ec701aaab0b1cc115175ab53e903df0d6"
Oct 11 10:43:36.372300 master-1 kubenswrapper[4771]: I1011 10:43:36.372251 4771 scope.go:117] "RemoveContainer" containerID="f36eed4b60a75dfc18926f5f7a62c7fe09c6ef035bfef9182c1502b7c4eeb07b"
Oct 11 10:43:36.402335 master-1 kubenswrapper[4771]: I1011 10:43:36.402293 4771 scope.go:117] "RemoveContainer" containerID="5df2d69fcce5aa4d0f872e664dab924a82b358ddfdc487a9796493b554db07ec"
Oct 11 10:43:36.447900 master-1 kubenswrapper[4771]: I1011 10:43:36.447831 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" path="/var/lib/kubelet/pods/2b1859aa05c2c75eb43d086c9ccd9c86/volumes"
Oct 11 10:43:38.190374 master-0 kubenswrapper[4790]: I1011 10:43:38.190211 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-0"]
Oct 11 10:43:39.281857 master-0 kubenswrapper[4790]: I1011 10:43:39.281772 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Oct 11 10:43:39.629485 master-1 kubenswrapper[4771]: I1011 10:43:39.629222 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 11 10:43:39.629485 master-1 kubenswrapper[4771]: I1011 10:43:39.629343 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 11 10:43:40.059341 master-0 kubenswrapper[4790]: I1011 10:43:40.059274 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body=
Oct 11 10:43:40.059806 master-0 kubenswrapper[4790]: I1011 10:43:40.059750 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused"
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: I1011 10:43:40.889820 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: I1011 10:43:40.889901 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:43:43.376274 master-2 kubenswrapper[4776]: I1011 10:43:43.376212 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"
Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442287 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"]
Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: E1011 10:43:43.442653 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver"
Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442688 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver"
Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: E1011 10:43:43.442716 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="fix-audit-permissions"
Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442724 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="fix-audit-permissions"
Oct 11 10:43:43.443278 master-2 kubenswrapper[4776]: I1011 10:43:43.442889 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.447372 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448087 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448556 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448583 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448621 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448645 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448737 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448801 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448905 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") "
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449066 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449088 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449107 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449146 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449248 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449282 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449357 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.452445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"]
Oct 11 10:43:43.456364 master-2 kubenswrapper[4776]: I1011 10:43:43.456166 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.456586 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.456951 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.458886 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:43:43.460802 master-2 kubenswrapper[4776]: I1011 10:43:43.460747 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:43:43.462048 master-2 kubenswrapper[4776]: I1011 10:43:43.461840 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:43:43.467713 master-2 kubenswrapper[4776]: I1011 10:43:43.467116 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj" (OuterVolumeSpecName: "kube-api-access-4cjsj") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "kube-api-access-4cjsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:43:43.551367 master-2 kubenswrapper[4776]: I1011 10:43:43.551277 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.552008 master-2 kubenswrapper[4776]: I1011 10:43:43.551967 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.552155 master-2 kubenswrapper[4776]: I1011 10:43:43.552137 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.552320 master-2 kubenswrapper[4776]: I1011 10:43:43.552303 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.552475 master-2 kubenswrapper[4776]: I1011 10:43:43.552458 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.552971 master-2 kubenswrapper[4776]: I1011 10:43:43.552354 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.553052 master-2 kubenswrapper[4776]: I1011 10:43:43.552977 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.553110 master-2 kubenswrapper[4776]: I1011 10:43:43.553090 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.553156 master-2 kubenswrapper[4776]: I1011 10:43:43.553116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.553231 master-2 kubenswrapper[4776]: I1011 10:43:43.553207 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553231 master-2 kubenswrapper[4776]: I1011 10:43:43.553226 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553239 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553251 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553262 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553271 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553280 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553290 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:43:43.553563 master-2 kubenswrapper[4776]: I1011 10:43:43.553337 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.553931 master-2 kubenswrapper[4776]: I1011 10:43:43.553898 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.554894 master-2 kubenswrapper[4776]: I1011 10:43:43.554872 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.555928 master-2 kubenswrapper[4776]: I1011 10:43:43.555869 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.557326 master-2 kubenswrapper[4776]: I1011 10:43:43.557269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.557413 master-2 kubenswrapper[4776]: I1011 10:43:43.557384 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.570966 master-2 kubenswrapper[4776]: I1011 10:43:43.570903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"
Oct 11 10:43:43.626740 master-2 kubenswrapper[4776]: I1011 10:43:43.626574 4776 generic.go:334] "Generic (PLEG): container finished"
podID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" exitCode=0 Oct 11 10:43:43.626740 master-2 kubenswrapper[4776]: I1011 10:43:43.626662 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626644 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"4696d703bfc528a3bf9bd99fc217e6dc2e1faa3cb905d36cd446e1df3ecf761e"} Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626954 4776 scope.go:117] "RemoveContainer" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 10:43:43.646595 master-2 kubenswrapper[4776]: I1011 10:43:43.646559 4776 scope.go:117] "RemoveContainer" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.662691 master-2 kubenswrapper[4776]: I1011 10:43:43.662617 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:43:43.667816 master-2 kubenswrapper[4776]: I1011 10:43:43.667771 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:43:43.687059 master-2 kubenswrapper[4776]: I1011 10:43:43.687011 4776 scope.go:117] "RemoveContainer" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 
10:43:43.687379 master-2 kubenswrapper[4776]: E1011 10:43:43.687342 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": container with ID starting with 0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee not found: ID does not exist" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 10:43:43.687443 master-2 kubenswrapper[4776]: I1011 10:43:43.687384 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} err="failed to get container status \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": rpc error: code = NotFound desc = could not find container \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": container with ID starting with 0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee not found: ID does not exist" Oct 11 10:43:43.687443 master-2 kubenswrapper[4776]: I1011 10:43:43.687411 4776 scope.go:117] "RemoveContainer" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.687844 master-2 kubenswrapper[4776]: E1011 10:43:43.687791 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": container with ID starting with e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960 not found: ID does not exist" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.687924 master-2 kubenswrapper[4776]: I1011 10:43:43.687843 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960"} err="failed 
to get container status \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": rpc error: code = NotFound desc = could not find container \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": container with ID starting with e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960 not found: ID does not exist" Oct 11 10:43:43.810265 master-2 kubenswrapper[4776]: I1011 10:43:43.810205 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:44.068438 master-2 kubenswrapper[4776]: I1011 10:43:44.068368 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" path="/var/lib/kubelet/pods/407e7df9-fbe8-44b1-8dde-bafa356e904c/volumes" Oct 11 10:43:44.153021 master-0 kubenswrapper[4790]: I1011 10:43:44.152960 4790 kubelet.go:1505] "Image garbage collection succeeded" Oct 11 10:43:44.248580 master-2 kubenswrapper[4776]: I1011 10:43:44.248310 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"] Oct 11 10:43:44.628256 master-1 kubenswrapper[4771]: I1011 10:43:44.628178 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:44.629443 master-1 kubenswrapper[4771]: I1011 10:43:44.628632 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:44.636763 master-2 kubenswrapper[4776]: I1011 10:43:44.636694 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="1d346790-931a-4f91-b588-0b6249da0cd0" containerID="cc4f498c69ea9832f019d9f23d32281793ea486570689a8233eff5246e1c7c73" exitCode=0 Oct 11 10:43:44.637484 master-2 kubenswrapper[4776]: I1011 10:43:44.636746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerDied","Data":"cc4f498c69ea9832f019d9f23d32281793ea486570689a8233eff5246e1c7c73"} Oct 11 10:43:44.637484 master-2 kubenswrapper[4776]: I1011 10:43:44.637150 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerStarted","Data":"2f76a0e340bec7b367b4fa74984435e2b6f88b42f1e2ce019ae496424c0079da"} Oct 11 10:43:45.060697 master-0 kubenswrapper[4790]: I1011 10:43:45.060605 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:45.062989 master-0 kubenswrapper[4790]: I1011 10:43:45.060700 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:45.259653 master-1 kubenswrapper[4771]: I1011 10:43:45.259595 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:43:45.260399 master-1 kubenswrapper[4771]: I1011 10:43:45.260296 4771 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager" containerID="cri-o://9e6a4086932c3b4c0590b1992411e46984c974a11450de3378bede5ca3045d02" gracePeriod=30 Oct 11 10:43:45.260634 master-1 kubenswrapper[4771]: I1011 10:43:45.260480 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://79e52bbf7393881dfbba04f7a9f71721266d98f1191a6c7be91f8bc0ce4e1139" gracePeriod=30 Oct 11 10:43:45.260730 master-1 kubenswrapper[4771]: I1011 10:43:45.260580 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="cluster-policy-controller" containerID="cri-o://913e0c188082961ad93b5f6a07d9eda57e62160ccbff129947e77948c758035a" gracePeriod=30 Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: I1011 10:43:45.261391 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: E1011 10:43:45.261816 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="cluster-policy-controller" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: I1011 10:43:45.261842 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="cluster-policy-controller" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: E1011 10:43:45.261860 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:43:45.262063 master-1 
kubenswrapper[4771]: I1011 10:43:45.261873 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: E1011 10:43:45.261893 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: I1011 10:43:45.261907 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: E1011 10:43:45.261923 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:43:45.262063 master-1 kubenswrapper[4771]: I1011 10:43:45.261936 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:43:45.263891 master-1 kubenswrapper[4771]: I1011 10:43:45.262098 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="cluster-policy-controller" Oct 11 10:43:45.263891 master-1 kubenswrapper[4771]: I1011 10:43:45.262121 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:43:45.263891 master-1 kubenswrapper[4771]: I1011 10:43:45.262143 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:43:45.263891 master-1 kubenswrapper[4771]: I1011 10:43:45.262165 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager" Oct 
11 10:43:45.263891 master-1 kubenswrapper[4771]: I1011 10:43:45.260547 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://068b46162b2804f4e661290cc4e58111faa3ee64a5ff733b8a30de9f4b7d070e" gracePeriod=30 Oct 11 10:43:45.433555 master-1 kubenswrapper[4771]: I1011 10:43:45.433482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.433787 master-1 kubenswrapper[4771]: I1011 10:43:45.433568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.459324 master-1 kubenswrapper[4771]: I1011 10:43:45.459026 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_0b5de9d609ee1e6c379f71934cb2c3c6/kube-controller-manager-cert-syncer/0.log" Oct 11 10:43:45.460214 master-1 kubenswrapper[4771]: I1011 10:43:45.460175 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.465933 master-1 kubenswrapper[4771]: I1011 10:43:45.465898 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="0b5de9d609ee1e6c379f71934cb2c3c6" podUID="6e4abd751079f7c12d9e1207e209976a" Oct 11 10:43:45.535665 master-1 kubenswrapper[4771]: I1011 10:43:45.535600 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir\") pod \"0b5de9d609ee1e6c379f71934cb2c3c6\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " Oct 11 10:43:45.535875 master-1 kubenswrapper[4771]: I1011 10:43:45.535725 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "0b5de9d609ee1e6c379f71934cb2c3c6" (UID: "0b5de9d609ee1e6c379f71934cb2c3c6"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:45.536093 master-1 kubenswrapper[4771]: I1011 10:43:45.536047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.536407 master-1 kubenswrapper[4771]: I1011 10:43:45.536313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.536854 master-1 kubenswrapper[4771]: I1011 10:43:45.536805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.537068 master-1 kubenswrapper[4771]: I1011 10:43:45.537031 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:45.537228 master-1 kubenswrapper[4771]: I1011 10:43:45.537173 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6e4abd751079f7c12d9e1207e209976a-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"6e4abd751079f7c12d9e1207e209976a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:45.637404 
master-1 kubenswrapper[4771]: I1011 10:43:45.637303 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir\") pod \"0b5de9d609ee1e6c379f71934cb2c3c6\" (UID: \"0b5de9d609ee1e6c379f71934cb2c3c6\") " Oct 11 10:43:45.638232 master-1 kubenswrapper[4771]: I1011 10:43:45.637509 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "0b5de9d609ee1e6c379f71934cb2c3c6" (UID: "0b5de9d609ee1e6c379f71934cb2c3c6"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:45.638564 master-1 kubenswrapper[4771]: I1011 10:43:45.638543 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b5de9d609ee1e6c379f71934cb2c3c6-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:45.645537 master-2 kubenswrapper[4776]: I1011 10:43:45.645501 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerStarted","Data":"bf065c533130b8217afb792e067a727f075aaa82c3345db41a96954b3ceff80f"} Oct 11 10:43:45.670990 master-2 kubenswrapper[4776]: I1011 10:43:45.670882 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" podStartSLOduration=55.670859856 podStartE2EDuration="55.670859856s" podCreationTimestamp="2025-10-11 10:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:45.667267261 +0000 UTC m=+1060.451693980" watchObservedRunningTime="2025-10-11 10:43:45.670859856 +0000 UTC m=+1060.455286575" Oct 11 
10:43:46.304725 master-1 kubenswrapper[4771]: I1011 10:43:46.304655 4771 generic.go:334] "Generic (PLEG): container finished" podID="78689bdc-0258-45eb-8e5b-253911c61c79" containerID="2d907b9a8cd0470d88178cfea01b0abf30291128bc9c158e361b094caee83ec4" exitCode=0 Oct 11 10:43:46.304969 master-1 kubenswrapper[4771]: I1011 10:43:46.304805 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"78689bdc-0258-45eb-8e5b-253911c61c79","Type":"ContainerDied","Data":"2d907b9a8cd0470d88178cfea01b0abf30291128bc9c158e361b094caee83ec4"} Oct 11 10:43:46.308437 master-1 kubenswrapper[4771]: I1011 10:43:46.308413 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_0b5de9d609ee1e6c379f71934cb2c3c6/kube-controller-manager-cert-syncer/0.log" Oct 11 10:43:46.309814 master-1 kubenswrapper[4771]: I1011 10:43:46.309781 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b5de9d609ee1e6c379f71934cb2c3c6" containerID="068b46162b2804f4e661290cc4e58111faa3ee64a5ff733b8a30de9f4b7d070e" exitCode=0 Oct 11 10:43:46.309814 master-1 kubenswrapper[4771]: I1011 10:43:46.309811 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b5de9d609ee1e6c379f71934cb2c3c6" containerID="79e52bbf7393881dfbba04f7a9f71721266d98f1191a6c7be91f8bc0ce4e1139" exitCode=2 Oct 11 10:43:46.309932 master-1 kubenswrapper[4771]: I1011 10:43:46.309828 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b5de9d609ee1e6c379f71934cb2c3c6" containerID="913e0c188082961ad93b5f6a07d9eda57e62160ccbff129947e77948c758035a" exitCode=0 Oct 11 10:43:46.309932 master-1 kubenswrapper[4771]: I1011 10:43:46.309842 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b5de9d609ee1e6c379f71934cb2c3c6" containerID="9e6a4086932c3b4c0590b1992411e46984c974a11450de3378bede5ca3045d02" exitCode=0 Oct 11 10:43:46.309932 master-1 kubenswrapper[4771]: I1011 
10:43:46.309889 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4247c914a32e821feeb321db49e7b5b061a40ecb112a752686b9ea07098f462f" Oct 11 10:43:46.309932 master-1 kubenswrapper[4771]: I1011 10:43:46.309912 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:46.331115 master-1 kubenswrapper[4771]: I1011 10:43:46.331050 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="0b5de9d609ee1e6c379f71934cb2c3c6" podUID="6e4abd751079f7c12d9e1207e209976a" Oct 11 10:43:46.339983 master-1 kubenswrapper[4771]: I1011 10:43:46.339925 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="0b5de9d609ee1e6c379f71934cb2c3c6" podUID="6e4abd751079f7c12d9e1207e209976a" Oct 11 10:43:46.436933 master-1 kubenswrapper[4771]: I1011 10:43:46.436860 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:43:46.448100 master-1 kubenswrapper[4771]: I1011 10:43:46.448048 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5de9d609ee1e6c379f71934cb2c3c6" path="/var/lib/kubelet/pods/0b5de9d609ee1e6c379f71934cb2c3c6/volumes" Oct 11 10:43:46.452174 master-1 kubenswrapper[4771]: I1011 10:43:46.452134 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-1" podUID="6c6a59a5-a8d4-44d0-bc98-101ffe6273e9" Oct 11 10:43:46.452232 master-1 kubenswrapper[4771]: I1011 10:43:46.452174 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-1" podUID="6c6a59a5-a8d4-44d0-bc98-101ffe6273e9" Oct 11 10:43:46.473876 master-1 kubenswrapper[4771]: I1011 10:43:46.473824 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-1" Oct 11 10:43:46.474212 master-1 kubenswrapper[4771]: I1011 10:43:46.474188 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:43:46.482370 master-1 kubenswrapper[4771]: I1011 10:43:46.482297 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:43:46.499243 master-1 kubenswrapper[4771]: I1011 10:43:46.498507 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 11 10:43:46.501607 master-1 kubenswrapper[4771]: I1011 10:43:46.501556 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 11 10:43:46.523008 master-1 kubenswrapper[4771]: W1011 10:43:46.522929 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbeb1098f6b7e52b91afcf2e9b50b014.slice/crio-85ffa4a19feba0ffcf516a2a5e009b706621b05b01a7558d41fc981463ac60e2 WatchSource:0}: Error finding container 85ffa4a19feba0ffcf516a2a5e009b706621b05b01a7558d41fc981463ac60e2: Status 404 returned error can't find the container with id 85ffa4a19feba0ffcf516a2a5e009b706621b05b01a7558d41fc981463ac60e2 Oct 11 10:43:47.294167 master-1 kubenswrapper[4771]: I1011 10:43:47.293918 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:43:47.294167 master-1 kubenswrapper[4771]: I1011 10:43:47.294064 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:43:47.321457 master-1 kubenswrapper[4771]: I1011 10:43:47.321335 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="211daec19e26fca55ad8690f95b0fed282fad8cd036efbb54c03ad1969a7cfb2" exitCode=0 Oct 11 10:43:47.321599 master-1 kubenswrapper[4771]: I1011 10:43:47.321499 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" 
event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"211daec19e26fca55ad8690f95b0fed282fad8cd036efbb54c03ad1969a7cfb2"} Oct 11 10:43:47.321599 master-1 kubenswrapper[4771]: I1011 10:43:47.321568 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"85ffa4a19feba0ffcf516a2a5e009b706621b05b01a7558d41fc981463ac60e2"} Oct 11 10:43:47.752968 master-1 kubenswrapper[4771]: I1011 10:43:47.752876 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:47.874886 master-1 kubenswrapper[4771]: I1011 10:43:47.874670 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access\") pod \"78689bdc-0258-45eb-8e5b-253911c61c79\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " Oct 11 10:43:47.874886 master-1 kubenswrapper[4771]: I1011 10:43:47.874845 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir\") pod \"78689bdc-0258-45eb-8e5b-253911c61c79\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " Oct 11 10:43:47.875284 master-1 kubenswrapper[4771]: I1011 10:43:47.874985 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "78689bdc-0258-45eb-8e5b-253911c61c79" (UID: "78689bdc-0258-45eb-8e5b-253911c61c79"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:47.875284 master-1 kubenswrapper[4771]: I1011 10:43:47.875111 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock\") pod \"78689bdc-0258-45eb-8e5b-253911c61c79\" (UID: \"78689bdc-0258-45eb-8e5b-253911c61c79\") " Oct 11 10:43:47.875284 master-1 kubenswrapper[4771]: I1011 10:43:47.875203 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock" (OuterVolumeSpecName: "var-lock") pod "78689bdc-0258-45eb-8e5b-253911c61c79" (UID: "78689bdc-0258-45eb-8e5b-253911c61c79"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:47.875657 master-1 kubenswrapper[4771]: I1011 10:43:47.875566 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:47.875657 master-1 kubenswrapper[4771]: I1011 10:43:47.875623 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78689bdc-0258-45eb-8e5b-253911c61c79-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:47.880725 master-1 kubenswrapper[4771]: I1011 10:43:47.880619 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78689bdc-0258-45eb-8e5b-253911c61c79" (UID: "78689bdc-0258-45eb-8e5b-253911c61c79"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:43:47.976924 master-1 kubenswrapper[4771]: I1011 10:43:47.976832 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78689bdc-0258-45eb-8e5b-253911c61c79-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:43:48.341287 master-1 kubenswrapper[4771]: I1011 10:43:48.341214 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="31facdd8fb6e6f6274c00fada32ab1255ea6776f85ffbc3f8065c95c2d2382fb" exitCode=0 Oct 11 10:43:48.342111 master-1 kubenswrapper[4771]: I1011 10:43:48.341283 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"31facdd8fb6e6f6274c00fada32ab1255ea6776f85ffbc3f8065c95c2d2382fb"} Oct 11 10:43:48.345338 master-1 kubenswrapper[4771]: I1011 10:43:48.345251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"78689bdc-0258-45eb-8e5b-253911c61c79","Type":"ContainerDied","Data":"56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090"} Oct 11 10:43:48.345476 master-1 kubenswrapper[4771]: I1011 10:43:48.345288 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 11 10:43:48.345542 master-1 kubenswrapper[4771]: I1011 10:43:48.345389 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56046807ca0aa308b5de3a340c0e367a86144d71f324a1b35aa3b72ae51dc090" Oct 11 10:43:48.810875 master-2 kubenswrapper[4776]: I1011 10:43:48.810818 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:48.810875 master-2 kubenswrapper[4776]: I1011 10:43:48.810876 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:48.819549 master-2 kubenswrapper[4776]: I1011 10:43:48.819440 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:49.357612 master-1 kubenswrapper[4771]: I1011 10:43:49.357551 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="159ca90da6e99bed0d178155a8c50681923c5b6021a8639d479924886947bb47" exitCode=0 Oct 11 10:43:49.358544 master-1 kubenswrapper[4771]: I1011 10:43:49.357648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"159ca90da6e99bed0d178155a8c50681923c5b6021a8639d479924886947bb47"} Oct 11 10:43:49.629225 master-1 kubenswrapper[4771]: I1011 10:43:49.629111 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 11 10:43:49.629423 master-1 kubenswrapper[4771]: I1011 10:43:49.629235 4771 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 11 10:43:49.679056 master-2 kubenswrapper[4776]: I1011 10:43:49.678976 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:50.061206 master-0 kubenswrapper[4790]: I1011 10:43:50.060637 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:50.061206 master-0 kubenswrapper[4790]: I1011 10:43:50.060782 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:50.375343 master-1 kubenswrapper[4771]: I1011 10:43:50.375268 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"9a89c09ca6e91647da081f97087658c1b11fe705a16ff46043003c3fbbcd0e8e"} Oct 11 10:43:50.375343 master-1 kubenswrapper[4771]: I1011 10:43:50.375340 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"6d52caa6ca3071165a980d0d3715ea1f4a335edb9977e20dd52616ba6ac3305d"} Oct 11 10:43:51.391239 master-1 kubenswrapper[4771]: I1011 10:43:51.391162 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"2230bc9413b16f3b0764a9834ddc735459529336c9441bb2965aa9ffe5d841d9"} Oct 11 10:43:51.391239 master-1 kubenswrapper[4771]: I1011 10:43:51.391247 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"269a69a20048205c2f87c7a43fc6a19fb199854bc247ee4dfd460bbf2b358b62"} Oct 11 10:43:51.392128 master-1 kubenswrapper[4771]: I1011 10:43:51.391268 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"c78b119fc172933815c1d2c54198af0cd4a86318e7a1301b7b416c3fad42949a"} Oct 11 10:43:51.448024 master-1 kubenswrapper[4771]: I1011 10:43:51.447946 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-1" podStartSLOduration=5.447926073 podStartE2EDuration="5.447926073s" podCreationTimestamp="2025-10-11 10:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:51.444532446 +0000 UTC m=+1063.418758917" watchObservedRunningTime="2025-10-11 10:43:51.447926073 +0000 UTC m=+1063.422152514" Oct 11 10:43:51.499767 master-1 kubenswrapper[4771]: I1011 10:43:51.499679 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 11 10:43:52.293978 master-1 kubenswrapper[4771]: I1011 10:43:52.293883 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:43:52.294205 master-1 
kubenswrapper[4771]: I1011 10:43:52.293983 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:43:55.060073 master-0 kubenswrapper[4790]: I1011 10:43:55.059915 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 10:43:55.061214 master-0 kubenswrapper[4790]: I1011 10:43:55.060099 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:43:56.499394 master-1 kubenswrapper[4771]: I1011 10:43:56.499278 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 11 10:43:57.294618 master-1 kubenswrapper[4771]: I1011 10:43:57.294524 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:43:57.294919 master-1 kubenswrapper[4771]: I1011 10:43:57.294711 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" 
podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:43:57.294919 master-1 kubenswrapper[4771]: I1011 10:43:57.294863 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:43:57.296038 master-1 kubenswrapper[4771]: I1011 10:43:57.295968 4771 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 11 10:43:57.296131 master-1 kubenswrapper[4771]: I1011 10:43:57.296061 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="a706deec-9223-4663-9db5-71147d242c34" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 11 10:43:59.437167 master-1 kubenswrapper[4771]: I1011 10:43:59.437053 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:59.453973 master-1 kubenswrapper[4771]: I1011 10:43:59.453903 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="6daea3ba-d094-4b54-9863-c829a9c42066" Oct 11 10:43:59.453973 master-1 kubenswrapper[4771]: I1011 10:43:59.453960 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="6daea3ba-d094-4b54-9863-c829a9c42066" Oct 11 10:43:59.477804 master-1 kubenswrapper[4771]: I1011 10:43:59.477414 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:59.479725 master-1 kubenswrapper[4771]: I1011 10:43:59.477820 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:43:59.486225 master-1 kubenswrapper[4771]: I1011 10:43:59.486138 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:43:59.497456 master-1 kubenswrapper[4771]: I1011 10:43:59.497295 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 11 10:43:59.503923 master-1 kubenswrapper[4771]: I1011 10:43:59.503815 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 11 10:43:59.530568 master-1 kubenswrapper[4771]: W1011 10:43:59.530488 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e4abd751079f7c12d9e1207e209976a.slice/crio-3a658d0c2cfd0d7018a054674ad2fef1705a690b9977d860ea43a5927f7570a2 WatchSource:0}: Error finding container 3a658d0c2cfd0d7018a054674ad2fef1705a690b9977d860ea43a5927f7570a2: Status 404 returned error can't find the container with id 3a658d0c2cfd0d7018a054674ad2fef1705a690b9977d860ea43a5927f7570a2 Oct 11 10:43:59.629407 master-1 kubenswrapper[4771]: I1011 10:43:59.629313 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:43:59.629759 master-1 kubenswrapper[4771]: I1011 10:43:59.629718 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:44:00.059463 master-0 kubenswrapper[4790]: I1011 10:44:00.059368 4790 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-wjtq5 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" start-of-body= Oct 11 
10:44:00.060517 master-0 kubenswrapper[4790]: I1011 10:44:00.059474 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.130.0.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.13:8443: connect: connection refused" Oct 11 10:44:00.459888 master-1 kubenswrapper[4771]: I1011 10:44:00.459828 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"6e4abd751079f7c12d9e1207e209976a","Type":"ContainerStarted","Data":"2b8cfa34057f1d7441fa8899d6ffb269449e64e8f7fbad7e041afde233f9f46e"} Oct 11 10:44:00.460384 master-1 kubenswrapper[4771]: I1011 10:44:00.459902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"6e4abd751079f7c12d9e1207e209976a","Type":"ContainerStarted","Data":"47f67f1edcd45fc05bb0497d766cd846facb4877f08282bc158671c2139f4123"} Oct 11 10:44:00.460384 master-1 kubenswrapper[4771]: I1011 10:44:00.459926 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"6e4abd751079f7c12d9e1207e209976a","Type":"ContainerStarted","Data":"698e3875aa3788d3ef1c91df166ebdd80850c7ea07b48f8abbf75cab790053eb"} Oct 11 10:44:00.460384 master-1 kubenswrapper[4771]: I1011 10:44:00.459946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"6e4abd751079f7c12d9e1207e209976a","Type":"ContainerStarted","Data":"3a658d0c2cfd0d7018a054674ad2fef1705a690b9977d860ea43a5927f7570a2"} Oct 11 10:44:01.471522 master-1 kubenswrapper[4771]: I1011 10:44:01.471449 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"6e4abd751079f7c12d9e1207e209976a","Type":"ContainerStarted","Data":"58932b681106833dbc88897b7539419e139110df78331149ef2988d73886577e"} Oct 11 10:44:01.498876 master-1 kubenswrapper[4771]: I1011 10:44:01.498781 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=2.498758567 podStartE2EDuration="2.498758567s" podCreationTimestamp="2025-10-11 10:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:01.497206483 +0000 UTC m=+1073.471432964" watchObservedRunningTime="2025-10-11 10:44:01.498758567 +0000 UTC m=+1073.472985048" Oct 11 10:44:02.299850 master-1 kubenswrapper[4771]: I1011 10:44:02.299765 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 11 10:44:03.765273 master-0 kubenswrapper[4790]: I1011 10:44:03.765197 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:44:03.765826 master-0 kubenswrapper[4790]: I1011 10:44:03.765515 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" containerID="cri-o://35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c" gracePeriod=120 Oct 11 10:44:04.044321 master-0 kubenswrapper[4790]: I1011 10:44:04.044274 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" Oct 11 10:44:04.113615 master-0 kubenswrapper[4790]: I1011 10:44:04.113488 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.113615 master-0 kubenswrapper[4790]: I1011 10:44:04.113583 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.113615 master-0 kubenswrapper[4790]: I1011 10:44:04.113622 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.113680 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.113754 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88h25\" (UniqueName: \"kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.113921 4790 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.113999 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.114031 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.114079 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.114100 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit\") pod \"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114128 master-0 kubenswrapper[4790]: I1011 10:44:04.114134 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert\") pod 
\"099ca022-6e9c-4604-b517-d90713dd6a44\" (UID: \"099ca022-6e9c-4604-b517-d90713dd6a44\") " Oct 11 10:44:04.114564 master-0 kubenswrapper[4790]: I1011 10:44:04.114283 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:04.114610 master-0 kubenswrapper[4790]: I1011 10:44:04.114499 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:04.114658 master-0 kubenswrapper[4790]: I1011 10:44:04.114567 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:04.114701 master-0 kubenswrapper[4790]: I1011 10:44:04.114662 4790 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.114701 master-0 kubenswrapper[4790]: I1011 10:44:04.114682 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:04.115031 master-0 kubenswrapper[4790]: I1011 10:44:04.114834 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config" (OuterVolumeSpecName: "config") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:04.115143 master-0 kubenswrapper[4790]: I1011 10:44:04.115118 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit" (OuterVolumeSpecName: "audit") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:04.115212 master-0 kubenswrapper[4790]: I1011 10:44:04.115140 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:04.118097 master-0 kubenswrapper[4790]: I1011 10:44:04.118008 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25" (OuterVolumeSpecName: "kube-api-access-88h25") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "kube-api-access-88h25". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:44:04.118929 master-0 kubenswrapper[4790]: I1011 10:44:04.118330 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:04.118929 master-0 kubenswrapper[4790]: I1011 10:44:04.118631 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:04.119915 master-0 kubenswrapper[4790]: I1011 10:44:04.119866 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "099ca022-6e9c-4604-b517-d90713dd6a44" (UID: "099ca022-6e9c-4604-b517-d90713dd6a44"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216136 4790 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-client\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216187 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88h25\" (UniqueName: \"kubernetes.io/projected/099ca022-6e9c-4604-b517-d90713dd6a44-kube-api-access-88h25\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216204 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216217 4790 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216229 4790 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/099ca022-6e9c-4604-b517-d90713dd6a44-audit-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216241 4790 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-audit\") on node \"master-0\" DevicePath \"\""
Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216252 4790 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-serving-cert\") on node \"master-0\" DevicePath \"\""
Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216264 4790 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-image-import-ca\") on node \"master-0\" DevicePath \"\""
Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216275 4790 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/099ca022-6e9c-4604-b517-d90713dd6a44-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Oct 11 10:44:04.216247 master-0 kubenswrapper[4790]: I1011 10:44:04.216290 4790 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/099ca022-6e9c-4604-b517-d90713dd6a44-encryption-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:44:04.415391 master-0 kubenswrapper[4790]: I1011 10:44:04.415284 4790 generic.go:334] "Generic (PLEG): container finished" podID="099ca022-6e9c-4604-b517-d90713dd6a44" containerID="29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a" exitCode=0
Oct 11 10:44:04.415391 master-0 kubenswrapper[4790]: I1011 10:44:04.415365 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerDied","Data":"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"}
Oct 11 10:44:04.415997 master-0 kubenswrapper[4790]: I1011 10:44:04.415403 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5"
Oct 11 10:44:04.415997 master-0 kubenswrapper[4790]: I1011 10:44:04.415455 4790 scope.go:117] "RemoveContainer" containerID="dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"
Oct 11 10:44:04.415997 master-0 kubenswrapper[4790]: I1011 10:44:04.415433 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-wjtq5" event={"ID":"099ca022-6e9c-4604-b517-d90713dd6a44","Type":"ContainerDied","Data":"1623fe4c5875e03dc8879c8e18034ad8d12a92c81f76f68ae1603f1e4ba99e21"}
Oct 11 10:44:04.434469 master-0 kubenswrapper[4790]: I1011 10:44:04.434168 4790 scope.go:117] "RemoveContainer" containerID="29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"
Oct 11 10:44:04.445998 master-0 kubenswrapper[4790]: I1011 10:44:04.445914 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"]
Oct 11 10:44:04.454029 master-0 kubenswrapper[4790]: I1011 10:44:04.453955 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-wjtq5"]
Oct 11 10:44:04.456188 master-0 kubenswrapper[4790]: I1011 10:44:04.456142 4790 scope.go:117] "RemoveContainer" containerID="035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c"
Oct 11 10:44:04.480156 master-0 kubenswrapper[4790]: I1011 10:44:04.480098 4790 scope.go:117] "RemoveContainer" containerID="dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"
Oct 11 10:44:04.480780 master-0 kubenswrapper[4790]: E1011 10:44:04.480729 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef\": container with ID starting with dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef not found: ID does not exist" containerID="dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"
Oct 11 10:44:04.480891 master-0 kubenswrapper[4790]: I1011 10:44:04.480791 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef"} err="failed to get container status \"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef\": rpc error: code = NotFound desc = could not find container \"dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef\": container with ID starting with dd112f308e676b2c4d179d5aff947e42c2b7c6ef683c69f8c917e46b03a6eeef not found: ID does not exist"
Oct 11 10:44:04.480891 master-0 kubenswrapper[4790]: I1011 10:44:04.480889 4790 scope.go:117] "RemoveContainer" containerID="29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"
Oct 11 10:44:04.481436 master-0 kubenswrapper[4790]: E1011 10:44:04.481395 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a\": container with ID starting with 29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a not found: ID does not exist" containerID="29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"
Oct 11 10:44:04.481483 master-0 kubenswrapper[4790]: I1011 10:44:04.481444 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a"} err="failed to get container status \"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a\": rpc error: code = NotFound desc = could not find container \"29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a\": container with ID starting with 29c2667267dacd2b8bea9208822dffdfcb241820cba338aed8d3c7b0b7e5651a not found: ID does not exist"
Oct 11 10:44:04.481522 master-0 kubenswrapper[4790]: I1011 10:44:04.481484 4790 scope.go:117] "RemoveContainer" containerID="035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c"
Oct 11 10:44:04.481900 master-0 kubenswrapper[4790]: E1011 10:44:04.481840 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c\": container with ID starting with 035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c not found: ID does not exist" containerID="035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c"
Oct 11 10:44:04.481944 master-0 kubenswrapper[4790]: I1011 10:44:04.481891 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c"} err="failed to get container status \"035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c\": rpc error: code = NotFound desc = could not find container \"035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c\": container with ID starting with 035fa9c3481f694000fd14dae9e171055c8a6110f7029a9f53a9e12f4eeb425c not found: ID does not exist"
Oct 11 10:44:04.631089 master-1 kubenswrapper[4771]: I1011 10:44:04.631009 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:04.631089 master-1 kubenswrapper[4771]: I1011 10:44:04.631076 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: I1011 10:44:05.048218 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:44:05.048279 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:44:05.049256 master-0 kubenswrapper[4790]: I1011 10:44:05.048298 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:06.298699 master-0 kubenswrapper[4790]: I1011 10:44:06.298588 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" path="/var/lib/kubelet/pods/099ca022-6e9c-4604-b517-d90713dd6a44/volumes"
Oct 11 10:44:07.330936 master-0 kubenswrapper[4790]: I1011 10:44:07.330791 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"]
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: E1011 10:44:07.331174 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: I1011 10:44:07.331193 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: E1011 10:44:07.331206 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="fix-audit-permissions"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: I1011 10:44:07.331213 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="fix-audit-permissions"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: E1011 10:44:07.331225 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: I1011 10:44:07.331232 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: I1011 10:44:07.331294 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver"
Oct 11 10:44:07.331472 master-0 kubenswrapper[4790]: I1011 10:44:07.331303 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ca022-6e9c-4604-b517-d90713dd6a44" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:44:07.331825 master-0 kubenswrapper[4790]: I1011 10:44:07.331795 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.336211 master-0 kubenswrapper[4790]: I1011 10:44:07.336086 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 11 10:44:07.336469 master-0 kubenswrapper[4790]: I1011 10:44:07.336293 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp"
Oct 11 10:44:07.336469 master-0 kubenswrapper[4790]: I1011 10:44:07.336365 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Oct 11 10:44:07.336740 master-0 kubenswrapper[4790]: I1011 10:44:07.336675 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 11 10:44:07.336840 master-0 kubenswrapper[4790]: I1011 10:44:07.336815 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 11 10:44:07.337107 master-0 kubenswrapper[4790]: I1011 10:44:07.337062 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 11 10:44:07.337336 master-0 kubenswrapper[4790]: I1011 10:44:07.337292 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 11 10:44:07.337613 master-0 kubenswrapper[4790]: I1011 10:44:07.337568 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Oct 11 10:44:07.346663 master-0 kubenswrapper[4790]: I1011 10:44:07.346605 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 11 10:44:07.355276 master-2 kubenswrapper[4776]: I1011 10:44:07.355212 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"]
Oct 11 10:44:07.374150 master-0 kubenswrapper[4790]: I1011 10:44:07.374094 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"]
Oct 11 10:44:07.391595 master-0 kubenswrapper[4790]: I1011 10:44:07.391488 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brv7r\" (UniqueName: \"kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391595 master-0 kubenswrapper[4790]: I1011 10:44:07.391589 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391907 master-0 kubenswrapper[4790]: I1011 10:44:07.391617 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391907 master-0 kubenswrapper[4790]: I1011 10:44:07.391632 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391907 master-0 kubenswrapper[4790]: I1011 10:44:07.391651 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391907 master-0 kubenswrapper[4790]: I1011 10:44:07.391684 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.391907 master-0 kubenswrapper[4790]: I1011 10:44:07.391743 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493109 master-0 kubenswrapper[4790]: I1011 10:44:07.493017 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493109 master-0 kubenswrapper[4790]: I1011 10:44:07.493095 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493408 master-0 kubenswrapper[4790]: I1011 10:44:07.493156 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493408 master-0 kubenswrapper[4790]: I1011 10:44:07.493200 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493408 master-0 kubenswrapper[4790]: I1011 10:44:07.493250 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493408 master-0 kubenswrapper[4790]: I1011 10:44:07.493275 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brv7r\" (UniqueName: \"kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.493408 master-0 kubenswrapper[4790]: I1011 10:44:07.493300 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.494485 master-0 kubenswrapper[4790]: I1011 10:44:07.494439 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.494785 master-0 kubenswrapper[4790]: I1011 10:44:07.494738 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.494838 master-0 kubenswrapper[4790]: I1011 10:44:07.494809 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.495138 master-0 kubenswrapper[4790]: I1011 10:44:07.495085 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.500376 master-1 kubenswrapper[4771]: I1011 10:44:07.500290 4771 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:07.500693 master-0 kubenswrapper[4790]: I1011 10:44:07.500613 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.500904 master-0 kubenswrapper[4790]: I1011 10:44:07.500852 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.500993 master-1 kubenswrapper[4771]: I1011 10:44:07.500385 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:07.513160 master-0 kubenswrapper[4790]: I1011 10:44:07.513102 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brv7r\" (UniqueName: \"kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r\") pod \"console-6f9d445f57-w4nwq\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") " pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:07.648417 master-0 kubenswrapper[4790]: I1011 10:44:07.647931 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:44:08.104148 master-0 kubenswrapper[4790]: I1011 10:44:08.104078 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"]
Oct 11 10:44:08.106353 master-0 kubenswrapper[4790]: W1011 10:44:08.106309 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode299247b_558b_4b6c_9d7c_335475344fdc.slice/crio-c55a75264f621b83bca51a6b3abc1339de4396e41f1c001671f54d97f3acff21 WatchSource:0}: Error finding container c55a75264f621b83bca51a6b3abc1339de4396e41f1c001671f54d97f3acff21: Status 404 returned error can't find the container with id c55a75264f621b83bca51a6b3abc1339de4396e41f1c001671f54d97f3acff21
Oct 11 10:44:08.438419 master-0 kubenswrapper[4790]: I1011 10:44:08.438248 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-w4nwq" event={"ID":"e299247b-558b-4b6c-9d7c-335475344fdc","Type":"ContainerStarted","Data":"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"}
Oct 11 10:44:08.438419 master-0 kubenswrapper[4790]: I1011 10:44:08.438317 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-w4nwq" event={"ID":"e299247b-558b-4b6c-9d7c-335475344fdc","Type":"ContainerStarted","Data":"c55a75264f621b83bca51a6b3abc1339de4396e41f1c001671f54d97f3acff21"}
Oct 11 10:44:08.468820 master-0 kubenswrapper[4790]: I1011 10:44:08.468643 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f9d445f57-w4nwq" podStartSLOduration=1.4685880230000001 podStartE2EDuration="1.468588023s" podCreationTimestamp="2025-10-11 10:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:08.461915698 +0000 UTC m=+325.016376010" watchObservedRunningTime="2025-10-11 10:44:08.468588023 +0000 UTC m=+325.023048345"
Oct 11 10:44:09.498403 master-1 kubenswrapper[4771]: I1011 10:44:09.498286 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.498403 master-1 kubenswrapper[4771]: I1011 10:44:09.498405 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.499878 master-1 kubenswrapper[4771]: I1011 10:44:09.498434 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.499878 master-1 kubenswrapper[4771]: I1011 10:44:09.498455 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.504683 master-1 kubenswrapper[4771]: I1011 10:44:09.504624 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.506477 master-1 kubenswrapper[4771]: I1011 10:44:09.506436 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.543338 master-1 kubenswrapper[4771]: I1011 10:44:09.543266 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:09.632029 master-1 kubenswrapper[4771]: I1011 10:44:09.631899 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:09.632029 master-1 kubenswrapper[4771]: I1011 10:44:09.632022 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: I1011 10:44:10.050349 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:44:10.050410 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:44:10.051507 master-0 kubenswrapper[4790]: I1011 10:44:10.051469 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:10.550064 master-1 kubenswrapper[4771]: I1011 10:44:10.549978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 11 10:44:13.620632 master-2 kubenswrapper[4776]: I1011 10:44:13.620311 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"]
Oct 11 10:44:13.621390 master-2 kubenswrapper[4776]: I1011 10:44:13.621114 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.624768 master-2 kubenswrapper[4776]: I1011 10:44:13.623835 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg"
Oct 11 10:44:13.636559 master-2 kubenswrapper[4776]: I1011 10:44:13.636488 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"]
Oct 11 10:44:13.677332 master-2 kubenswrapper[4776]: I1011 10:44:13.677187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.677858 master-2 kubenswrapper[4776]: I1011 10:44:13.677398 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.677858 master-2 kubenswrapper[4776]: I1011 10:44:13.677486 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.691031 master-0 kubenswrapper[4790]: I1011 10:44:13.689805 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-8865994fd-4bs48"]
Oct 11 10:44:13.691996 master-0 kubenswrapper[4790]: I1011 10:44:13.691939 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.695860 master-0 kubenswrapper[4790]: I1011 10:44:13.695830 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 11 10:44:13.696057 master-0 kubenswrapper[4790]: I1011 10:44:13.695861 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 11 10:44:13.696057 master-0 kubenswrapper[4790]: I1011 10:44:13.696035 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 11 10:44:13.696196 master-0 kubenswrapper[4790]: I1011 10:44:13.696074 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 11 10:44:13.696196 master-0 kubenswrapper[4790]: I1011 10:44:13.696112 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 11 10:44:13.696196 master-0 kubenswrapper[4790]: I1011 10:44:13.695885 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 11 10:44:13.696196 master-0 kubenswrapper[4790]: I1011 10:44:13.695888 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9"
Oct 11 10:44:13.696389 master-0 kubenswrapper[4790]: I1011 10:44:13.696145 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 11 10:44:13.696447 master-0 kubenswrapper[4790]: I1011 10:44:13.696407 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 11 10:44:13.696447 master-0 kubenswrapper[4790]: I1011 10:44:13.696437 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 11 10:44:13.706164 master-0 kubenswrapper[4790]: I1011 10:44:13.706119 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-4bs48"]
Oct 11 10:44:13.706925 master-0 kubenswrapper[4790]: I1011 10:44:13.706875 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778845 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778921 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.779244 master-2 kubenswrapper[4776]: I1011 10:44:13.779009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.779244 master-2 kubenswrapper[4776]: I1011 10:44:13.779084 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2"
Oct 11 10:44:13.786022 master-0 kubenswrapper[4790]: I1011 10:44:13.785949 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-trusted-ca-bundle\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.786402 master-0 kubenswrapper[4790]: I1011 10:44:13.786376 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-image-import-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.786529 master-0 kubenswrapper[4790]: I1011 10:44:13.786504 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-audit\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.786645 master-0 kubenswrapper[4790]: I1011 10:44:13.786625 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-config\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.786836 master-0 kubenswrapper[4790]: I1011 10:44:13.786800 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-encryption-config\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787002 master-0 kubenswrapper[4790]: I1011 10:44:13.786979 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-client\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787306 master-0 kubenswrapper[4790]: I1011 10:44:13.787287 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxkg\" (UniqueName: \"kubernetes.io/projected/cdf415dc-3a2a-4f52-90ae-81a963771876-kube-api-access-lzxkg\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787436 master-0 kubenswrapper[4790]: I1011 10:44:13.787420 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-audit-dir\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787584 master-0 kubenswrapper[4790]: I1011 10:44:13.787560 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-node-pullsecrets\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787752 master-0 kubenswrapper[4790]: I1011 10:44:13.787729 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-serving-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.787920 master-0 kubenswrapper[4790]: I1011 10:44:13.787892 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-serving-cert\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48"
Oct 11 10:44:13.798011 master-2 kubenswrapper[4776]: I1011 10:44:13.797948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID:
\"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.889801 master-0 kubenswrapper[4790]: I1011 10:44:13.889663 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-serving-cert\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.889801 master-0 kubenswrapper[4790]: I1011 10:44:13.889814 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-trusted-ca-bundle\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.889849 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-image-import-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.889895 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-audit\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.889931 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-config\") pod \"apiserver-8865994fd-4bs48\" (UID: 
\"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.889959 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-encryption-config\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.889991 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-client\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.890022 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxkg\" (UniqueName: \"kubernetes.io/projected/cdf415dc-3a2a-4f52-90ae-81a963771876-kube-api-access-lzxkg\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.890270 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-audit-dir\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.890308 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-node-pullsecrets\") pod 
\"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.890384 master-0 kubenswrapper[4790]: I1011 10:44:13.890345 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-serving-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.891250 master-0 kubenswrapper[4790]: I1011 10:44:13.891211 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-audit-dir\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.891465 master-0 kubenswrapper[4790]: I1011 10:44:13.891427 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-serving-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.891649 master-0 kubenswrapper[4790]: I1011 10:44:13.891565 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdf415dc-3a2a-4f52-90ae-81a963771876-node-pullsecrets\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.892589 master-0 kubenswrapper[4790]: I1011 10:44:13.892539 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-config\") pod 
\"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.892930 master-0 kubenswrapper[4790]: I1011 10:44:13.892860 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-image-import-ca\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.893219 master-0 kubenswrapper[4790]: I1011 10:44:13.893157 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-audit\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.894152 master-0 kubenswrapper[4790]: I1011 10:44:13.894082 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf415dc-3a2a-4f52-90ae-81a963771876-trusted-ca-bundle\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.896135 master-0 kubenswrapper[4790]: I1011 10:44:13.896088 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-encryption-config\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.897518 master-0 kubenswrapper[4790]: I1011 10:44:13.897494 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-serving-cert\") pod 
\"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.897739 master-0 kubenswrapper[4790]: I1011 10:44:13.897649 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdf415dc-3a2a-4f52-90ae-81a963771876-etcd-client\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.914266 master-0 kubenswrapper[4790]: I1011 10:44:13.914197 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxkg\" (UniqueName: \"kubernetes.io/projected/cdf415dc-3a2a-4f52-90ae-81a963771876-kube-api-access-lzxkg\") pod \"apiserver-8865994fd-4bs48\" (UID: \"cdf415dc-3a2a-4f52-90ae-81a963771876\") " pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:13.953594 master-2 kubenswrapper[4776]: I1011 10:44:13.953530 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:14.017158 master-0 kubenswrapper[4790]: I1011 10:44:14.016941 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:14.345925 master-2 kubenswrapper[4776]: I1011 10:44:14.345874 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"] Oct 11 10:44:14.454616 master-0 kubenswrapper[4790]: I1011 10:44:14.454247 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-4bs48"] Oct 11 10:44:14.461493 master-0 kubenswrapper[4790]: W1011 10:44:14.461439 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf415dc_3a2a_4f52_90ae_81a963771876.slice/crio-2e0145fee607e1061480a46f1974f026117c6f9a12920376f8680f7f88b90732 WatchSource:0}: Error finding container 2e0145fee607e1061480a46f1974f026117c6f9a12920376f8680f7f88b90732: Status 404 returned error can't find the container with id 2e0145fee607e1061480a46f1974f026117c6f9a12920376f8680f7f88b90732 Oct 11 10:44:14.469359 master-0 kubenswrapper[4790]: I1011 10:44:14.469291 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-4bs48" event={"ID":"cdf415dc-3a2a-4f52-90ae-81a963771876","Type":"ContainerStarted","Data":"2e0145fee607e1061480a46f1974f026117c6f9a12920376f8680f7f88b90732"} Oct 11 10:44:14.632421 master-1 kubenswrapper[4771]: I1011 10:44:14.632298 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:44:14.633341 master-1 kubenswrapper[4771]: I1011 10:44:14.632428 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:44:14.857888 master-2 kubenswrapper[4776]: I1011 10:44:14.857824 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerStarted","Data":"dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7"} Oct 11 10:44:14.857888 master-2 kubenswrapper[4776]: I1011 10:44:14.857870 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerStarted","Data":"0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d"} Oct 11 10:44:14.884458 master-2 kubenswrapper[4776]: I1011 10:44:14.883234 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-2" podStartSLOduration=1.88321491 podStartE2EDuration="1.88321491s" podCreationTimestamp="2025-10-11 10:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:14.882408639 +0000 UTC m=+1089.666835348" watchObservedRunningTime="2025-10-11 10:44:14.88321491 +0000 UTC m=+1089.667641619" Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: I1011 10:44:15.049661 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:15.049809 
master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:44:15.049809 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:44:15.052131 master-0 kubenswrapper[4790]: I1011 10:44:15.049827 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:15.052131 master-0 kubenswrapper[4790]: I1011 10:44:15.050119 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:44:15.480309 master-0 kubenswrapper[4790]: I1011 10:44:15.480143 4790 generic.go:334] "Generic (PLEG): container finished" podID="cdf415dc-3a2a-4f52-90ae-81a963771876" containerID="44a2752364aefd0e8ef37a371fc1a02b554d307c7f399e48a010a918f35a11b1" exitCode=0 Oct 11 10:44:15.480309 master-0 kubenswrapper[4790]: I1011 10:44:15.480209 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-4bs48" event={"ID":"cdf415dc-3a2a-4f52-90ae-81a963771876","Type":"ContainerDied","Data":"44a2752364aefd0e8ef37a371fc1a02b554d307c7f399e48a010a918f35a11b1"} Oct 11 
10:44:16.489040 master-0 kubenswrapper[4790]: I1011 10:44:16.488928 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-4bs48" event={"ID":"cdf415dc-3a2a-4f52-90ae-81a963771876","Type":"ContainerStarted","Data":"6f0df9b47b168a17643529973b548904eef2d99ee4ab5088c9894dedd37f5d9b"} Oct 11 10:44:16.489040 master-0 kubenswrapper[4790]: I1011 10:44:16.488989 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-4bs48" event={"ID":"cdf415dc-3a2a-4f52-90ae-81a963771876","Type":"ContainerStarted","Data":"ec910e69b540b523a147a7d7f29286f549aed08e47b9bfba169c2a9f41b1dc75"} Oct 11 10:44:16.525768 master-0 kubenswrapper[4790]: I1011 10:44:16.525564 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-8865994fd-4bs48" podStartSLOduration=123.525525845 podStartE2EDuration="2m3.525525845s" podCreationTimestamp="2025-10-11 10:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:16.520451716 +0000 UTC m=+333.074912028" watchObservedRunningTime="2025-10-11 10:44:16.525525845 +0000 UTC m=+333.079986177" Oct 11 10:44:17.500682 master-1 kubenswrapper[4771]: I1011 10:44:17.500599 4771 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:44:17.501709 master-1 kubenswrapper[4771]: I1011 10:44:17.500703 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:44:17.648370 
master-0 kubenswrapper[4790]: I1011 10:44:17.648284 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f9d445f57-w4nwq" Oct 11 10:44:17.648370 master-0 kubenswrapper[4790]: I1011 10:44:17.648367 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f9d445f57-w4nwq" Oct 11 10:44:17.650260 master-0 kubenswrapper[4790]: I1011 10:44:17.650215 4790 patch_prober.go:28] interesting pod/console-6f9d445f57-w4nwq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.27:8443/health\": dial tcp 10.130.0.27:8443: connect: connection refused" start-of-body= Oct 11 10:44:17.650307 master-0 kubenswrapper[4790]: I1011 10:44:17.650276 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f9d445f57-w4nwq" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console" probeResult="failure" output="Get \"https://10.130.0.27:8443/health\": dial tcp 10.130.0.27:8443: connect: connection refused" Oct 11 10:44:19.017742 master-0 kubenswrapper[4790]: I1011 10:44:19.017600 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:19.017742 master-0 kubenswrapper[4790]: I1011 10:44:19.017689 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:19.024759 master-0 kubenswrapper[4790]: I1011 10:44:19.024678 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:19.514471 master-0 kubenswrapper[4790]: I1011 10:44:19.514408 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-8865994fd-4bs48" Oct 11 10:44:19.627033 master-2 kubenswrapper[4776]: I1011 10:44:19.626974 4776 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:44:19.628254 master-2 kubenswrapper[4776]: I1011 10:44:19.627221 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" containerID="cri-o://f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776" gracePeriod=120 Oct 11 10:44:19.628254 master-2 kubenswrapper[4776]: I1011 10:44:19.627365 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9" gracePeriod=120 Oct 11 10:44:19.638857 master-1 kubenswrapper[4771]: I1011 10:44:19.634524 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:44:19.638857 master-1 kubenswrapper[4771]: I1011 10:44:19.635471 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:44:19.888280 master-2 kubenswrapper[4776]: I1011 10:44:19.888147 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9" exitCode=0 Oct 11 10:44:19.888485 master-2 kubenswrapper[4776]: I1011 10:44:19.888233 4776 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9"} Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: I1011 10:44:20.052686 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:44:20.052830 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:44:20.053980 master-0 kubenswrapper[4790]: I1011 10:44:20.052845 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: I1011 10:44:20.519257 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 
10:44:20.519321 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: I1011 10:44:20.519314 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:24.636924 master-1 kubenswrapper[4771]: I1011 10:44:24.636769 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:24.636924 master-1 kubenswrapper[4771]: I1011 10:44:24.636917 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: I1011 10:44:25.050499 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:44:25.050581 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:44:25.052304 master-0 kubenswrapper[4790]: I1011 10:44:25.050598 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: I1011 10:44:25.518534 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:44:25.520089 master-2 kubenswrapper[4776]: I1011 10:44:25.518590 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:27.500193 master-1 kubenswrapper[4771]: I1011 10:44:27.500099 4771 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:27.500801 master-1 kubenswrapper[4771]: I1011 10:44:27.500189 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:27.648936 master-0 kubenswrapper[4790]: I1011 10:44:27.648824 4790 patch_prober.go:28] interesting pod/console-6f9d445f57-w4nwq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.130.0.27:8443/health\": dial tcp 10.130.0.27:8443: connect: connection refused" start-of-body=
Oct 11 10:44:27.648936 master-0 kubenswrapper[4790]: I1011 10:44:27.648917 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6f9d445f57-w4nwq" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console" probeResult="failure" output="Get \"https://10.130.0.27:8443/health\": dial tcp 10.130.0.27:8443: connect: connection refused"
Oct 11 10:44:29.637677 master-1 kubenswrapper[4771]: I1011 10:44:29.637559 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:29.638731 master-1 kubenswrapper[4771]: I1011 10:44:29.637703 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: I1011 10:44:30.050517 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:44:30.050608 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:44:30.051691 master-0 kubenswrapper[4790]: I1011 10:44:30.050624 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: I1011 10:44:30.522795 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:44:30.523905 master-2 kubenswrapper[4776]: I1011 10:44:30.522977 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:30.523905 master-2 kubenswrapper[4776]: I1011 10:44:30.523201 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv"
Oct 11 10:44:32.414529 master-2 kubenswrapper[4776]: I1011 10:44:32.414338 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b846b7bb4-7q7ph" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" containerID="cri-o://b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" gracePeriod=15
Oct 11 10:44:32.902622 master-2 kubenswrapper[4776]: I1011 10:44:32.902577 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-7q7ph_e0a2e987-f2d6-410a-966a-bd82ab791c00/console/0.log"
Oct 11 10:44:32.902844 master-2 kubenswrapper[4776]: I1011 10:44:32.902641 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph"
Oct 11 10:44:32.949487 master-2 kubenswrapper[4776]: I1011 10:44:32.949434 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"]
Oct 11 10:44:32.950281 master-2 kubenswrapper[4776]: E1011 10:44:32.950220 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console"
Oct 11 10:44:32.950281 master-2 kubenswrapper[4776]: I1011 10:44:32.950249 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console"
Oct 11 10:44:32.950638 master-2 kubenswrapper[4776]: I1011 10:44:32.950606 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console"
Oct 11 10:44:32.951265 master-2 kubenswrapper[4776]: I1011 10:44:32.951233 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:32.959824 master-2 kubenswrapper[4776]: I1011 10:44:32.959775 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"]
Oct 11 10:44:32.963660 master-2 kubenswrapper[4776]: I1011 10:44:32.963621 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963749 master-2 kubenswrapper[4776]: I1011 10:44:32.963690 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963749 master-2 kubenswrapper[4776]: I1011 10:44:32.963743 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963839 master-2 kubenswrapper[4776]: I1011 10:44:32.963817 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963887 master-2 kubenswrapper[4776]: I1011 10:44:32.963870 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963921 master-2 kubenswrapper[4776]: I1011 10:44:32.963908 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.963976 master-2 kubenswrapper[4776]: I1011 10:44:32.963955 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") "
Oct 11 10:44:32.964136 master-2 kubenswrapper[4776]: I1011 10:44:32.964108 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca" (OuterVolumeSpecName: "service-ca") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:44:32.964414 master-2 kubenswrapper[4776]: I1011 10:44:32.964377 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:32.965040 master-2 kubenswrapper[4776]: I1011 10:44:32.965007 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config" (OuterVolumeSpecName: "console-config") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:44:32.965346 master-2 kubenswrapper[4776]: I1011 10:44:32.965300 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:44:32.965889 master-2 kubenswrapper[4776]: I1011 10:44:32.965842 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:44:32.967952 master-2 kubenswrapper[4776]: I1011 10:44:32.967894 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:44:32.968921 master-2 kubenswrapper[4776]: I1011 10:44:32.968887 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:44:32.971624 master-2 kubenswrapper[4776]: I1011 10:44:32.971588 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw" (OuterVolumeSpecName: "kube-api-access-74gcw") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "kube-api-access-74gcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:44:32.981837 master-2 kubenswrapper[4776]: I1011 10:44:32.981778 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-7q7ph_e0a2e987-f2d6-410a-966a-bd82ab791c00/console/0.log"
Oct 11 10:44:32.981837 master-2 kubenswrapper[4776]: I1011 10:44:32.981830 4776 generic.go:334] "Generic (PLEG): container finished" podID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" exitCode=2
Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981862 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerDied","Data":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"}
Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981892 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerDied","Data":"43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa"}
Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981909 4776 scope.go:117] "RemoveContainer" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"
Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.982001 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph"
Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: I1011 10:44:33.021281 4776 scope.go:117] "RemoveContainer" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"
Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: E1011 10:44:33.021649 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": container with ID starting with b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f not found: ID does not exist" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"
Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: I1011 10:44:33.021689 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"} err="failed to get container status \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": rpc error: code = NotFound desc = could not find container \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": container with ID starting with b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f not found: ID does not exist"
Oct 11 10:44:33.035206 master-2 kubenswrapper[4776]: I1011 10:44:33.035137 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"]
Oct 11 10:44:33.038948 master-2 kubenswrapper[4776]: I1011 10:44:33.038900 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"]
Oct 11 10:44:33.065370 master-2 kubenswrapper[4776]: I1011 10:44:33.065310 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065370 master-2 kubenswrapper[4776]: I1011 10:44:33.065356 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065373 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065473 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065510 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065530 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065685 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065697 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065707 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065715 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065724 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065732 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:44:33.166958 master-2 kubenswrapper[4776]: I1011 10:44:33.166884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167702 master-2 kubenswrapper[4776]: I1011 10:44:33.167648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167774 master-2 kubenswrapper[4776]: I1011 10:44:33.167744 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167897 master-2 kubenswrapper[4776]: I1011 10:44:33.167867 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167937 master-2 kubenswrapper[4776]: I1011 10:44:33.167901 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167937 master-2 kubenswrapper[4776]: I1011 10:44:33.167926 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.167999 master-2 kubenswrapper[4776]: I1011 10:44:33.167962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.168556 master-2 kubenswrapper[4776]: I1011 10:44:33.168516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.168992 master-2 kubenswrapper[4776]: I1011 10:44:33.168952 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.168992 master-2 kubenswrapper[4776]: I1011 10:44:33.168983 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.169408 master-2 kubenswrapper[4776]: I1011 10:44:33.169371 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.170376 master-2 kubenswrapper[4776]: I1011 10:44:33.170341 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.171132 master-2 kubenswrapper[4776]: I1011 10:44:33.171089 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.191838 master-2 kubenswrapper[4776]: I1011 10:44:33.191787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.277996 master-2 kubenswrapper[4776]: I1011 10:44:33.277944 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82"
Oct 11 10:44:33.696287 master-2 kubenswrapper[4776]: I1011 10:44:33.696226 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"]
Oct 11 10:44:33.700298 master-2 kubenswrapper[4776]: W1011 10:44:33.700244 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac259b6_cf42_49b4_b1b7_76cc9072d059.slice/crio-88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21 WatchSource:0}: Error finding container 88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21: Status 404 returned error can't find the container with id 88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21
Oct 11 10:44:33.992316 master-2 kubenswrapper[4776]: I1011 10:44:33.992144 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerStarted","Data":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"}
Oct 11 10:44:33.992316 master-2 kubenswrapper[4776]: I1011 10:44:33.992238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerStarted","Data":"88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21"}
Oct 11 10:44:34.023440 master-2 kubenswrapper[4776]: I1011 10:44:34.023313 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f9d445f57-z6k82" podStartSLOduration=27.023291146 podStartE2EDuration="27.023291146s" podCreationTimestamp="2025-10-11 10:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:34.023237174 +0000 UTC m=+1108.807663933" watchObservedRunningTime="2025-10-11 10:44:34.023291146 +0000 UTC m=+1108.807717855"
Oct 11 10:44:34.072201 master-2 kubenswrapper[4776]: I1011 10:44:34.072103 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" path="/var/lib/kubelet/pods/e0a2e987-f2d6-410a-966a-bd82ab791c00/volumes"
Oct 11 10:44:34.638238 master-1 kubenswrapper[4771]: I1011 10:44:34.638132 4771 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:44:34.639591 master-1 kubenswrapper[4771]: I1011 10:44:34.638270 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="3fc4970d-4f34-4fc6-9791-6218f8e42eb9" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: I1011 10:44:35.048939 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]log ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]etcd excluded: ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]informer-sync ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld
Oct 11 10:44:35.049030 master-0 kubenswrapper[4790]: readyz check failed
Oct 11 10:44:35.050298 master-0 kubenswrapper[4790]: I1011 10:44:35.049075 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: I1011 10:44:35.521569 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]informer-sync
ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:35.522702 master-2 kubenswrapper[4776]: I1011 10:44:35.521638 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:37.500910 master-1 kubenswrapper[4771]: I1011 10:44:37.500791 4771 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:44:37.500910 master-1 kubenswrapper[4771]: I1011 10:44:37.500878 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:44:37.655544 master-0 kubenswrapper[4790]: I1011 10:44:37.655403 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f9d445f57-w4nwq" Oct 11 10:44:37.660280 master-0 kubenswrapper[4790]: I1011 10:44:37.660208 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f9d445f57-w4nwq" Oct 11 10:44:37.830847 master-1 kubenswrapper[4771]: I1011 10:44:37.830754 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"] Oct 11 10:44:39.316416 master-1 kubenswrapper[4771]: I1011 10:44:39.316319 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1" Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: I1011 10:44:40.049160 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:44:40.049249 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:44:40.050161 master-0 kubenswrapper[4790]: I1011 10:44:40.049278 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: I1011 10:44:40.520926 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 
10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:40.522106 master-2 kubenswrapper[4776]: I1011 10:44:40.521006 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:43.278323 master-2 kubenswrapper[4776]: I1011 10:44:43.278267 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:43.278944 master-2 kubenswrapper[4776]: I1011 10:44:43.278413 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:43.285166 master-2 kubenswrapper[4776]: I1011 10:44:43.285131 4776 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:44.071237 master-2 kubenswrapper[4776]: I1011 10:44:44.071147 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: I1011 10:44:45.051755 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:44:45.051839 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:44:45.052854 master-0 kubenswrapper[4790]: I1011 10:44:45.051861 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" 
podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: I1011 10:44:45.523304 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:45.523420 master-2 
kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: I1011 10:44:45.523410 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:46.518968 master-1 kubenswrapper[4771]: I1011 10:44:46.518894 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1" Oct 11 10:44:46.538129 master-1 kubenswrapper[4771]: I1011 10:44:46.538019 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1" Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: I1011 10:44:50.050826 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]log ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]etcd excluded: ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]etcd-readiness excluded: ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]informer-sync ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:50.050927 
master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: [-]shutdown failed: reason withheld Oct 11 10:44:50.050927 master-0 kubenswrapper[4790]: readyz check failed Oct 11 10:44:50.052206 master-0 kubenswrapper[4790]: I1011 10:44:50.050941 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: I1011 10:44:50.519493 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: 
[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:50.520899 master-2 kubenswrapper[4776]: I1011 10:44:50.519554 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:52.364575 master-2 kubenswrapper[4776]: I1011 10:44:52.364511 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364828 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" containerID="cri-o://ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364887 4776 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364949 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364975 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.365101 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" gracePeriod=135 Oct 11 10:44:52.366976 master-2 kubenswrapper[4776]: I1011 10:44:52.366942 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:44:52.367371 master-2 kubenswrapper[4776]: E1011 10:44:52.367343 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.367371 master-2 kubenswrapper[4776]: I1011 10:44:52.367363 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367384 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367393 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367432 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367442 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367455 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="setup" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367463 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="setup" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367475 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367482 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: E1011 10:44:52.367525 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" 
containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367534 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367732 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367779 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367799 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.368055 master-2 kubenswrapper[4776]: I1011 10:44:52.367808 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.368055 master-2 kubenswrapper[4776]: I1011 10:44:52.367843 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.452257 master-2 kubenswrapper[4776]: I1011 10:44:52.452190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.452494 master-2 kubenswrapper[4776]: I1011 10:44:52.452275 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.452494 master-2 kubenswrapper[4776]: I1011 10:44:52.452336 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553352 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553507 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:53.138842 master-2 kubenswrapper[4776]: I1011 10:44:53.138785 4776 generic.go:334] "Generic (PLEG): container finished" podID="2e6df740-3969-4dd7-8953-2c21514694b8" containerID="dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7" exitCode=0 Oct 11 10:44:53.139042 master-2 kubenswrapper[4776]: I1011 10:44:53.138853 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerDied","Data":"dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7"} Oct 11 10:44:53.142157 master-2 kubenswrapper[4776]: I1011 10:44:53.142122 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log" Oct 11 10:44:53.142873 master-2 kubenswrapper[4776]: I1011 10:44:53.142844 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" 
containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" exitCode=0 Oct 11 10:44:53.142873 master-2 kubenswrapper[4776]: I1011 10:44:53.142872 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" exitCode=0 Oct 11 10:44:53.142970 master-2 kubenswrapper[4776]: I1011 10:44:53.142882 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" exitCode=0 Oct 11 10:44:53.142970 master-2 kubenswrapper[4776]: I1011 10:44:53.142894 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" exitCode=2 Oct 11 10:44:53.166489 master-2 kubenswrapper[4776]: I1011 10:44:53.166397 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: I1011 10:44:53.308576 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: 
[+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:53.311325 master-2 kubenswrapper[4776]: I1011 10:44:53.308660 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Oct 11 10:44:54.547917 master-2 kubenswrapper[4776]: I1011 10:44:54.547857 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:54.588953 master-2 kubenswrapper[4776]: I1011 10:44:54.588884 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.588953 master-2 kubenswrapper[4776]: I1011 10:44:54.588963 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.589198 master-2 kubenswrapper[4776]: I1011 10:44:54.588996 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.589198 master-2 kubenswrapper[4776]: I1011 10:44:54.589026 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:54.589264 master-2 kubenswrapper[4776]: I1011 10:44:54.589226 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:54.589264 master-2 kubenswrapper[4776]: I1011 10:44:54.589253 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:54.598119 master-2 kubenswrapper[4776]: I1011 10:44:54.598065 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:44:54.691056 master-2 kubenswrapper[4776]: I1011 10:44:54.690977 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:54.691056 master-2 kubenswrapper[4776]: I1011 10:44:54.691024 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:55.044772 master-0 kubenswrapper[4790]: I1011 10:44:55.044649 4790 patch_prober.go:28] interesting pod/apiserver-656768b4df-9c8k6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.130.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.12:8443: connect: connection refused" start-of-body= Oct 11 10:44:55.045500 master-0 kubenswrapper[4790]: I1011 10:44:55.044846 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.130.0.12:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.12:8443: connect: connection refused" Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158633 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerDied","Data":"0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d"} Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158742 4776 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d" Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158755 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: I1011 10:44:55.521394 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 
10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: I1011 10:44:55.521512 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:56.354881 master-0 kubenswrapper[4790]: I1011 10:44:56.354803 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:44:56.498396 master-0 kubenswrapper[4790]: I1011 10:44:56.498279 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498396 master-0 kubenswrapper[4790]: I1011 10:44:56.498398 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498454 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: 
\"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498522 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498620 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498616 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498686 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498795 master-0 kubenswrapper[4790]: I1011 10:44:56.498792 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.498969 master-0 kubenswrapper[4790]: I1011 10:44:56.498933 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2kt7\" (UniqueName: \"kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7\") pod \"76dd8647-4ad5-4874-b2d2-dee16aab637a\" (UID: \"76dd8647-4ad5-4874-b2d2-dee16aab637a\") " Oct 11 10:44:56.499120 master-0 kubenswrapper[4790]: I1011 10:44:56.499084 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:56.499391 master-0 kubenswrapper[4790]: I1011 10:44:56.499336 4790 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.499451 master-0 kubenswrapper[4790]: I1011 10:44:56.499394 4790 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.499834 master-0 kubenswrapper[4790]: I1011 10:44:56.499755 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:56.499881 master-0 kubenswrapper[4790]: I1011 10:44:56.499791 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:56.502481 master-0 kubenswrapper[4790]: I1011 10:44:56.502438 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:56.503109 master-0 kubenswrapper[4790]: I1011 10:44:56.503001 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:56.503418 master-0 kubenswrapper[4790]: I1011 10:44:56.503278 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:56.503547 master-0 kubenswrapper[4790]: I1011 10:44:56.503497 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7" (OuterVolumeSpecName: "kube-api-access-k2kt7") pod "76dd8647-4ad5-4874-b2d2-dee16aab637a" (UID: "76dd8647-4ad5-4874-b2d2-dee16aab637a"). InnerVolumeSpecName "kube-api-access-k2kt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600003 4790 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600049 4790 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-encryption-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600061 4790 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-serving-cert\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600071 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2kt7\" (UniqueName: \"kubernetes.io/projected/76dd8647-4ad5-4874-b2d2-dee16aab637a-kube-api-access-k2kt7\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600085 4790 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76dd8647-4ad5-4874-b2d2-dee16aab637a-audit-policies\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.600210 master-0 kubenswrapper[4790]: I1011 10:44:56.600093 4790 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76dd8647-4ad5-4874-b2d2-dee16aab637a-etcd-client\") on node \"master-0\" DevicePath \"\"" Oct 11 10:44:56.717503 master-0 kubenswrapper[4790]: I1011 10:44:56.717375 4790 generic.go:334] "Generic (PLEG): container finished" podID="76dd8647-4ad5-4874-b2d2-dee16aab637a" 
containerID="35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c" exitCode=0 Oct 11 10:44:56.717503 master-0 kubenswrapper[4790]: I1011 10:44:56.717456 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" event={"ID":"76dd8647-4ad5-4874-b2d2-dee16aab637a","Type":"ContainerDied","Data":"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c"} Oct 11 10:44:56.717503 master-0 kubenswrapper[4790]: I1011 10:44:56.717509 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" event={"ID":"76dd8647-4ad5-4874-b2d2-dee16aab637a","Type":"ContainerDied","Data":"64ce93912fbe2ce263f72579fc62109333989150c0bd59c119eb0bd06f24caa2"} Oct 11 10:44:56.717857 master-0 kubenswrapper[4790]: I1011 10:44:56.717508 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-9c8k6" Oct 11 10:44:56.717857 master-0 kubenswrapper[4790]: I1011 10:44:56.717542 4790 scope.go:117] "RemoveContainer" containerID="35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c" Oct 11 10:44:56.731772 master-0 kubenswrapper[4790]: I1011 10:44:56.731675 4790 scope.go:117] "RemoveContainer" containerID="ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331" Oct 11 10:44:56.749364 master-0 kubenswrapper[4790]: I1011 10:44:56.749288 4790 scope.go:117] "RemoveContainer" containerID="35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c" Oct 11 10:44:56.750219 master-0 kubenswrapper[4790]: E1011 10:44:56.750121 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c\": container with ID starting with 35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c not found: ID does not exist" 
containerID="35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c" Oct 11 10:44:56.750360 master-0 kubenswrapper[4790]: I1011 10:44:56.750227 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c"} err="failed to get container status \"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c\": rpc error: code = NotFound desc = could not find container \"35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c\": container with ID starting with 35472050aef2a23d480cd0b8022e9ecdb2080124eec5c8fed4e4fce84c05d72c not found: ID does not exist" Oct 11 10:44:56.750360 master-0 kubenswrapper[4790]: I1011 10:44:56.750304 4790 scope.go:117] "RemoveContainer" containerID="ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331" Oct 11 10:44:56.751049 master-0 kubenswrapper[4790]: E1011 10:44:56.750946 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331\": container with ID starting with ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331 not found: ID does not exist" containerID="ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331" Oct 11 10:44:56.751183 master-0 kubenswrapper[4790]: I1011 10:44:56.751080 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331"} err="failed to get container status \"ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331\": rpc error: code = NotFound desc = could not find container \"ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331\": container with ID starting with ace4f650a5b9b61d268cf3bf1c5cdb946038997e7050cf67fa731d305fe20331 not found: ID does not exist" Oct 11 10:44:56.755184 master-0 
kubenswrapper[4790]: I1011 10:44:56.755060 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:44:56.760595 master-0 kubenswrapper[4790]: I1011 10:44:56.760521 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-9c8k6"] Oct 11 10:44:58.303670 master-0 kubenswrapper[4790]: I1011 10:44:58.303572 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" path="/var/lib/kubelet/pods/76dd8647-4ad5-4874-b2d2-dee16aab637a/volumes" Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: I1011 10:44:58.320783 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok 
Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:58.322420 master-2 kubenswrapper[4776]: I1011 10:44:58.320855 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:59.403085 master-1 kubenswrapper[4771]: I1011 10:44:59.402913 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"] Oct 11 10:44:59.404566 master-1 kubenswrapper[4771]: E1011 10:44:59.403301 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78689bdc-0258-45eb-8e5b-253911c61c79" containerName="installer" Oct 11 10:44:59.404566 master-1 kubenswrapper[4771]: I1011 10:44:59.403322 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="78689bdc-0258-45eb-8e5b-253911c61c79" containerName="installer"
Oct 11 10:44:59.404566 master-1 kubenswrapper[4771]: I1011 10:44:59.403568 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="78689bdc-0258-45eb-8e5b-253911c61c79" containerName="installer"
Oct 11 10:44:59.413032 master-1 kubenswrapper[4771]: I1011 10:44:59.412913 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.429551 master-1 kubenswrapper[4771]: I1011 10:44:59.428588 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"]
Oct 11 10:44:59.442102 master-1 kubenswrapper[4771]: I1011 10:44:59.441930 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.442102 master-1 kubenswrapper[4771]: I1011 10:44:59.442013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.442102 master-1 kubenswrapper[4771]: I1011 10:44:59.442060 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8rx\" (UniqueName: \"kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.543672 master-1 kubenswrapper[4771]: I1011 10:44:59.543586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.544027 master-1 kubenswrapper[4771]: I1011 10:44:59.543690 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.544027 master-1 kubenswrapper[4771]: I1011 10:44:59.543771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8rx\" (UniqueName: \"kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.546445 master-1 kubenswrapper[4771]: I1011 10:44:59.546378 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.546964 master-1 kubenswrapper[4771]: I1011 10:44:59.546917 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.577526 master-1 kubenswrapper[4771]: I1011 10:44:59.575766 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8rx\" (UniqueName: \"kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx\") pod \"redhat-operators-fn27x\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") " pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.602008 master-1 kubenswrapper[4771]: I1011 10:44:59.601889 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"]
Oct 11 10:44:59.603807 master-1 kubenswrapper[4771]: I1011 10:44:59.603759 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.617627 master-1 kubenswrapper[4771]: I1011 10:44:59.617540 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"]
Oct 11 10:44:59.645461 master-1 kubenswrapper[4771]: I1011 10:44:59.645046 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.645461 master-1 kubenswrapper[4771]: I1011 10:44:59.645146 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.645461 master-1 kubenswrapper[4771]: I1011 10:44:59.645183 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjs9q\" (UniqueName: \"kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.746771 master-1 kubenswrapper[4771]: I1011 10:44:59.746615 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.746771 master-1 kubenswrapper[4771]: I1011 10:44:59.746695 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjs9q\" (UniqueName: \"kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.746771 master-1 kubenswrapper[4771]: I1011 10:44:59.746743 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.747465 master-1 kubenswrapper[4771]: I1011 10:44:59.747408 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.747546 master-1 kubenswrapper[4771]: I1011 10:44:59.747487 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.754227 master-1 kubenswrapper[4771]: I1011 10:44:59.754182 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:44:59.773736 master-1 kubenswrapper[4771]: I1011 10:44:59.773582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjs9q\" (UniqueName: \"kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q\") pod \"redhat-marketplace-btlwb\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.935837 master-2 kubenswrapper[4776]: I1011 10:44:59.935729 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-10-master-2"]
Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: E1011 10:44:59.936005 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer"
Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.936023 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer"
Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.936135 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer"
Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.937080 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-2"
Oct 11 10:44:59.941964 master-2 kubenswrapper[4776]: I1011 10:44:59.940919 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb"
Oct 11 10:44:59.951808 master-1 kubenswrapper[4771]: I1011 10:44:59.949992 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:44:59.952302 master-2 kubenswrapper[4776]: I1011 10:44:59.952239 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-2"]
Oct 11 10:45:00.040476 master-1 kubenswrapper[4771]: I1011 10:45:00.038667 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"]
Oct 11 10:45:00.060745 master-2 kubenswrapper[4776]: I1011 10:45:00.060639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.060745 master-2 kubenswrapper[4776]: I1011 10:45:00.060737 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.061002 master-2 kubenswrapper[4776]: I1011 10:45:00.060910 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.163117 master-2 kubenswrapper[4776]: I1011 10:45:00.163062 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.163699 master-2 kubenswrapper[4776]: I1011 10:45:00.163647 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.163866 master-2 kubenswrapper[4776]: I1011 10:45:00.163845 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.164197 master-2 kubenswrapper[4776]: I1011 10:45:00.163761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.164197 master-2 kubenswrapper[4776]: I1011 10:45:00.163976 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.189732 master-2 kubenswrapper[4776]: I1011 10:45:00.189586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.217404 master-2 kubenswrapper[4776]: I1011 10:45:00.217364 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"]
Oct 11 10:45:00.218836 master-2 kubenswrapper[4776]: I1011 10:45:00.218817 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.221299 master-2 kubenswrapper[4776]: I1011 10:45:00.221263 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-hbjq2"
Oct 11 10:45:00.221638 master-2 kubenswrapper[4776]: I1011 10:45:00.221541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 10:45:00.229793 master-2 kubenswrapper[4776]: I1011 10:45:00.229551 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"]
Oct 11 10:45:00.266006 master-2 kubenswrapper[4776]: I1011 10:45:00.265961 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.266273 master-2 kubenswrapper[4776]: I1011 10:45:00.266259 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.266360 master-2 kubenswrapper[4776]: I1011 10:45:00.266348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.279615 master-2 kubenswrapper[4776]: I1011 10:45:00.279585 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-2"
Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368190 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368207 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.369403 master-2 kubenswrapper[4776]: I1011 10:45:00.369360 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.383585 master-2 kubenswrapper[4776]: I1011 10:45:00.383517 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.385116 master-1 kubenswrapper[4771]: I1011 10:45:00.385057 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"]
Oct 11 10:45:00.390059 master-1 kubenswrapper[4771]: W1011 10:45:00.390014 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7fddf6_d341_4992_bba8_9d5fa5b1e7a1.slice/crio-07f0178441f3d50314b9519115a4f3d7e321d0618d45e6c54bc9d3b3f3be9ea8 WatchSource:0}: Error finding container 07f0178441f3d50314b9519115a4f3d7e321d0618d45e6c54bc9d3b3f3be9ea8: Status 404 returned error can't find the container with id 07f0178441f3d50314b9519115a4f3d7e321d0618d45e6c54bc9d3b3f3be9ea8
Oct 11 10:45:00.391027 master-2 kubenswrapper[4776]: I1011 10:45:00.390829 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: I1011 10:45:00.520598 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: I1011 10:45:00.520780 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:45:00.540780 master-2 kubenswrapper[4776]: I1011 10:45:00.540074 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:00.666841 master-2 kubenswrapper[4776]: I1011 10:45:00.666763 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-2"]
Oct 11 10:45:00.674858 master-2 kubenswrapper[4776]: W1011 10:45:00.674798 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod56e683e1_6c74_4998_ac94_05f58a65965f.slice/crio-a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88 WatchSource:0}: Error finding container a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88: Status 404 returned error can't find the container with id a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88
Oct 11 10:45:00.926303 master-1 kubenswrapper[4771]: I1011 10:45:00.926233 4771 generic.go:334] "Generic (PLEG): container finished" podID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerID="563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245" exitCode=0
Oct 11 10:45:00.926303 master-1 kubenswrapper[4771]: I1011 10:45:00.926313 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerDied","Data":"563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245"}
Oct 11 10:45:00.927161 master-1 kubenswrapper[4771]: I1011 10:45:00.926341 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerStarted","Data":"07f0178441f3d50314b9519115a4f3d7e321d0618d45e6c54bc9d3b3f3be9ea8"}
Oct 11 10:45:00.927614 master-1 kubenswrapper[4771]: I1011 10:45:00.927596 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 10:45:00.927933 master-1 kubenswrapper[4771]: I1011 10:45:00.927895 4771 generic.go:334] "Generic (PLEG): container finished" podID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerID="09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132" exitCode=0
Oct 11 10:45:00.927933 master-1 kubenswrapper[4771]: I1011 10:45:00.927923 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerDied","Data":"09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132"}
Oct 11 10:45:00.928029 master-1 kubenswrapper[4771]: I1011 10:45:00.927939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerStarted","Data":"6a5c5b9ff534c3108ab033d2a189e452b9f1cc8c2fa78f601306eadfc2f6563e"}
Oct 11 10:45:00.981145 master-2 kubenswrapper[4776]: I1011 10:45:00.981092 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"]
Oct 11 10:45:01.197071 master-2 kubenswrapper[4776]: I1011 10:45:01.197018 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerStarted","Data":"903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48"}
Oct 11 10:45:01.197071 master-2 kubenswrapper[4776]: I1011 10:45:01.197075 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerStarted","Data":"a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88"}
Oct 11 10:45:01.199027 master-2 kubenswrapper[4776]: I1011 10:45:01.198969 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerStarted","Data":"84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3"}
Oct 11 10:45:01.199027 master-2 kubenswrapper[4776]: I1011 10:45:01.199009 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerStarted","Data":"7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d"}
Oct 11 10:45:01.219531 master-2 kubenswrapper[4776]: I1011 10:45:01.219363 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-10-master-2" podStartSLOduration=2.219343671 podStartE2EDuration="2.219343671s" podCreationTimestamp="2025-10-11 10:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:45:01.2163282 +0000 UTC m=+1136.000754909" watchObservedRunningTime="2025-10-11 10:45:01.219343671 +0000 UTC m=+1136.003770380"
Oct 11 10:45:01.242524 master-2 kubenswrapper[4776]: I1011 10:45:01.242427 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" podStartSLOduration=1.242412165 podStartE2EDuration="1.242412165s" podCreationTimestamp="2025-10-11 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:45:01.238746796 +0000 UTC m=+1136.023173505" watchObservedRunningTime="2025-10-11 10:45:01.242412165 +0000 UTC m=+1136.026838874"
Oct 11 10:45:01.803615 master-2 kubenswrapper[4776]: I1011 10:45:01.802261 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lq47"]
Oct 11 10:45:01.810462 master-2 kubenswrapper[4776]: I1011 10:45:01.808461 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.821885 master-2 kubenswrapper[4776]: I1011 10:45:01.821840 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"]
Oct 11 10:45:01.893876 master-2 kubenswrapper[4776]: I1011 10:45:01.893762 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.894179 master-2 kubenswrapper[4776]: I1011 10:45:01.893891 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.894179 master-2 kubenswrapper[4776]: I1011 10:45:01.893914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.938342 master-1 kubenswrapper[4771]: I1011 10:45:01.937632 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerStarted","Data":"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8"}
Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995052 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995109 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.996314 master-2 kubenswrapper[4776]: I1011 10:45:01.995628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:01.996314 master-2 kubenswrapper[4776]: I1011 10:45:01.995702 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:02.003079 master-1 kubenswrapper[4771]: I1011 10:45:02.002933 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r8hdr"]
Oct 11 10:45:02.005198 master-1 kubenswrapper[4771]: I1011 10:45:02.005142 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.020421 master-2 kubenswrapper[4776]: I1011 10:45:02.020374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:02.024524 master-1 kubenswrapper[4771]: I1011 10:45:02.024469 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8hdr"]
Oct 11 10:45:02.083701 master-1 kubenswrapper[4771]: I1011 10:45:02.083624 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plx55\" (UniqueName: \"kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.084036 master-1 kubenswrapper[4771]: I1011 10:45:02.083740 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.084036 master-1 kubenswrapper[4771]: I1011 10:45:02.083788 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.158086 master-2 kubenswrapper[4776]: I1011 10:45:02.158020 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:02.185284 master-1 kubenswrapper[4771]: I1011 10:45:02.185210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plx55\" (UniqueName: \"kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.185553 master-1 kubenswrapper[4771]: I1011 10:45:02.185330 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.185553 master-1 kubenswrapper[4771]: I1011 10:45:02.185397 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.185965 master-1 kubenswrapper[4771]: I1011 10:45:02.185931 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.186202 master-1 kubenswrapper[4771]: I1011 10:45:02.186147 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.209504 master-2 kubenswrapper[4776]: I1011 10:45:02.208722 4776 generic.go:334] "Generic (PLEG): container finished" podID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerID="84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3" exitCode=0
Oct 11 10:45:02.209752 master-2 kubenswrapper[4776]: I1011 10:45:02.209532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerDied","Data":"84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3"}
Oct 11 10:45:02.217964 master-1 kubenswrapper[4771]: I1011 10:45:02.217898 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plx55\" (UniqueName: \"kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55\") pod \"community-operators-r8hdr\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.328930 master-1 kubenswrapper[4771]: I1011 10:45:02.328825 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:02.591045 master-2 kubenswrapper[4776]: I1011 10:45:02.590976 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"]
Oct 11 10:45:02.594754 master-2 kubenswrapper[4776]: W1011 10:45:02.594700 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89013c68_6873_4f47_bd39_d7eae57cd89b.slice/crio-94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c WatchSource:0}: Error finding container 94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c: Status 404 returned error can't find the container with id 94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c
Oct 11 10:45:02.720654 master-0 kubenswrapper[4790]: I1011 10:45:02.720543 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"]
Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: E1011 10:45:02.720855 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver"
Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: I1011 10:45:02.720874 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver"
Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: E1011 10:45:02.720894 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="fix-audit-permissions"
Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: I1011 10:45:02.720903 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="fix-audit-permissions"
Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: I1011 10:45:02.720985 4790 memory_manager.go:354] "RemoveStaleState removing state"
podUID="76dd8647-4ad5-4874-b2d2-dee16aab637a" containerName="oauth-apiserver" Oct 11 10:45:02.721830 master-0 kubenswrapper[4790]: I1011 10:45:02.721594 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.725090 master-0 kubenswrapper[4790]: I1011 10:45:02.725033 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:45:02.725090 master-0 kubenswrapper[4790]: I1011 10:45:02.725056 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:45:02.725380 master-0 kubenswrapper[4790]: I1011 10:45:02.725312 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:45:02.726027 master-0 kubenswrapper[4790]: I1011 10:45:02.725934 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:45:02.726094 master-0 kubenswrapper[4790]: I1011 10:45:02.726055 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:45:02.726094 master-0 kubenswrapper[4790]: I1011 10:45:02.725953 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:45:02.726183 master-0 kubenswrapper[4790]: I1011 10:45:02.726102 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:45:02.726228 master-0 kubenswrapper[4790]: I1011 10:45:02.726150 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr" Oct 11 10:45:02.728830 master-0 kubenswrapper[4790]: I1011 10:45:02.728795 4790 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:45:02.742879 master-0 kubenswrapper[4790]: I1011 10:45:02.742582 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"] Oct 11 10:45:02.790673 master-1 kubenswrapper[4771]: I1011 10:45:02.790618 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8hdr"] Oct 11 10:45:02.794818 master-1 kubenswrapper[4771]: W1011 10:45:02.794767 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e964e77_4315_44b2_a34f_d0e2249e9a72.slice/crio-7305b66b7aeba582a9f93dc93062bbbc5bb8eccd416f40fbbfbc9aebdb769b49 WatchSource:0}: Error finding container 7305b66b7aeba582a9f93dc93062bbbc5bb8eccd416f40fbbfbc9aebdb769b49: Status 404 returned error can't find the container with id 7305b66b7aeba582a9f93dc93062bbbc5bb8eccd416f40fbbfbc9aebdb769b49 Oct 11 10:45:02.883566 master-0 kubenswrapper[4790]: I1011 10:45:02.883455 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883566 master-0 kubenswrapper[4790]: I1011 10:45:02.883538 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883566 master-0 kubenswrapper[4790]: I1011 10:45:02.883567 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-client\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883970 master-0 kubenswrapper[4790]: I1011 10:45:02.883590 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvvj\" (UniqueName: \"kubernetes.io/projected/bc183705-096c-4af1-adf7-d3cd0e4532e1-kube-api-access-gjvvj\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883970 master-0 kubenswrapper[4790]: I1011 10:45:02.883608 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-encryption-config\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883970 master-0 kubenswrapper[4790]: I1011 10:45:02.883629 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-dir\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883970 master-0 kubenswrapper[4790]: I1011 10:45:02.883645 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-serving-cert\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " 
pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.883970 master-0 kubenswrapper[4790]: I1011 10:45:02.883662 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-policies\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.889252 master-1 kubenswrapper[4771]: I1011 10:45:02.889155 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b846b7bb4-xmv6l" podUID="a65b0165-5747-48c9-9179-86f19861dd68" containerName="console" containerID="cri-o://6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6" gracePeriod=15 Oct 11 10:45:02.945447 master-1 kubenswrapper[4771]: I1011 10:45:02.945396 4771 generic.go:334] "Generic (PLEG): container finished" podID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerID="63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8" exitCode=0 Oct 11 10:45:02.945800 master-1 kubenswrapper[4771]: I1011 10:45:02.945468 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerDied","Data":"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8"} Oct 11 10:45:02.948822 master-1 kubenswrapper[4771]: I1011 10:45:02.948790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerStarted","Data":"7305b66b7aeba582a9f93dc93062bbbc5bb8eccd416f40fbbfbc9aebdb769b49"} Oct 11 10:45:02.950514 master-1 kubenswrapper[4771]: I1011 10:45:02.950484 4771 generic.go:334] "Generic (PLEG): container finished" podID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" 
containerID="b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d" exitCode=0 Oct 11 10:45:02.950514 master-1 kubenswrapper[4771]: I1011 10:45:02.950511 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerDied","Data":"b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d"} Oct 11 10:45:02.985312 master-0 kubenswrapper[4790]: I1011 10:45:02.985156 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-encryption-config\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985312 master-0 kubenswrapper[4790]: I1011 10:45:02.985235 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-dir\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985312 master-0 kubenswrapper[4790]: I1011 10:45:02.985262 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-serving-cert\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985312 master-0 kubenswrapper[4790]: I1011 10:45:02.985285 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-policies\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") 
" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985607 master-0 kubenswrapper[4790]: I1011 10:45:02.985356 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985607 master-0 kubenswrapper[4790]: I1011 10:45:02.985394 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985607 master-0 kubenswrapper[4790]: I1011 10:45:02.985421 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-client\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.985607 master-0 kubenswrapper[4790]: I1011 10:45:02.985443 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjvvj\" (UniqueName: \"kubernetes.io/projected/bc183705-096c-4af1-adf7-d3cd0e4532e1-kube-api-access-gjvvj\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.986442 master-0 kubenswrapper[4790]: I1011 10:45:02.986378 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-dir\") pod 
\"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.986882 master-0 kubenswrapper[4790]: I1011 10:45:02.986841 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.987154 master-0 kubenswrapper[4790]: I1011 10:45:02.987108 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.987154 master-0 kubenswrapper[4790]: I1011 10:45:02.987136 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc183705-096c-4af1-adf7-d3cd0e4532e1-audit-policies\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.989243 master-0 kubenswrapper[4790]: I1011 10:45:02.989179 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-etcd-client\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.989576 master-0 kubenswrapper[4790]: I1011 10:45:02.989533 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-serving-cert\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:02.991360 master-0 kubenswrapper[4790]: I1011 10:45:02.991265 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bc183705-096c-4af1-adf7-d3cd0e4532e1-encryption-config\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:03.013411 master-0 kubenswrapper[4790]: I1011 10:45:03.013301 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjvvj\" (UniqueName: \"kubernetes.io/projected/bc183705-096c-4af1-adf7-d3cd0e4532e1-kube-api-access-gjvvj\") pod \"apiserver-68f4c55ff4-nk86r\" (UID: \"bc183705-096c-4af1-adf7-d3cd0e4532e1\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:03.048177 master-0 kubenswrapper[4790]: I1011 10:45:03.048088 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214396 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerID="97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e" exitCode=0 Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214444 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e"} Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214493 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerStarted","Data":"94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c"} Oct 11 10:45:03.215938 master-2 kubenswrapper[4776]: I1011 10:45:03.215663 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: I1011 10:45:03.307157 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: 
[+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:03.308414 master-2 kubenswrapper[4776]: I1011 10:45:03.307221 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Oct 11 10:45:03.308414 master-2 kubenswrapper[4776]: I1011 10:45:03.307312 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: I1011 10:45:03.312117 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:03.312153 master-2 
kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:03.313455 master-2 kubenswrapper[4776]: I1011 10:45:03.312171 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:03.404837 master-1 kubenswrapper[4771]: I1011 10:45:03.404777 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-xmv6l_a65b0165-5747-48c9-9179-86f19861dd68/console/0.log" Oct 11 10:45:03.404980 master-1 kubenswrapper[4771]: I1011 10:45:03.404856 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:45:03.476071 master-0 kubenswrapper[4790]: I1011 10:45:03.475991 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"]
Oct 11 10:45:03.484763 master-0 kubenswrapper[4790]: W1011 10:45:03.484486 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc183705_096c_4af1_adf7_d3cd0e4532e1.slice/crio-cc5e24850d0f6f614e758fd3bf1c6e67c3053d70b28fafb79a2d04da058ebe3c WatchSource:0}: Error finding container cc5e24850d0f6f614e758fd3bf1c6e67c3053d70b28fafb79a2d04da058ebe3c: Status 404 returned error can't find the container with id cc5e24850d0f6f614e758fd3bf1c6e67c3053d70b28fafb79a2d04da058ebe3c
Oct 11 10:45:03.505694 master-1 kubenswrapper[4771]: I1011 10:45:03.505642 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.505867 master-1 kubenswrapper[4771]: I1011 10:45:03.505690 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.505867 master-1 kubenswrapper[4771]: I1011 10:45:03.505742 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpp7h\" (UniqueName: \"kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.505867 master-1 kubenswrapper[4771]: I1011 10:45:03.505778 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.506250 master-1 kubenswrapper[4771]: I1011 10:45:03.506216 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.506653 master-1 kubenswrapper[4771]: I1011 10:45:03.506250 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.506696 master-1 kubenswrapper[4771]: I1011 10:45:03.506676 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle\") pod \"a65b0165-5747-48c9-9179-86f19861dd68\" (UID: \"a65b0165-5747-48c9-9179-86f19861dd68\") "
Oct 11 10:45:03.506746 master-1 kubenswrapper[4771]: I1011 10:45:03.506583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca" (OuterVolumeSpecName: "service-ca") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:45:03.507075 master-1 kubenswrapper[4771]: I1011 10:45:03.507012 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config" (OuterVolumeSpecName: "console-config") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:45:03.507294 master-1 kubenswrapper[4771]: I1011 10:45:03.507264 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-console-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.507294 master-1 kubenswrapper[4771]: I1011 10:45:03.507291 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-service-ca\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.508074 master-1 kubenswrapper[4771]: I1011 10:45:03.508020 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:45:03.508135 master-1 kubenswrapper[4771]: I1011 10:45:03.508089 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:45:03.510894 master-1 kubenswrapper[4771]: I1011 10:45:03.510837 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h" (OuterVolumeSpecName: "kube-api-access-qpp7h") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "kube-api-access-qpp7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:45:03.511320 master-1 kubenswrapper[4771]: I1011 10:45:03.511272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:45:03.511532 master-1 kubenswrapper[4771]: I1011 10:45:03.511498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a65b0165-5747-48c9-9179-86f19861dd68" (UID: "a65b0165-5747-48c9-9179-86f19861dd68"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:45:03.544092 master-2 kubenswrapper[4776]: I1011 10:45:03.544048 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:03.609606 master-1 kubenswrapper[4771]: I1011 10:45:03.609400 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpp7h\" (UniqueName: \"kubernetes.io/projected/a65b0165-5747-48c9-9179-86f19861dd68-kube-api-access-qpp7h\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.609606 master-1 kubenswrapper[4771]: I1011 10:45:03.609459 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.609606 master-1 kubenswrapper[4771]: I1011 10:45:03.609477 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-oauth-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.609606 master-1 kubenswrapper[4771]: I1011 10:45:03.609492 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a65b0165-5747-48c9-9179-86f19861dd68-console-oauth-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.609606 master-1 kubenswrapper[4771]: I1011 10:45:03.609503 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65b0165-5747-48c9-9179-86f19861dd68-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:45:03.617667 master-2 kubenswrapper[4776]: I1011 10:45:03.617584 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") "
Oct 11 10:45:03.617881 master-2 kubenswrapper[4776]: I1011 10:45:03.617692 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") "
Oct 11 10:45:03.617881 master-2 kubenswrapper[4776]: I1011 10:45:03.617763 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") "
Oct 11 10:45:03.618093 master-2 kubenswrapper[4776]: I1011 10:45:03.618053 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:45:03.620902 master-2 kubenswrapper[4776]: I1011 10:45:03.620873 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:45:03.621028 master-2 kubenswrapper[4776]: I1011 10:45:03.620988 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp" (OuterVolumeSpecName: "kube-api-access-j4gqp") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "kube-api-access-j4gqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719054 4776 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") on node \"master-2\" DevicePath \"\""
Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719096 4776 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") on node \"master-2\" DevicePath \"\""
Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719107 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") on node \"master-2\" DevicePath \"\""
Oct 11 10:45:03.760058 master-0 kubenswrapper[4790]: I1011 10:45:03.759969 4790 generic.go:334] "Generic (PLEG): container finished" podID="bc183705-096c-4af1-adf7-d3cd0e4532e1" containerID="be32288dd089ab960bc2afabfe55a2399594c886fbc3d68706d27c242828cb8b" exitCode=0
Oct 11 10:45:03.760799 master-0 kubenswrapper[4790]: I1011 10:45:03.760073 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" event={"ID":"bc183705-096c-4af1-adf7-d3cd0e4532e1","Type":"ContainerDied","Data":"be32288dd089ab960bc2afabfe55a2399594c886fbc3d68706d27c242828cb8b"}
Oct 11 10:45:03.760799 master-0 kubenswrapper[4790]: I1011 10:45:03.760130 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" event={"ID":"bc183705-096c-4af1-adf7-d3cd0e4532e1","Type":"ContainerStarted","Data":"cc5e24850d0f6f614e758fd3bf1c6e67c3053d70b28fafb79a2d04da058ebe3c"}
Oct 11 10:45:03.961161 master-1 kubenswrapper[4771]: I1011 10:45:03.960977 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerID="77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1" exitCode=0
Oct 11 10:45:03.961961 master-1 kubenswrapper[4771]: I1011 10:45:03.961156 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerDied","Data":"77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1"}
Oct 11 10:45:03.966445 master-1 kubenswrapper[4771]: I1011 10:45:03.966382 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerStarted","Data":"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809"}
Oct 11 10:45:03.968896 master-1 kubenswrapper[4771]: I1011 10:45:03.968856 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-xmv6l_a65b0165-5747-48c9-9179-86f19861dd68/console/0.log"
Oct 11 10:45:03.969022 master-1 kubenswrapper[4771]: I1011 10:45:03.968926 4771 generic.go:334] "Generic (PLEG): container finished" podID="a65b0165-5747-48c9-9179-86f19861dd68" containerID="6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6" exitCode=2
Oct 11 10:45:03.969236 master-1 kubenswrapper[4771]: I1011 10:45:03.969072 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-xmv6l"
Oct 11 10:45:03.969445 master-1 kubenswrapper[4771]: I1011 10:45:03.969402 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-xmv6l" event={"ID":"a65b0165-5747-48c9-9179-86f19861dd68","Type":"ContainerDied","Data":"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"}
Oct 11 10:45:03.969548 master-1 kubenswrapper[4771]: I1011 10:45:03.969458 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-xmv6l" event={"ID":"a65b0165-5747-48c9-9179-86f19861dd68","Type":"ContainerDied","Data":"2e652727629cf31e7de5014abdf61de5e97f13fd0cbfe170fa06452ef6ed0070"}
Oct 11 10:45:03.969548 master-1 kubenswrapper[4771]: I1011 10:45:03.969490 4771 scope.go:117] "RemoveContainer" containerID="6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"
Oct 11 10:45:03.975509 master-1 kubenswrapper[4771]: I1011 10:45:03.975444 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerStarted","Data":"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70"}
Oct 11 10:45:04.000718 master-1 kubenswrapper[4771]: I1011 10:45:04.000646 4771 scope.go:117] "RemoveContainer" containerID="6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"
Oct 11 10:45:04.001344 master-1 kubenswrapper[4771]: E1011 10:45:04.001282 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6\": container with ID starting with 6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6 not found: ID does not exist" containerID="6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"
Oct 11 10:45:04.001470 master-1 kubenswrapper[4771]: I1011 10:45:04.001346 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6"} err="failed to get container status \"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6\": rpc error: code = NotFound desc = could not find container \"6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6\": container with ID starting with 6930f37f5d40f6d2be4d6635242240c2b455d30927958a5c8bf12f960d07b1a6 not found: ID does not exist"
Oct 11 10:45:04.023816 master-1 kubenswrapper[4771]: I1011 10:45:04.023716 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btlwb" podStartSLOduration=2.534020286 podStartE2EDuration="5.023688254s" podCreationTimestamp="2025-10-11 10:44:59 +0000 UTC" firstStartedPulling="2025-10-11 10:45:00.927553969 +0000 UTC m=+1132.901780410" lastFinishedPulling="2025-10-11 10:45:03.417221897 +0000 UTC m=+1135.391448378" observedRunningTime="2025-10-11 10:45:04.008464201 +0000 UTC m=+1135.982690722" watchObservedRunningTime="2025-10-11 10:45:04.023688254 +0000 UTC m=+1135.997914705"
Oct 11 10:45:04.048393 master-1 kubenswrapper[4771]: I1011 10:45:04.046407 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fn27x" podStartSLOduration=2.500884562 podStartE2EDuration="5.04638057s" podCreationTimestamp="2025-10-11 10:44:59 +0000 UTC" firstStartedPulling="2025-10-11 10:45:00.930107912 +0000 UTC m=+1132.904334343" lastFinishedPulling="2025-10-11 10:45:03.47560391 +0000 UTC m=+1135.449830351" observedRunningTime="2025-10-11 10:45:04.041994535 +0000 UTC m=+1136.016220976" watchObservedRunningTime="2025-10-11 10:45:04.04638057 +0000 UTC m=+1136.020607021"
Oct 11 10:45:04.068226 master-1 kubenswrapper[4771]: I1011 10:45:04.068139 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"]
Oct 11 10:45:04.071093 master-1 kubenswrapper[4771]: I1011 10:45:04.071037 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b846b7bb4-xmv6l"]
Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221421 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerDied","Data":"7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d"}
Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221479 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d"
Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221435 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"
Oct 11 10:45:04.223776 master-2 kubenswrapper[4776]: I1011 10:45:04.223745 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerID="d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22" exitCode=0
Oct 11 10:45:04.223776 master-2 kubenswrapper[4776]: I1011 10:45:04.223771 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22"}
Oct 11 10:45:04.445481 master-1 kubenswrapper[4771]: I1011 10:45:04.445399 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65b0165-5747-48c9-9179-86f19861dd68" path="/var/lib/kubelet/pods/a65b0165-5747-48c9-9179-86f19861dd68/volumes"
Oct 11 10:45:04.770305 master-0 kubenswrapper[4790]: I1011 10:45:04.770185 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" event={"ID":"bc183705-096c-4af1-adf7-d3cd0e4532e1","Type":"ContainerStarted","Data":"3bc29af6e88c7b8b34419367f59f128bbade88fe51d2455bcbbe1a01fe1d8528"}
Oct 11 10:45:04.800496 master-0 kubenswrapper[4790]: I1011 10:45:04.800014 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r" podStartSLOduration=61.799985373 podStartE2EDuration="1m1.799985373s" podCreationTimestamp="2025-10-11 10:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:45:04.798249127 +0000 UTC m=+381.352709499" watchObservedRunningTime="2025-10-11 10:45:04.799985373 +0000 UTC m=+381.354445675"
Oct 11 10:45:04.986952 master-1 kubenswrapper[4771]: I1011 10:45:04.986907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerStarted","Data":"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79"}
Oct 11 10:45:05.238135 master-2 kubenswrapper[4776]: I1011 10:45:05.238060 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerStarted","Data":"77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6"}
Oct 11 10:45:05.265765 master-2 kubenswrapper[4776]: I1011 10:45:05.265665 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lq47" podStartSLOduration=2.889387984 podStartE2EDuration="4.26564512s" podCreationTimestamp="2025-10-11 10:45:01 +0000 UTC" firstStartedPulling="2025-10-11 10:45:03.215606653 +0000 UTC m=+1138.000033362" lastFinishedPulling="2025-10-11 10:45:04.591863779 +0000 UTC m=+1139.376290498" observedRunningTime="2025-10-11 10:45:05.26418257 +0000 UTC m=+1140.048609299" watchObservedRunningTime="2025-10-11 10:45:05.26564512 +0000 UTC m=+1140.050071829"
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: I1011 10:45:05.520766 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]etcd excluded: ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: I1011 10:45:05.520867 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:45:05.999089 master-1 kubenswrapper[4771]: I1011 10:45:05.999004 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerID="3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79" exitCode=0
Oct 11 10:45:05.999089 master-1 kubenswrapper[4771]: I1011 10:45:05.999086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerDied","Data":"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79"}
Oct 11 10:45:07.009898 master-1 kubenswrapper[4771]: I1011 10:45:07.009839 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerStarted","Data":"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe"}
Oct 11 10:45:07.122558 master-1 kubenswrapper[4771]: I1011 10:45:07.122416 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r8hdr" podStartSLOduration=3.571885533 podStartE2EDuration="6.122394094s" podCreationTimestamp="2025-10-11 10:45:01 +0000 UTC" firstStartedPulling="2025-10-11 10:45:03.963208122 +0000 UTC m=+1135.937434603" lastFinishedPulling="2025-10-11 10:45:06.513716693 +0000 UTC m=+1138.487943164" observedRunningTime="2025-10-11 10:45:07.11628978 +0000 UTC m=+1139.090516261" watchObservedRunningTime="2025-10-11 10:45:07.122394094 +0000 UTC m=+1139.096620545"
Oct 11 10:45:08.048921 master-0 kubenswrapper[4790]: I1011 10:45:08.048806 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"
Oct 11 10:45:08.048921 master-0 kubenswrapper[4790]: I1011 10:45:08.048928 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"
Oct 11 10:45:08.062396 master-0 kubenswrapper[4790]: I1011 10:45:08.062325 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: I1011 10:45:08.306888 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]autoregister-completion ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: I1011 10:45:08.306983 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:45:08.815581 master-0 kubenswrapper[4790]: I1011 10:45:08.815189 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r"
Oct 11 10:45:09.755418 master-1 kubenswrapper[4771]: I1011 10:45:09.755203 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:45:09.755418 master-1 kubenswrapper[4771]: I1011 10:45:09.755302 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:45:09.823035 master-1 kubenswrapper[4771]: I1011 10:45:09.822965 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:45:09.950843 master-1 kubenswrapper[4771]: I1011 10:45:09.950749 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:45:09.950843 master-1 kubenswrapper[4771]: I1011 10:45:09.950825 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:45:10.016981 master-1 kubenswrapper[4771]: I1011 10:45:10.016813 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:45:10.095647 master-1 kubenswrapper[4771]: I1011 10:45:10.095567 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btlwb"
Oct 11 10:45:10.101934 master-1 kubenswrapper[4771]: I1011 10:45:10.101851 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:45:10.515888 master-2 kubenswrapper[4776]: I1011 10:45:10.515777 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body=
Oct 11 10:45:10.515888 master-2 kubenswrapper[4776]: I1011 10:45:10.515868 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused"
Oct 11 10:45:12.158656 master-2 kubenswrapper[4776]: I1011 10:45:12.158585 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:12.159163 master-2 kubenswrapper[4776]: I1011 10:45:12.158844 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:12.184382 master-1 kubenswrapper[4771]: I1011 10:45:12.184297 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"]
Oct 11 10:45:12.185058 master-1 kubenswrapper[4771]: I1011 10:45:12.184638 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fn27x" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="registry-server" containerID="cri-o://3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70" gracePeriod=2
Oct 11 10:45:12.228712 master-2 kubenswrapper[4776]: I1011 10:45:12.228613 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:12.334409 master-1 kubenswrapper[4771]: I1011 10:45:12.329913 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:12.334409 master-1 kubenswrapper[4771]: I1011 10:45:12.330212 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:12.376364 master-2 kubenswrapper[4776]: I1011 10:45:12.376296 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lq47"
Oct 11 10:45:12.400709 master-1 kubenswrapper[4771]: I1011 10:45:12.400606 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"]
Oct 11 10:45:12.401286 master-1 kubenswrapper[4771]: I1011 10:45:12.401223 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btlwb" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="registry-server" containerID="cri-o://b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809" gracePeriod=2
Oct 11 10:45:12.413921 master-1 kubenswrapper[4771]: I1011 10:45:12.413797 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r8hdr"
Oct 11 10:45:12.609697 master-1 kubenswrapper[4771]: I1011 10:45:12.609610 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fn27x"
Oct 11 10:45:12.766915 master-1 kubenswrapper[4771]: I1011 10:45:12.766786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content\") pod \"dd28168d-b375-4a82-8784-bc38fad4cc07\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") "
Oct 11 10:45:12.766915 master-1 kubenswrapper[4771]: I1011 10:45:12.766864 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities\") pod \"dd28168d-b375-4a82-8784-bc38fad4cc07\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") "
Oct 11 10:45:12.766915 master-1 kubenswrapper[4771]: I1011 10:45:12.766895 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z8rx\" (UniqueName: \"kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx\") pod \"dd28168d-b375-4a82-8784-bc38fad4cc07\" (UID: \"dd28168d-b375-4a82-8784-bc38fad4cc07\") "
Oct 11 10:45:12.768456 master-1 kubenswrapper[4771]: I1011 10:45:12.768382 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities" (OuterVolumeSpecName: "utilities") pod "dd28168d-b375-4a82-8784-bc38fad4cc07" (UID: "dd28168d-b375-4a82-8784-bc38fad4cc07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:45:12.773981 master-1 kubenswrapper[4771]: I1011 10:45:12.773908 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx" (OuterVolumeSpecName: "kube-api-access-4z8rx") pod "dd28168d-b375-4a82-8784-bc38fad4cc07" (UID: "dd28168d-b375-4a82-8784-bc38fad4cc07"). InnerVolumeSpecName "kube-api-access-4z8rx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:45:12.812351 master-1 kubenswrapper[4771]: I1011 10:45:12.812287 4771 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btlwb" Oct 11 10:45:12.868682 master-1 kubenswrapper[4771]: I1011 10:45:12.868631 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:12.868682 master-1 kubenswrapper[4771]: I1011 10:45:12.868663 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z8rx\" (UniqueName: \"kubernetes.io/projected/dd28168d-b375-4a82-8784-bc38fad4cc07-kube-api-access-4z8rx\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:12.969347 master-1 kubenswrapper[4771]: I1011 10:45:12.969256 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content\") pod \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " Oct 11 10:45:12.969347 master-1 kubenswrapper[4771]: I1011 10:45:12.969345 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities\") pod \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " Oct 11 10:45:12.969809 master-1 kubenswrapper[4771]: I1011 10:45:12.969496 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjs9q\" (UniqueName: \"kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q\") pod \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\" (UID: \"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1\") " Oct 11 10:45:12.971274 master-1 kubenswrapper[4771]: I1011 10:45:12.971206 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities" 
(OuterVolumeSpecName: "utilities") pod "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" (UID: "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:12.972787 master-1 kubenswrapper[4771]: I1011 10:45:12.972698 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q" (OuterVolumeSpecName: "kube-api-access-jjs9q") pod "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" (UID: "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1"). InnerVolumeSpecName "kube-api-access-jjs9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:12.997015 master-1 kubenswrapper[4771]: I1011 10:45:12.996921 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" (UID: "aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:13.018265 master-1 kubenswrapper[4771]: I1011 10:45:13.018109 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd28168d-b375-4a82-8784-bc38fad4cc07" (UID: "dd28168d-b375-4a82-8784-bc38fad4cc07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:13.058975 master-1 kubenswrapper[4771]: I1011 10:45:13.058878 4771 generic.go:334] "Generic (PLEG): container finished" podID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerID="3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70" exitCode=0 Oct 11 10:45:13.059265 master-1 kubenswrapper[4771]: I1011 10:45:13.059010 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerDied","Data":"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70"} Oct 11 10:45:13.059265 master-1 kubenswrapper[4771]: I1011 10:45:13.059070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn27x" event={"ID":"dd28168d-b375-4a82-8784-bc38fad4cc07","Type":"ContainerDied","Data":"6a5c5b9ff534c3108ab033d2a189e452b9f1cc8c2fa78f601306eadfc2f6563e"} Oct 11 10:45:13.059265 master-1 kubenswrapper[4771]: I1011 10:45:13.059106 4771 scope.go:117] "RemoveContainer" containerID="3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70" Oct 11 10:45:13.059265 master-1 kubenswrapper[4771]: I1011 10:45:13.059153 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fn27x" Oct 11 10:45:13.064403 master-1 kubenswrapper[4771]: I1011 10:45:13.064313 4771 generic.go:334] "Generic (PLEG): container finished" podID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerID="b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809" exitCode=0 Oct 11 10:45:13.064533 master-1 kubenswrapper[4771]: I1011 10:45:13.064424 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerDied","Data":"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809"} Oct 11 10:45:13.064533 master-1 kubenswrapper[4771]: I1011 10:45:13.064486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btlwb" event={"ID":"aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1","Type":"ContainerDied","Data":"07f0178441f3d50314b9519115a4f3d7e321d0618d45e6c54bc9d3b3f3be9ea8"} Oct 11 10:45:13.064533 master-1 kubenswrapper[4771]: I1011 10:45:13.064510 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btlwb" Oct 11 10:45:13.070968 master-1 kubenswrapper[4771]: I1011 10:45:13.070877 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:13.070968 master-1 kubenswrapper[4771]: I1011 10:45:13.070939 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:13.070968 master-1 kubenswrapper[4771]: I1011 10:45:13.070952 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjs9q\" (UniqueName: \"kubernetes.io/projected/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1-kube-api-access-jjs9q\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:13.070968 master-1 kubenswrapper[4771]: I1011 10:45:13.070963 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd28168d-b375-4a82-8784-bc38fad4cc07-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:13.080780 master-1 kubenswrapper[4771]: I1011 10:45:13.080713 4771 scope.go:117] "RemoveContainer" containerID="63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8" Oct 11 10:45:13.105780 master-1 kubenswrapper[4771]: I1011 10:45:13.105704 4771 scope.go:117] "RemoveContainer" containerID="09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132" Oct 11 10:45:13.128213 master-1 kubenswrapper[4771]: I1011 10:45:13.128139 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"] Oct 11 10:45:13.131332 master-1 kubenswrapper[4771]: I1011 10:45:13.131266 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r8hdr" Oct 11 
10:45:13.131461 master-1 kubenswrapper[4771]: I1011 10:45:13.131370 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fn27x"] Oct 11 10:45:13.147890 master-1 kubenswrapper[4771]: I1011 10:45:13.147832 4771 scope.go:117] "RemoveContainer" containerID="3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70" Oct 11 10:45:13.150237 master-1 kubenswrapper[4771]: E1011 10:45:13.150171 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70\": container with ID starting with 3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70 not found: ID does not exist" containerID="3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70" Oct 11 10:45:13.150327 master-1 kubenswrapper[4771]: I1011 10:45:13.150249 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70"} err="failed to get container status \"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70\": rpc error: code = NotFound desc = could not find container \"3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70\": container with ID starting with 3f320377dafbebce17a26632f3fc75e231a9d287fb6de8d7ce1701d394699b70 not found: ID does not exist" Oct 11 10:45:13.150327 master-1 kubenswrapper[4771]: I1011 10:45:13.150289 4771 scope.go:117] "RemoveContainer" containerID="63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8" Oct 11 10:45:13.150445 master-1 kubenswrapper[4771]: I1011 10:45:13.150335 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"] Oct 11 10:45:13.151095 master-1 kubenswrapper[4771]: E1011 10:45:13.151048 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8\": container with ID starting with 63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8 not found: ID does not exist" containerID="63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8" Oct 11 10:45:13.151146 master-1 kubenswrapper[4771]: I1011 10:45:13.151101 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8"} err="failed to get container status \"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8\": rpc error: code = NotFound desc = could not find container \"63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8\": container with ID starting with 63deab1a90696cd378fba377a0ccafb64b7d1abb648ebd498f4ae24255a846e8 not found: ID does not exist" Oct 11 10:45:13.151146 master-1 kubenswrapper[4771]: I1011 10:45:13.151134 4771 scope.go:117] "RemoveContainer" containerID="09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132" Oct 11 10:45:13.151604 master-1 kubenswrapper[4771]: E1011 10:45:13.151570 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132\": container with ID starting with 09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132 not found: ID does not exist" containerID="09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132" Oct 11 10:45:13.151604 master-1 kubenswrapper[4771]: I1011 10:45:13.151597 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132"} err="failed to get container status \"09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132\": rpc error: code = NotFound desc = could not find container 
\"09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132\": container with ID starting with 09ebc1d9dd212e1b780cdd4c208a385d6c32b3be45f5a7fd3316afb9285c5132 not found: ID does not exist" Oct 11 10:45:13.151706 master-1 kubenswrapper[4771]: I1011 10:45:13.151612 4771 scope.go:117] "RemoveContainer" containerID="b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809" Oct 11 10:45:13.155719 master-1 kubenswrapper[4771]: I1011 10:45:13.155645 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btlwb"] Oct 11 10:45:13.169062 master-1 kubenswrapper[4771]: I1011 10:45:13.168997 4771 scope.go:117] "RemoveContainer" containerID="b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d" Oct 11 10:45:13.188984 master-1 kubenswrapper[4771]: I1011 10:45:13.188741 4771 scope.go:117] "RemoveContainer" containerID="563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245" Oct 11 10:45:13.210240 master-1 kubenswrapper[4771]: I1011 10:45:13.210204 4771 scope.go:117] "RemoveContainer" containerID="b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809" Oct 11 10:45:13.211088 master-1 kubenswrapper[4771]: E1011 10:45:13.211024 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809\": container with ID starting with b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809 not found: ID does not exist" containerID="b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809" Oct 11 10:45:13.211147 master-1 kubenswrapper[4771]: I1011 10:45:13.211112 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809"} err="failed to get container status \"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809\": rpc error: code = NotFound 
desc = could not find container \"b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809\": container with ID starting with b2ac95ce6b7361d31eadc268199ed3e400e0a7f86b2cc191c90b28cc26682809 not found: ID does not exist" Oct 11 10:45:13.211203 master-1 kubenswrapper[4771]: I1011 10:45:13.211175 4771 scope.go:117] "RemoveContainer" containerID="b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d" Oct 11 10:45:13.212316 master-1 kubenswrapper[4771]: E1011 10:45:13.212234 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d\": container with ID starting with b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d not found: ID does not exist" containerID="b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d" Oct 11 10:45:13.212446 master-1 kubenswrapper[4771]: I1011 10:45:13.212338 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d"} err="failed to get container status \"b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d\": rpc error: code = NotFound desc = could not find container \"b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d\": container with ID starting with b28d4d25a229e182db13cc45f68ab48fb16a28b5be04facbfd88980d7656d25d not found: ID does not exist" Oct 11 10:45:13.212506 master-1 kubenswrapper[4771]: I1011 10:45:13.212452 4771 scope.go:117] "RemoveContainer" containerID="563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245" Oct 11 10:45:13.213073 master-1 kubenswrapper[4771]: E1011 10:45:13.213021 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245\": container with ID starting with 
563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245 not found: ID does not exist" containerID="563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245" Oct 11 10:45:13.213161 master-1 kubenswrapper[4771]: I1011 10:45:13.213070 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245"} err="failed to get container status \"563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245\": rpc error: code = NotFound desc = could not find container \"563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245\": container with ID starting with 563b4b269fc1aea55e95d3112d36370a783bc78f1b8ea1724512328627aa0245 not found: ID does not exist" Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: I1011 10:45:13.310430 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:13.310497 
master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:13.312389 master-2 kubenswrapper[4776]: I1011 10:45:13.310495 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:13.455058 master-1 kubenswrapper[4771]: I1011 10:45:13.454875 4771 scope.go:117] "RemoveContainer" containerID="79e52bbf7393881dfbba04f7a9f71721266d98f1191a6c7be91f8bc0ce4e1139" Oct 11 10:45:13.474706 master-1 kubenswrapper[4771]: I1011 10:45:13.474648 4771 scope.go:117] "RemoveContainer" containerID="9e6a4086932c3b4c0590b1992411e46984c974a11450de3378bede5ca3045d02" Oct 11 
10:45:13.496973 master-1 kubenswrapper[4771]: I1011 10:45:13.496891 4771 scope.go:117] "RemoveContainer" containerID="068b46162b2804f4e661290cc4e58111faa3ee64a5ff733b8a30de9f4b7d070e" Oct 11 10:45:13.524690 master-1 kubenswrapper[4771]: I1011 10:45:13.524586 4771 scope.go:117] "RemoveContainer" containerID="913e0c188082961ad93b5f6a07d9eda57e62160ccbff129947e77948c758035a" Oct 11 10:45:14.445040 master-1 kubenswrapper[4771]: I1011 10:45:14.444957 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" path="/var/lib/kubelet/pods/aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1/volumes" Oct 11 10:45:14.446508 master-1 kubenswrapper[4771]: I1011 10:45:14.446466 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" path="/var/lib/kubelet/pods/dd28168d-b375-4a82-8784-bc38fad4cc07/volumes" Oct 11 10:45:15.148256 master-2 kubenswrapper[4776]: I1011 10:45:15.148144 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:15.149400 master-2 kubenswrapper[4776]: I1011 10:45:15.148503 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lq47" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" containerID="cri-o://77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" gracePeriod=2 Oct 11 10:45:15.329840 master-2 kubenswrapper[4776]: I1011 10:45:15.329751 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerID="77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" exitCode=0 Oct 11 10:45:15.329840 master-2 kubenswrapper[4776]: I1011 10:45:15.329820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" 
event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6"} Oct 11 10:45:15.515600 master-2 kubenswrapper[4776]: I1011 10:45:15.515474 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:15.515957 master-2 kubenswrapper[4776]: I1011 10:45:15.515580 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:15.662352 master-2 kubenswrapper[4776]: I1011 10:45:15.662299 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:15.714210 master-2 kubenswrapper[4776]: I1011 10:45:15.714043 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.714484 master-2 kubenswrapper[4776]: I1011 10:45:15.714229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.714484 master-2 kubenswrapper[4776]: I1011 10:45:15.714266 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.716537 master-2 kubenswrapper[4776]: I1011 10:45:15.716472 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities" (OuterVolumeSpecName: "utilities") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:15.735268 master-2 kubenswrapper[4776]: I1011 10:45:15.734838 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj" (OuterVolumeSpecName: "kube-api-access-jxmvj") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). 
InnerVolumeSpecName "kube-api-access-jxmvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:15.764592 master-2 kubenswrapper[4776]: I1011 10:45:15.764510 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815782 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815825 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815836 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:16.342941 master-2 kubenswrapper[4776]: I1011 10:45:16.342871 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c"} Oct 11 10:45:16.342941 master-2 kubenswrapper[4776]: I1011 10:45:16.342914 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:16.343523 master-2 kubenswrapper[4776]: I1011 10:45:16.342955 4776 scope.go:117] "RemoveContainer" containerID="77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" Oct 11 10:45:16.365444 master-2 kubenswrapper[4776]: I1011 10:45:16.365413 4776 scope.go:117] "RemoveContainer" containerID="d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22" Oct 11 10:45:16.380056 master-2 kubenswrapper[4776]: I1011 10:45:16.379993 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:16.386830 master-2 kubenswrapper[4776]: I1011 10:45:16.386763 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:16.395917 master-2 kubenswrapper[4776]: I1011 10:45:16.395732 4776 scope.go:117] "RemoveContainer" containerID="97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e" Oct 11 10:45:17.185090 master-1 kubenswrapper[4771]: I1011 10:45:17.184958 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r8hdr"] Oct 11 10:45:17.187306 master-1 kubenswrapper[4771]: I1011 10:45:17.185495 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r8hdr" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="registry-server" containerID="cri-o://bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe" gracePeriod=2 Oct 11 10:45:17.753046 master-1 kubenswrapper[4771]: I1011 10:45:17.752976 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r8hdr" Oct 11 10:45:17.945414 master-1 kubenswrapper[4771]: I1011 10:45:17.945234 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plx55\" (UniqueName: \"kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55\") pod \"8e964e77-4315-44b2-a34f-d0e2249e9a72\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " Oct 11 10:45:17.945414 master-1 kubenswrapper[4771]: I1011 10:45:17.945332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content\") pod \"8e964e77-4315-44b2-a34f-d0e2249e9a72\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " Oct 11 10:45:17.945655 master-1 kubenswrapper[4771]: I1011 10:45:17.945463 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities\") pod \"8e964e77-4315-44b2-a34f-d0e2249e9a72\" (UID: \"8e964e77-4315-44b2-a34f-d0e2249e9a72\") " Oct 11 10:45:17.947410 master-1 kubenswrapper[4771]: I1011 10:45:17.947340 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities" (OuterVolumeSpecName: "utilities") pod "8e964e77-4315-44b2-a34f-d0e2249e9a72" (UID: "8e964e77-4315-44b2-a34f-d0e2249e9a72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:17.949842 master-1 kubenswrapper[4771]: I1011 10:45:17.949804 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55" (OuterVolumeSpecName: "kube-api-access-plx55") pod "8e964e77-4315-44b2-a34f-d0e2249e9a72" (UID: "8e964e77-4315-44b2-a34f-d0e2249e9a72"). 
InnerVolumeSpecName "kube-api-access-plx55". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:18.017069 master-1 kubenswrapper[4771]: I1011 10:45:18.016988 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e964e77-4315-44b2-a34f-d0e2249e9a72" (UID: "8e964e77-4315-44b2-a34f-d0e2249e9a72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:18.047329 master-1 kubenswrapper[4771]: I1011 10:45:18.047263 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plx55\" (UniqueName: \"kubernetes.io/projected/8e964e77-4315-44b2-a34f-d0e2249e9a72-kube-api-access-plx55\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:18.047329 master-1 kubenswrapper[4771]: I1011 10:45:18.047321 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:18.047439 master-1 kubenswrapper[4771]: I1011 10:45:18.047341 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e964e77-4315-44b2-a34f-d0e2249e9a72-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:45:18.070592 master-2 kubenswrapper[4776]: I1011 10:45:18.070516 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" path="/var/lib/kubelet/pods/89013c68-6873-4f47-bd39-d7eae57cd89b/volumes" Oct 11 10:45:18.112957 master-1 kubenswrapper[4771]: I1011 10:45:18.112859 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerID="bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe" exitCode=0 Oct 11 10:45:18.112957 master-1 
kubenswrapper[4771]: I1011 10:45:18.112915 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerDied","Data":"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe"} Oct 11 10:45:18.112957 master-1 kubenswrapper[4771]: I1011 10:45:18.112944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8hdr" event={"ID":"8e964e77-4315-44b2-a34f-d0e2249e9a72","Type":"ContainerDied","Data":"7305b66b7aeba582a9f93dc93062bbbc5bb8eccd416f40fbbfbc9aebdb769b49"} Oct 11 10:45:18.112957 master-1 kubenswrapper[4771]: I1011 10:45:18.112964 4771 scope.go:117] "RemoveContainer" containerID="bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe" Oct 11 10:45:18.113594 master-1 kubenswrapper[4771]: I1011 10:45:18.113083 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8hdr" Oct 11 10:45:18.150700 master-1 kubenswrapper[4771]: I1011 10:45:18.150629 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r8hdr"] Oct 11 10:45:18.154101 master-1 kubenswrapper[4771]: I1011 10:45:18.154044 4771 scope.go:117] "RemoveContainer" containerID="3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79" Oct 11 10:45:18.155056 master-1 kubenswrapper[4771]: I1011 10:45:18.155015 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r8hdr"] Oct 11 10:45:18.173944 master-1 kubenswrapper[4771]: I1011 10:45:18.173862 4771 scope.go:117] "RemoveContainer" containerID="77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1" Oct 11 10:45:18.202193 master-1 kubenswrapper[4771]: I1011 10:45:18.202148 4771 scope.go:117] "RemoveContainer" containerID="bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe" Oct 11 
10:45:18.202642 master-1 kubenswrapper[4771]: E1011 10:45:18.202594 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe\": container with ID starting with bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe not found: ID does not exist" containerID="bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe" Oct 11 10:45:18.202704 master-1 kubenswrapper[4771]: I1011 10:45:18.202656 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe"} err="failed to get container status \"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe\": rpc error: code = NotFound desc = could not find container \"bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe\": container with ID starting with bc926f519f674a84e8c91bddae5383960590b03479e2c4a4ef1e2497b3e6fbbe not found: ID does not exist" Oct 11 10:45:18.202747 master-1 kubenswrapper[4771]: I1011 10:45:18.202712 4771 scope.go:117] "RemoveContainer" containerID="3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79" Oct 11 10:45:18.203589 master-1 kubenswrapper[4771]: E1011 10:45:18.203518 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79\": container with ID starting with 3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79 not found: ID does not exist" containerID="3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79" Oct 11 10:45:18.203673 master-1 kubenswrapper[4771]: I1011 10:45:18.203608 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79"} err="failed 
to get container status \"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79\": rpc error: code = NotFound desc = could not find container \"3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79\": container with ID starting with 3e6d711a5742f6f82fe5987f3befce4cbd02d09a7cb95428ec3ad9fce91b0d79 not found: ID does not exist" Oct 11 10:45:18.203715 master-1 kubenswrapper[4771]: I1011 10:45:18.203679 4771 scope.go:117] "RemoveContainer" containerID="77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1" Oct 11 10:45:18.204328 master-1 kubenswrapper[4771]: E1011 10:45:18.204283 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1\": container with ID starting with 77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1 not found: ID does not exist" containerID="77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1" Oct 11 10:45:18.204387 master-1 kubenswrapper[4771]: I1011 10:45:18.204331 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1"} err="failed to get container status \"77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1\": rpc error: code = NotFound desc = could not find container \"77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1\": container with ID starting with 77010f46cea21ca3cf2adec7d3a4536cce32a5cb226d001a2497b507dc532ac1 not found: ID does not exist" Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: I1011 10:45:18.307556 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]log ok Oct 11 
10:45:18.307631 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok 
Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [-]shutdown 
failed: reason withheld Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: I1011 10:45:18.307632 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:18.447231 master-1 kubenswrapper[4771]: I1011 10:45:18.447144 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" path="/var/lib/kubelet/pods/8e964e77-4315-44b2-a34f-d0e2249e9a72/volumes" Oct 11 10:45:20.515907 master-2 kubenswrapper[4776]: I1011 10:45:20.515839 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:20.516501 master-2 kubenswrapper[4776]: I1011 10:45:20.515911 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: I1011 10:45:23.308582 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:23.308669 
master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:23.308669 master-2 
kubenswrapper[4776]: readyz check failed Oct 11 10:45:23.310577 master-2 kubenswrapper[4776]: I1011 10:45:23.308698 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:25.516712 master-2 kubenswrapper[4776]: I1011 10:45:25.516580 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:25.517419 master-2 kubenswrapper[4776]: I1011 10:45:25.516770 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: I1011 10:45:28.310319 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: 
[+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: I1011 10:45:28.310382 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Oct 11 10:45:30.516724 master-2 kubenswrapper[4776]: I1011 10:45:30.516580 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:30.517759 master-2 kubenswrapper[4776]: I1011 10:45:30.516739 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:32.156343 master-2 kubenswrapper[4776]: I1011 10:45:32.156284 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:45:32.156883 master-2 kubenswrapper[4776]: I1011 10:45:32.156649 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" containerID="cri-o://cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687" gracePeriod=30 Oct 11 10:45:32.156883 master-2 kubenswrapper[4776]: I1011 10:45:32.156727 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" containerID="cri-o://8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156742 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" 
containerID="cri-o://56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156875 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" containerID="cri-o://8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156797 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" containerID="cri-o://431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886" gracePeriod=30 Oct 11 10:45:32.161685 master-2 kubenswrapper[4776]: I1011 10:45:32.161614 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:45:32.161995 master-2 kubenswrapper[4776]: E1011 10:45:32.161958 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-content" Oct 11 10:45:32.161995 master-2 kubenswrapper[4776]: I1011 10:45:32.161988 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-content" Oct 11 10:45:32.162093 master-2 kubenswrapper[4776]: E1011 10:45:32.162008 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162773 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162839 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" 
containerName="collect-profiles" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162853 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerName="collect-profiles" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162873 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162881 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162893 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-utilities" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162901 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-utilities" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162913 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-resources-copy" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162921 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-resources-copy" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162936 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-ensure-env-vars" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162944 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-ensure-env-vars" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162960 4776 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162968 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162979 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162994 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163011 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163019 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163030 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="setup" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163038 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="setup" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163056 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163064 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163378 
4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163406 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163421 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163441 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerName="collect-profiles" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163452 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163466 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163483 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.277627 master-2 kubenswrapper[4776]: I1011 10:45:32.277558 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277637 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277663 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277744 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277833 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379331 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: 
\"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379443 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379523 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379464 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379534 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379577 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379641 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379721 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " 
pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.458252 master-2 kubenswrapper[4776]: I1011 10:45:32.458047 4776 generic.go:334] "Generic (PLEG): container finished" podID="56e683e1-6c74-4998-ac94-05f58a65965f" containerID="903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48" exitCode=0 Oct 11 10:45:32.458252 master-2 kubenswrapper[4776]: I1011 10:45:32.458170 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerDied","Data":"903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48"} Oct 11 10:45:32.466964 master-2 kubenswrapper[4776]: I1011 10:45:32.466878 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log" Oct 11 10:45:32.469420 master-2 kubenswrapper[4776]: I1011 10:45:32.469253 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log" Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472288 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd" exitCode=2 Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472342 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886" exitCode=0 Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472375 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d" exitCode=2 Oct 11 10:45:32.501229 master-2 kubenswrapper[4776]: I1011 10:45:32.501146 4776 status_manager.go:861] "Pod was deleted and then recreated, 
skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79" Oct 11 10:45:33.031578 master-2 kubenswrapper[4776]: I1011 10:45:33.031447 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:33.031578 master-2 kubenswrapper[4776]: I1011 10:45:33.031542 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: I1011 10:45:33.311014 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller 
ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: I1011 10:45:33.311100 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:33.826809 master-2 kubenswrapper[4776]: I1011 10:45:33.826726 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:33.902920 master-2 kubenswrapper[4776]: I1011 10:45:33.902851 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903140 master-2 kubenswrapper[4776]: I1011 10:45:33.902935 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903140 master-2 kubenswrapper[4776]: I1011 10:45:33.902989 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903402 master-2 kubenswrapper[4776]: I1011 10:45:33.903364 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock" (OuterVolumeSpecName: "var-lock") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:45:33.903464 master-2 kubenswrapper[4776]: I1011 10:45:33.903394 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:45:33.906533 master-2 kubenswrapper[4776]: I1011 10:45:33.906476 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005169 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005214 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005227 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.488959 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerDied","Data":"a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88"} Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.489000 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88" Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.489020 4776 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:35.515661 master-2 kubenswrapper[4776]: I1011 10:45:35.515571 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:35.515661 master-2 kubenswrapper[4776]: I1011 10:45:35.515630 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:38.031564 master-2 kubenswrapper[4776]: I1011 10:45:38.031511 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:38.031564 master-2 kubenswrapper[4776]: I1011 10:45:38.031565 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: I1011 10:45:38.307923 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]log ok Oct 
11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller 
ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [-]shutdown 
failed: reason withheld Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:38.309764 master-2 kubenswrapper[4776]: I1011 10:45:38.309728 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:40.515936 master-2 kubenswrapper[4776]: I1011 10:45:40.515874 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:40.516507 master-2 kubenswrapper[4776]: I1011 10:45:40.515935 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:43.031553 master-2 kubenswrapper[4776]: I1011 10:45:43.031469 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:43.031553 master-2 kubenswrapper[4776]: I1011 10:45:43.031549 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:43.032375 master-2 
kubenswrapper[4776]: I1011 10:45:43.031638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:45:43.033639 master-2 kubenswrapper[4776]: I1011 10:45:43.033569 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:43.033778 master-2 kubenswrapper[4776]: I1011 10:45:43.033724 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: I1011 10:45:43.306580 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]autoregister-completion ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: I1011 10:45:43.306651 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:45:45.515529 master-2 kubenswrapper[4776]: I1011 10:45:45.515449 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body=
Oct 11 10:45:45.515529 master-2 kubenswrapper[4776]: I1011 10:45:45.515518 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused"
Oct 11 10:45:48.032171 master-2 kubenswrapper[4776]: I1011 10:45:48.032096 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:45:48.032171 master-2 kubenswrapper[4776]: I1011 10:45:48.032168 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: I1011 10:45:48.307049 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]autoregister-completion ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: I1011 10:45:48.307137 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed
with statuscode: 500"
Oct 11 10:45:50.515816 master-2 kubenswrapper[4776]: I1011 10:45:50.515753 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body=
Oct 11 10:45:50.516283 master-2 kubenswrapper[4776]: I1011 10:45:50.515810 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused"
Oct 11 10:45:53.031541 master-2 kubenswrapper[4776]: I1011 10:45:53.031410 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:45:53.031541 master-2 kubenswrapper[4776]: I1011 10:45:53.031491 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: I1011 10:45:53.308535 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]autoregister-completion ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: I1011 10:45:53.308643 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:45:55.519772 master-2 kubenswrapper[4776]: I1011 10:45:55.516181 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body=
Oct 11 10:45:55.519772 master-2 kubenswrapper[4776]: I1011 10:45:55.516291 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused"
Oct 11 10:45:58.031930 master-2 kubenswrapper[4776]: I1011 10:45:58.031855 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:45:58.031930 master-2 kubenswrapper[4776]: I1011 10:45:58.031924 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: I1011 10:45:58.310439
4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]log ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]informer-sync ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]autoregister-completion ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: readyz check failed
Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: I1011 10:45:58.310521 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:46:00.516179 master-2 kubenswrapper[4776]: I1011 10:46:00.516100 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body=
Oct 11 10:46:00.516725 master-2 kubenswrapper[4776]: I1011 10:46:00.516190 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused"
Oct 11 10:46:02.698240 master-2 kubenswrapper[4776]: I1011 10:46:02.698185 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log"
Oct 11 10:46:02.699881 master-2 kubenswrapper[4776]: I1011 10:46:02.699835 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log"
Oct 11 10:46:02.700560 master-2 kubenswrapper[4776]: I1011 10:46:02.700519 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log"
Oct 11 10:46:02.701155 master-2 kubenswrapper[4776]: I1011 10:46:02.701125 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log"
Oct 11 10:46:02.702803 master-2 kubenswrapper[4776]: I1011 10:46:02.702629 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1" exitCode=137
Oct 11 10:46:02.702803 master-2 kubenswrapper[4776]: I1011 10:46:02.702663 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687" exitCode=137
Oct 11 10:46:02.734784 master-2 kubenswrapper[4776]: I1011 10:46:02.734657 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log"
Oct 11 10:46:02.736281 master-2 kubenswrapper[4776]: I1011 10:46:02.736231 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log"
Oct 11 10:46:02.736994 master-2 kubenswrapper[4776]: I1011 10:46:02.736948 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log"
Oct 11 10:46:02.737593 master-2 kubenswrapper[4776]: I1011 10:46:02.737537 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log"
Oct 11 10:46:02.739232 master-2 kubenswrapper[4776]: I1011
10:46:02.739181 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2"
Oct 11 10:46:02.755615 master-2 kubenswrapper[4776]: I1011 10:46:02.755541 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79"
Oct 11 10:46:02.885246 master-2 kubenswrapper[4776]: I1011 10:46:02.885136 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885235 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885324 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885365 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885404 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885493 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885399 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885462 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885664 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir" (OuterVolumeSpecName: "log-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885737 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") "
Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885829 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir" (OuterVolumeSpecName: "data-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:46:02.886375 master-2 kubenswrapper[4776]: I1011 10:46:02.886331 4776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:02.886427 master-2 kubenswrapper[4776]: I1011 10:46:02.886401 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:02.886427 master-2 kubenswrapper[4776]: I1011 10:46:02.886424 4776 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886444 4776 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886496 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886513 4776 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:03.031138 master-2 kubenswrapper[4776]: I1011 10:46:03.030960 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:46:03.031138 master-2 kubenswrapper[4776]: I1011 10:46:03.031040 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:46:03.303484 master-2 kubenswrapper[4776]: I1011 10:46:03.303287 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body=
Oct 11 10:46:03.303484 master-2 kubenswrapper[4776]: I1011 10:46:03.303385 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused"
Oct 11 10:46:03.710858 master-2 kubenswrapper[4776]: I1011 10:46:03.710741 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log"
Oct 11 10:46:03.713457 master-2 kubenswrapper[4776]: I1011 10:46:03.713405 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log"
Oct 11 10:46:03.714262 master-2 kubenswrapper[4776]: I1011 10:46:03.714220 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log"
Oct 11 10:46:03.714845 master-2 kubenswrapper[4776]: I1011 10:46:03.714805 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log"
Oct 11 10:46:03.716184 master-2 kubenswrapper[4776]: I1011 10:46:03.716136 4776 scope.go:117] "RemoveContainer" containerID="8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd"
Oct 11 10:46:03.716422 master-2 kubenswrapper[4776]: I1011 10:46:03.716217 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2"
Oct 11 10:46:03.722770 master-2 kubenswrapper[4776]: I1011 10:46:03.722702 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79"
Oct 11 10:46:03.739414 master-2 kubenswrapper[4776]: I1011 10:46:03.739355 4776 scope.go:117] "RemoveContainer" containerID="431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886"
Oct 11 10:46:03.748009 master-2 kubenswrapper[4776]: I1011 10:46:03.747943 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79"
Oct 11 10:46:03.757090 master-2 kubenswrapper[4776]: I1011 10:46:03.757053 4776 scope.go:117] "RemoveContainer" containerID="56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d"
Oct 11 10:46:03.776663 master-2 kubenswrapper[4776]: I1011 10:46:03.776617 4776 scope.go:117] "RemoveContainer" containerID="8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1"
Oct 11 10:46:03.800965 master-2 kubenswrapper[4776]: I1011 10:46:03.800889 4776 scope.go:117] "RemoveContainer" containerID="cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687"
Oct 11 10:46:03.822285 master-2 kubenswrapper[4776]: I1011 10:46:03.822226 4776 scope.go:117] "RemoveContainer" containerID="7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb"
Oct 11 10:46:03.847794 master-2 kubenswrapper[4776]: I1011 10:46:03.847762 4776 scope.go:117] "RemoveContainer" containerID="8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e"
Oct 11 10:46:03.867510 master-2 kubenswrapper[4776]: I1011 10:46:03.867446 4776 scope.go:117] "RemoveContainer" containerID="8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e"
Oct 11 10:46:04.072071 master-2 kubenswrapper[4776]: I1011 10:46:04.071849 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4a583adfee975da84510940117e71a" path="/var/lib/kubelet/pods/2c4a583adfee975da84510940117e71a/volumes"
Oct 11 10:46:04.713811 master-2 kubenswrapper[4776]: I1011 10:46:04.712882 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log"
Oct 11 10:46:04.713811 master-2 kubenswrapper[4776]: I1011 10:46:04.713492 4776 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:04.721372 master-2 kubenswrapper[4776]: I1011 10:46:04.721277 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.723321 master-2 kubenswrapper[4776]: I1011 10:46:04.723282 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723412 master-2 kubenswrapper[4776]: I1011 10:46:04.723351 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723471 master-2 kubenswrapper[4776]: I1011 10:46:04.723449 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723943 master-2 kubenswrapper[4776]: I1011 10:46:04.723883 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.723994 master-2 kubenswrapper[4776]: I1011 10:46:04.723898 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.723994 master-2 kubenswrapper[4776]: I1011 10:46:04.723943 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.730050 master-2 kubenswrapper[4776]: I1011 10:46:04.730004 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log" Oct 11 10:46:04.730639 master-2 kubenswrapper[4776]: I1011 10:46:04.730603 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" exitCode=0 Oct 11 10:46:04.730708 master-2 kubenswrapper[4776]: I1011 10:46:04.730701 4776 scope.go:117] "RemoveContainer" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.730922 master-2 kubenswrapper[4776]: I1011 10:46:04.730883 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:04.736953 master-2 kubenswrapper[4776]: I1011 10:46:04.736910 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.743055 master-2 kubenswrapper[4776]: I1011 10:46:04.743026 4776 scope.go:117] "RemoveContainer" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.748458 master-2 kubenswrapper[4776]: I1011 10:46:04.748436 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.754567 master-2 kubenswrapper[4776]: I1011 10:46:04.754551 4776 scope.go:117] "RemoveContainer" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.767390 master-2 kubenswrapper[4776]: I1011 10:46:04.767342 4776 scope.go:117] "RemoveContainer" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.778739 master-2 kubenswrapper[4776]: I1011 10:46:04.778723 4776 scope.go:117] "RemoveContainer" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.792649 master-2 kubenswrapper[4776]: I1011 10:46:04.792598 4776 scope.go:117] "RemoveContainer" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825246 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825272 4776 
reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825281 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.839041 master-2 kubenswrapper[4776]: I1011 10:46:04.838994 4776 scope.go:117] "RemoveContainer" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.839550 master-2 kubenswrapper[4776]: E1011 10:46:04.839486 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": container with ID starting with b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57 not found: ID does not exist" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.839550 master-2 kubenswrapper[4776]: I1011 10:46:04.839530 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57"} err="failed to get container status \"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": rpc error: code = NotFound desc = could not find container \"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": container with ID starting with b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57 not found: ID does not exist" Oct 11 10:46:04.839704 master-2 kubenswrapper[4776]: I1011 10:46:04.839555 4776 scope.go:117] "RemoveContainer" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.839994 master-2 kubenswrapper[4776]: E1011 
10:46:04.839948 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": container with ID starting with 784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6 not found: ID does not exist" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.840047 master-2 kubenswrapper[4776]: I1011 10:46:04.839992 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6"} err="failed to get container status \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": rpc error: code = NotFound desc = could not find container \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": container with ID starting with 784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6 not found: ID does not exist" Oct 11 10:46:04.840047 master-2 kubenswrapper[4776]: I1011 10:46:04.840028 4776 scope.go:117] "RemoveContainer" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.840450 master-2 kubenswrapper[4776]: E1011 10:46:04.840416 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": container with ID starting with f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9 not found: ID does not exist" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.840450 master-2 kubenswrapper[4776]: I1011 10:46:04.840442 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9"} err="failed to get container status 
\"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": rpc error: code = NotFound desc = could not find container \"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": container with ID starting with f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9 not found: ID does not exist" Oct 11 10:46:04.840556 master-2 kubenswrapper[4776]: I1011 10:46:04.840457 4776 scope.go:117] "RemoveContainer" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.841300 master-2 kubenswrapper[4776]: E1011 10:46:04.841269 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": container with ID starting with e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510 not found: ID does not exist" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.841300 master-2 kubenswrapper[4776]: I1011 10:46:04.841289 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510"} err="failed to get container status \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": rpc error: code = NotFound desc = could not find container \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": container with ID starting with e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510 not found: ID does not exist" Oct 11 10:46:04.841416 master-2 kubenswrapper[4776]: I1011 10:46:04.841303 4776 scope.go:117] "RemoveContainer" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.841699 master-2 kubenswrapper[4776]: E1011 10:46:04.841628 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": container with ID starting with ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da not found: ID does not exist" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.841806 master-2 kubenswrapper[4776]: I1011 10:46:04.841698 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da"} err="failed to get container status \"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": rpc error: code = NotFound desc = could not find container \"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": container with ID starting with ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da not found: ID does not exist" Oct 11 10:46:04.841806 master-2 kubenswrapper[4776]: I1011 10:46:04.841738 4776 scope.go:117] "RemoveContainer" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.842121 master-2 kubenswrapper[4776]: E1011 10:46:04.842092 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": container with ID starting with f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1 not found: ID does not exist" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.842172 master-2 kubenswrapper[4776]: I1011 10:46:04.842118 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1"} err="failed to get container status \"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": rpc error: code = NotFound desc = could not find container 
\"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": container with ID starting with f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1 not found: ID does not exist" Oct 11 10:46:05.516131 master-2 kubenswrapper[4776]: I1011 10:46:05.516047 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:46:05.516460 master-2 kubenswrapper[4776]: I1011 10:46:05.516169 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:46:06.073961 master-2 kubenswrapper[4776]: I1011 10:46:06.073846 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9041570beb5002e8da158e70e12f0c16" path="/var/lib/kubelet/pods/9041570beb5002e8da158e70e12f0c16/volumes" Oct 11 10:46:08.031925 master-2 kubenswrapper[4776]: I1011 10:46:08.031822 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:46:08.031925 master-2 kubenswrapper[4776]: I1011 10:46:08.031914 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:46:08.058670 
master-2 kubenswrapper[4776]: I1011 10:46:08.058586 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body= Oct 11 10:46:08.058888 master-2 kubenswrapper[4776]: I1011 10:46:08.058717 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" Oct 11 10:46:08.303609 master-2 kubenswrapper[4776]: I1011 10:46:08.303407 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body= Oct 11 10:46:08.303609 master-2 kubenswrapper[4776]: I1011 10:46:08.303494 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" Oct 11 10:46:09.058221 master-2 kubenswrapper[4776]: I1011 10:46:09.058144 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.072389 master-2 kubenswrapper[4776]: I1011 10:46:09.072344 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="7ca61ebd-b6db-437f-b6d0-b91b94a84371" Oct 11 10:46:09.072389 master-2 kubenswrapper[4776]: I1011 10:46:09.072378 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="7ca61ebd-b6db-437f-b6d0-b91b94a84371" Oct 11 10:46:09.102986 master-2 kubenswrapper[4776]: I1011 10:46:09.102916 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.103272 master-2 kubenswrapper[4776]: I1011 10:46:09.103170 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.120229 master-2 kubenswrapper[4776]: I1011 10:46:09.120115 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.136110 master-2 kubenswrapper[4776]: I1011 10:46:09.134924 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.136110 master-2 kubenswrapper[4776]: I1011 10:46:09.135288 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.165551 master-2 kubenswrapper[4776]: W1011 10:46:09.165482 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod978811670a28b21932e323b181b31435.slice/crio-722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168 WatchSource:0}: Error finding container 722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168: Status 404 returned error can't find the container with id 722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168 Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771825 4776 generic.go:334] "Generic (PLEG): container finished" podID="978811670a28b21932e323b181b31435" containerID="c33f9dbc69178f562a6fbf097b9145cc3bcae184a07ac83d7567b22faaebbd11" exitCode=0 Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771866 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerDied","Data":"c33f9dbc69178f562a6fbf097b9145cc3bcae184a07ac83d7567b22faaebbd11"} Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771896 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168"} Oct 11 10:46:09.773873 master-2 kubenswrapper[4776]: I1011 10:46:09.773843 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776" 
exitCode=0 Oct 11 10:46:09.773873 master-2 kubenswrapper[4776]: I1011 10:46:09.773868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776"} Oct 11 10:46:10.060658 master-2 kubenswrapper[4776]: I1011 10:46:10.060072 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.080651 master-2 kubenswrapper[4776]: I1011 10:46:10.080617 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-2" podUID="681786c2-8b94-4c7d-a99f-804e1f9f044f" Oct 11 10:46:10.080651 master-2 kubenswrapper[4776]: I1011 10:46:10.080649 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-2" podUID="681786c2-8b94-4c7d-a99f-804e1f9f044f" Oct 11 10:46:10.088006 master-2 kubenswrapper[4776]: I1011 10:46:10.087908 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:46:10.104848 master-2 kubenswrapper[4776]: I1011 10:46:10.104790 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.117249 master-2 kubenswrapper[4776]: I1011 10:46:10.117179 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.123618 master-2 kubenswrapper[4776]: I1011 10:46:10.123537 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.163038 master-2 kubenswrapper[4776]: I1011 10:46:10.162980 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.192522 master-2 kubenswrapper[4776]: I1011 10:46:10.192449 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.207598 master-2 kubenswrapper[4776]: I1011 10:46:10.207543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207598 master-2 kubenswrapper[4776]: I1011 10:46:10.207593 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207626 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207704 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207738 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" 
(UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208164 master-2 kubenswrapper[4776]: I1011 10:46:10.208119 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208232 master-2 kubenswrapper[4776]: I1011 10:46:10.208178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208232 master-2 kubenswrapper[4776]: I1011 10:46:10.208208 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208238 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208262 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208285 4776 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208769 master-2 kubenswrapper[4776]: I1011 10:46:10.208607 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.208893 master-2 kubenswrapper[4776]: I1011 10:46:10.208845 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit" (OuterVolumeSpecName: "audit") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.208893 master-2 kubenswrapper[4776]: I1011 10:46:10.208875 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:10.209207 master-2 kubenswrapper[4776]: I1011 10:46:10.209176 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.209311 master-2 kubenswrapper[4776]: I1011 10:46:10.209217 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:10.209311 master-2 kubenswrapper[4776]: I1011 10:46:10.209252 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.211239 master-2 kubenswrapper[4776]: I1011 10:46:10.211184 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:46:10.211357 master-2 kubenswrapper[4776]: I1011 10:46:10.211316 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf" (OuterVolumeSpecName: "kube-api-access-vljtf") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "kube-api-access-vljtf". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:46:10.211501 master-2 kubenswrapper[4776]: I1011 10:46:10.211445 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:46:10.212631 master-2 kubenswrapper[4776]: I1011 10:46:10.212587 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:46:10.212848 master-2 kubenswrapper[4776]: I1011 10:46:10.212774 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config" (OuterVolumeSpecName: "config") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309176 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309236 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309250 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309258 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309268 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309277 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309285 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") on
node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309293 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309301 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309311 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309318 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") on node \"master-2\" DevicePath \"\""
Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784378 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831"}
Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784450 4776 scope.go:117] "RemoveContainer" containerID="e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9"
Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784644 4776 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv"
Oct 11 10:46:10.791685 master-2 kubenswrapper[4776]: I1011 10:46:10.791602 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"e6502be51ddefdca06d3a091a1356ff59f90e648646c60ca18073c7cc29dd884"}
Oct 11 10:46:10.791867 master-2 kubenswrapper[4776]: I1011 10:46:10.791694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"fab18106c8341976767bd5af4dbc1f1bac3d07bab245177bb31d7f4058237efa"}
Oct 11 10:46:10.791867 master-2 kubenswrapper[4776]: I1011 10:46:10.791712 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"53abb969cf9ec7a6a0b3309d898dd34b335fe0c42ff8b613f60abc04e216e34c"}
Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793293 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="473347917efdca54c9ba1fc2ce7b95dad4dd94ca6c0f5821dca541936ee87b10" exitCode=0
Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793324 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"473347917efdca54c9ba1fc2ce7b95dad4dd94ca6c0f5821dca541936ee87b10"}
Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793341 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"2f5dc325f87e3ed7b3a94bbd3cc2905c4a69e038d0597785cbb2ce2fdb2e9f37"}
Oct 11 10:46:10.809232 master-2
kubenswrapper[4776]: I1011 10:46:10.809198 4776 scope.go:117] "RemoveContainer" containerID="f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776"
Oct 11 10:46:10.837911 master-2 kubenswrapper[4776]: I1011 10:46:10.834071 4776 scope.go:117] "RemoveContainer" containerID="eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885"
Oct 11 10:46:10.943684 master-2 kubenswrapper[4776]: I1011 10:46:10.943615 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"]
Oct 11 10:46:10.967947 master-2 kubenswrapper[4776]: I1011 10:46:10.967869 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"]
Oct 11 10:46:11.816885 master-2 kubenswrapper[4776]: I1011 10:46:11.816811 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"803d9f33ede284510ac06ea69345c389f6ba883c3072fbc05d47b05da5f8d05f"}
Oct 11 10:46:11.817435 master-2 kubenswrapper[4776]: I1011 10:46:11.816891 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"81bd40984e0ececaa997a36721a48f361a68869ec7f5f8ab9db73abd3b783282"}
Oct 11 10:46:11.817435 master-2 kubenswrapper[4776]: I1011 10:46:11.817132 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2"
Oct 11 10:46:11.821767 master-2 kubenswrapper[4776]: I1011 10:46:11.821658 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="e75f231137713f045f1201a61014d9ccf9db84df88d66c8356e35c660a504624" exitCode=0
Oct 11 10:46:11.821843 master-2 kubenswrapper[4776]: I1011 10:46:11.821714 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"e75f231137713f045f1201a61014d9ccf9db84df88d66c8356e35c660a504624"}
Oct 11 10:46:11.870697 master-2 kubenswrapper[4776]: I1011 10:46:11.869169 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-2" podStartSLOduration=2.869138731 podStartE2EDuration="2.869138731s" podCreationTimestamp="2025-10-11 10:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:11.855208774 +0000 UTC m=+1206.639635493" watchObservedRunningTime="2025-10-11 10:46:11.869138731 +0000 UTC m=+1206.653565550"
Oct 11 10:46:12.065564 master-2 kubenswrapper[4776]: I1011 10:46:12.065478 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" path="/var/lib/kubelet/pods/4125c617-d1f6-4f29-bae1-1165604b9cbd/volumes"
Oct 11 10:46:12.831686 master-2 kubenswrapper[4776]: I1011 10:46:12.831598 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="07cf5720cb90dab3edd879f83c1da3f7b2c6567ac99e60fef063fc76ab68476f" exitCode=0
Oct 11 10:46:12.831686 master-2 kubenswrapper[4776]: I1011 10:46:12.831649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"07cf5720cb90dab3edd879f83c1da3f7b2c6567ac99e60fef063fc76ab68476f"}
Oct 11 10:46:13.031041 master-2 kubenswrapper[4776]: I1011 10:46:13.030991 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body=
Oct 11 10:46:13.031118 master-2 kubenswrapper[4776]: I1011
10:46:13.031043 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused"
Oct 11 10:46:13.306695 master-2 kubenswrapper[4776]: I1011 10:46:13.306633 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2"
Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852558 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"0f53e46f2ca9a8a7f2aece0c78efd9c6ac75b85448fc5deac2bf1f78f0dfd137"}
Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"d47574c8ea8ad03e448653e7ae94459c94135291b8d46f602eff6c7e32ba5c40"}
Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852637 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"d1a0e578a5f5b18f8830b2435cf57c7cfd2e679c4028a45956e368e4891bfa04"}
Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"8834904950d9fc9a68f93ae37e78f800cc8f9a8eb962a08f0b62f9e4809cf65a"}
Oct 11 10:46:14.135918 master-2 kubenswrapper[4776]: I1011 10:46:14.135790 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-2"
Oct 11
10:46:14.135918 master-2 kubenswrapper[4776]: I1011 10:46:14.135849 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2"
Oct 11 10:46:14.141414 master-2 kubenswrapper[4776]: I1011 10:46:14.141380 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-2"
Oct 11 10:46:14.865429 master-2 kubenswrapper[4776]: I1011 10:46:14.865358 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"270ff2c4f6bcd14e58618f09b77ff83eebe14d1109545f40f25b1270461f3ef3"}
Oct 11 10:46:14.869654 master-2 kubenswrapper[4776]: I1011 10:46:14.869627 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2"
Oct 11 10:46:14.930698 master-2 kubenswrapper[4776]: I1011 10:46:14.930588 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=4.930564563 podStartE2EDuration="4.930564563s" podCreationTimestamp="2025-10-11 10:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:14.926269136 +0000 UTC m=+1209.710695845" watchObservedRunningTime="2025-10-11 10:46:14.930564563 +0000 UTC m=+1209.714991302"
Oct 11 10:46:15.171664 master-2 kubenswrapper[4776]: I1011 10:46:15.171611 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2"
Oct 11 10:46:18.071054 master-2 kubenswrapper[4776]: I1011 10:46:18.070978 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2"
Oct 11 10:46:19.812939 master-2 kubenswrapper[4776]: I1011 10:46:19.812852 4776 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"]
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813234 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813255 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813275 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813287 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813313 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813327 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813342 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="fix-audit-permissions"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813354 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="fix-audit-permissions"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813550 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd"
containerName="openshift-apiserver"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813572 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints"
Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813592 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer"
Oct 11 10:46:19.814794 master-2 kubenswrapper[4776]: I1011 10:46:19.814757 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.817867 master-2 kubenswrapper[4776]: I1011 10:46:19.817807 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 11 10:46:19.818399 master-2 kubenswrapper[4776]: I1011 10:46:19.818332 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 11 10:46:19.818486 master-2 kubenswrapper[4776]: I1011 10:46:19.818454 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 11 10:46:19.818949 master-2 kubenswrapper[4776]: I1011 10:46:19.818858 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 11 10:46:19.819592 master-2 kubenswrapper[4776]: I1011 10:46:19.819548 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 11 10:46:19.819950 master-2 kubenswrapper[4776]: I1011 10:46:19.819913 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 11 10:46:19.820661 master-2 kubenswrapper[4776]: I1011 10:46:19.820630 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 11 10:46:19.820905
master-2 kubenswrapper[4776]: I1011 10:46:19.820877 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 11 10:46:19.821083 master-2 kubenswrapper[4776]: I1011 10:46:19.821051 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9"
Oct 11 10:46:19.821184 master-2 kubenswrapper[4776]: I1011 10:46:19.821158 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 11 10:46:19.836187 master-2 kubenswrapper[4776]: I1011 10:46:19.836128 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 11 10:46:19.848534 master-2 kubenswrapper[4776]: I1011 10:46:19.848458 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.848731 master-2 kubenswrapper[4776]: I1011 10:46:19.848539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.848731 master-2 kubenswrapper[4776]: I1011 10:46:19.848690 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11
10:46:19.848893 master-2 kubenswrapper[4776]: I1011 10:46:19.848840 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.848965 master-2 kubenswrapper[4776]: I1011 10:46:19.848943 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849024 master-2 kubenswrapper[4776]: I1011 10:46:19.849008 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849060 master-2 kubenswrapper[4776]: I1011 10:46:19.849043 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849121 master-2 kubenswrapper[4776]: I1011 10:46:19.849067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod
\"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849208 master-2 kubenswrapper[4776]: I1011 10:46:19.849185 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849264 master-2 kubenswrapper[4776]: I1011 10:46:19.849223 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.849558 master-2 kubenswrapper[4776]: I1011 10:46:19.849503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.850035 master-2 kubenswrapper[4776]: I1011 10:46:19.849967 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"]
Oct 11 10:46:19.950623 master-2 kubenswrapper[4776]: I1011 10:46:19.950559 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.950623 master-2
kubenswrapper[4776]: I1011 10:46:19.950614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.950623 master-2 kubenswrapper[4776]: I1011 10:46:19.950639 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950685 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950697 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11
10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950775 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950786 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950832 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") "
pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.951883 master-2 kubenswrapper[4776]: I1011 10:46:19.951842 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.952281 master-2 kubenswrapper[4776]: I1011 10:46:19.952248 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.952521 master-2 kubenswrapper[4776]: I1011 10:46:19.952476 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") "
pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.952668 master-2 kubenswrapper[4776]: I1011 10:46:19.952611 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.952897 master-2 kubenswrapper[4776]: I1011 10:46:19.952843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.954636 master-2 kubenswrapper[4776]: I1011 10:46:19.954598 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.955394 master-2 kubenswrapper[4776]: I1011 10:46:19.955346 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp"
Oct 11 10:46:19.959902 master-2 kubenswrapper[4776]: I1011 10:46:19.959852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") "
pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.982873 master-2 kubenswrapper[4776]: I1011 10:46:19.981966 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:20.131771 master-2 kubenswrapper[4776]: I1011 10:46:20.131555 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:20.171923 master-2 kubenswrapper[4776]: I1011 10:46:20.171842 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:20.574491 master-2 kubenswrapper[4776]: I1011 10:46:20.574434 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"] Oct 11 10:46:20.903662 master-2 kubenswrapper[4776]: I1011 10:46:20.903544 4776 generic.go:334] "Generic (PLEG): container finished" podID="350b6f3e-a23f-426b-9923-b2a09914e0cb" containerID="b89b5c0840c738481e8e36f4beb28c4d82117a8861c89ed1a8e99de967b8d99a" exitCode=0 Oct 11 10:46:20.904315 master-2 kubenswrapper[4776]: I1011 10:46:20.904265 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerDied","Data":"b89b5c0840c738481e8e36f4beb28c4d82117a8861c89ed1a8e99de967b8d99a"} Oct 11 10:46:20.904428 master-2 kubenswrapper[4776]: I1011 10:46:20.904410 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"eaf5f41a40813524659e8199b72ad73238bd49163bb61383dd7e4e2fcc924558"} Oct 11 10:46:21.913942 master-2 
kubenswrapper[4776]: I1011 10:46:21.913869 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"2c4f4cf15af5c8df0b2d019e445fdf8dff21c6eb5222aff022c05b698c569fd0"} Oct 11 10:46:21.913942 master-2 kubenswrapper[4776]: I1011 10:46:21.913915 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"87c8825568cdfb6d2376cb5c391bb4c4ab0b3ce29e33cc8ad53c772cb1884816"} Oct 11 10:46:21.958457 master-2 kubenswrapper[4776]: I1011 10:46:21.958349 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" podStartSLOduration=122.958326831 podStartE2EDuration="2m2.958326831s" podCreationTimestamp="2025-10-11 10:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:21.957025336 +0000 UTC m=+1216.741452085" watchObservedRunningTime="2025-10-11 10:46:21.958326831 +0000 UTC m=+1216.742753560" Oct 11 10:46:25.133132 master-2 kubenswrapper[4776]: I1011 10:46:25.133061 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.133132 master-2 kubenswrapper[4776]: I1011 10:46:25.133134 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.142925 master-2 kubenswrapper[4776]: I1011 10:46:25.142884 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.959895 master-2 kubenswrapper[4776]: I1011 10:46:25.959813 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:29.140463 master-2 kubenswrapper[4776]: I1011 10:46:29.140265 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:30.188574 master-2 kubenswrapper[4776]: I1011 10:46:30.188515 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:30.205003 master-2 kubenswrapper[4776]: I1011 10:46:30.204946 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:34.076947 master-0 kubenswrapper[4790]: I1011 10:46:34.076885 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/revision-pruner-10-master-0"] Oct 11 10:46:34.078099 master-0 kubenswrapper[4790]: I1011 10:46:34.078052 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.082804 master-0 kubenswrapper[4790]: I1011 10:46:34.082680 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:46:34.095111 master-0 kubenswrapper[4790]: I1011 10:46:34.095020 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-0"] Oct 11 10:46:34.237201 master-0 kubenswrapper[4790]: I1011 10:46:34.237080 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.237593 master-0 kubenswrapper[4790]: I1011 10:46:34.237271 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.338510 master-0 kubenswrapper[4790]: I1011 10:46:34.338281 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.338510 master-0 kubenswrapper[4790]: I1011 10:46:34.338413 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.339039 master-0 kubenswrapper[4790]: I1011 10:46:34.338620 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.360432 master-0 kubenswrapper[4790]: I1011 10:46:34.360348 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access\") pod \"revision-pruner-10-master-0\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.410526 master-0 kubenswrapper[4790]: I1011 10:46:34.410430 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:34.905336 master-0 kubenswrapper[4790]: I1011 10:46:34.905114 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-0"] Oct 11 10:46:34.913989 master-0 kubenswrapper[4790]: W1011 10:46:34.913875 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode35e5ca9_d4d4_47f2_a2d0_217f9ac77ba3.slice/crio-c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0 WatchSource:0}: Error finding container c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0: Status 404 returned error can't find the container with id c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0 Oct 11 10:46:35.292972 master-0 kubenswrapper[4790]: I1011 10:46:35.292878 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-0" event={"ID":"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3","Type":"ContainerStarted","Data":"c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0"} Oct 11 10:46:36.312543 master-0 kubenswrapper[4790]: I1011 10:46:36.312432 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-0" event={"ID":"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3","Type":"ContainerStarted","Data":"403e2dc1bf2d947f244343a321e98557f5e484a29eac2d4b8168b223f45ad3d6"} Oct 11 10:46:36.343417 master-0 kubenswrapper[4790]: I1011 10:46:36.343293 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/revision-pruner-10-master-0" podStartSLOduration=2.343263893 podStartE2EDuration="2.343263893s" podCreationTimestamp="2025-10-11 10:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:36.341880745 +0000 UTC m=+472.896341047" watchObservedRunningTime="2025-10-11 10:46:36.343263893 +0000 UTC 
m=+472.897724225" Oct 11 10:46:36.869371 master-1 kubenswrapper[4771]: I1011 10:46:36.869258 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"] Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869514 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869530 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869542 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869548 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869561 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869567 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869576 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869581 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869589 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869595 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869605 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869611 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869622 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869628 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="extract-utilities" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869637 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869644 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869651 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b0165-5747-48c9-9179-86f19861dd68" containerName="console" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869656 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b0165-5747-48c9-9179-86f19861dd68" containerName="console" 
Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: E1011 10:46:36.869664 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869670 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="extract-content" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869749 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7fddf6-d341-4992-bba8-9d5fa5b1e7a1" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869761 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65b0165-5747-48c9-9179-86f19861dd68" containerName="console" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869772 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e964e77-4315-44b2-a34f-d0e2249e9a72" containerName="registry-server" Oct 11 10:46:36.870051 master-1 kubenswrapper[4771]: I1011 10:46:36.869779 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd28168d-b375-4a82-8784-bc38fad4cc07" containerName="registry-server" Oct 11 10:46:36.871068 master-1 kubenswrapper[4771]: I1011 10:46:36.870224 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:36.873043 master-1 kubenswrapper[4771]: I1011 10:46:36.873017 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:46:36.884523 master-1 kubenswrapper[4771]: I1011 10:46:36.884460 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"] Oct 11 10:46:36.977400 master-1 kubenswrapper[4771]: I1011 10:46:36.977307 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:36.977400 master-1 kubenswrapper[4771]: I1011 10:46:36.977393 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.079174 master-1 kubenswrapper[4771]: I1011 10:46:37.079054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.079528 master-1 kubenswrapper[4771]: I1011 10:46:37.079201 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: 
\"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.079528 master-1 kubenswrapper[4771]: I1011 10:46:37.079423 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.112467 master-1 kubenswrapper[4771]: I1011 10:46:37.112329 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.221248 master-1 kubenswrapper[4771]: I1011 10:46:37.221073 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:37.323050 master-0 kubenswrapper[4790]: I1011 10:46:37.322958 4790 generic.go:334] "Generic (PLEG): container finished" podID="e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" containerID="403e2dc1bf2d947f244343a321e98557f5e484a29eac2d4b8168b223f45ad3d6" exitCode=0 Oct 11 10:46:37.323050 master-0 kubenswrapper[4790]: I1011 10:46:37.323022 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-0" event={"ID":"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3","Type":"ContainerDied","Data":"403e2dc1bf2d947f244343a321e98557f5e484a29eac2d4b8168b223f45ad3d6"} Oct 11 10:46:37.686515 master-1 kubenswrapper[4771]: I1011 10:46:37.686459 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"] Oct 11 10:46:37.693900 master-1 kubenswrapper[4771]: W1011 10:46:37.693857 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda47a3143_b015_49c8_a15d_678e348b64e8.slice/crio-ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3 WatchSource:0}: Error finding container ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3: Status 404 returned error can't find the container with id ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3 Oct 11 10:46:37.732323 master-1 kubenswrapper[4771]: I1011 10:46:37.732275 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"a47a3143-b015-49c8-a15d-678e348b64e8","Type":"ContainerStarted","Data":"ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3"} Oct 11 10:46:38.729599 master-0 kubenswrapper[4790]: I1011 10:46:38.729511 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:38.744193 master-1 kubenswrapper[4771]: I1011 10:46:38.744055 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"a47a3143-b015-49c8-a15d-678e348b64e8","Type":"ContainerStarted","Data":"9f30d0f3808a9d2757a05579682aa059fc93ed81f62653b81a10620484dbf824"} Oct 11 10:46:38.778975 master-1 kubenswrapper[4771]: I1011 10:46:38.778795 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/revision-pruner-10-master-1" podStartSLOduration=2.778721886 podStartE2EDuration="2.778721886s" podCreationTimestamp="2025-10-11 10:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:38.773888509 +0000 UTC m=+1230.748115010" watchObservedRunningTime="2025-10-11 10:46:38.778721886 +0000 UTC m=+1230.752948367" Oct 11 10:46:38.895748 master-0 kubenswrapper[4790]: I1011 10:46:38.895646 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir\") pod \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " Oct 11 10:46:38.896013 master-0 kubenswrapper[4790]: I1011 10:46:38.895766 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" (UID: "e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:38.896013 master-0 kubenswrapper[4790]: I1011 10:46:38.895922 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access\") pod \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\" (UID: \"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3\") " Oct 11 10:46:38.896821 master-0 kubenswrapper[4790]: I1011 10:46:38.896778 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:46:38.899917 master-0 kubenswrapper[4790]: I1011 10:46:38.899878 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" (UID: "e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:46:38.997926 master-0 kubenswrapper[4790]: I1011 10:46:38.997798 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:46:39.273862 master-2 kubenswrapper[4776]: I1011 10:46:39.273635 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:39.275153 master-2 kubenswrapper[4776]: I1011 10:46:39.275113 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.279336 master-2 kubenswrapper[4776]: I1011 10:46:39.279273 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:46:39.296533 master-2 kubenswrapper[4776]: I1011 10:46:39.296462 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:39.313432 master-2 kubenswrapper[4776]: I1011 10:46:39.313380 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.314427 master-2 kubenswrapper[4776]: I1011 10:46:39.314265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.339838 master-0 kubenswrapper[4790]: I1011 10:46:39.339755 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-0" event={"ID":"e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3","Type":"ContainerDied","Data":"c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0"} Oct 11 10:46:39.339838 master-0 kubenswrapper[4790]: I1011 10:46:39.339825 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c91bd4294e6142d5101911ef5ff984c42444eb6fa63a1f61f90d9fe739253af0" Oct 11 10:46:39.340320 master-0 kubenswrapper[4790]: I1011 10:46:39.339959 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-0" Oct 11 10:46:39.415822 master-2 kubenswrapper[4776]: I1011 10:46:39.415747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.416047 master-2 kubenswrapper[4776]: I1011 10:46:39.415859 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.416047 master-2 kubenswrapper[4776]: I1011 10:46:39.415874 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.449182 master-2 kubenswrapper[4776]: I1011 10:46:39.449087 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.608308 master-2 kubenswrapper[4776]: I1011 10:46:39.608171 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.753424 master-1 kubenswrapper[4771]: I1011 10:46:39.753280 4771 generic.go:334] "Generic (PLEG): container finished" podID="a47a3143-b015-49c8-a15d-678e348b64e8" containerID="9f30d0f3808a9d2757a05579682aa059fc93ed81f62653b81a10620484dbf824" exitCode=0 Oct 11 10:46:39.753929 master-1 kubenswrapper[4771]: I1011 10:46:39.753414 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"a47a3143-b015-49c8-a15d-678e348b64e8","Type":"ContainerDied","Data":"9f30d0f3808a9d2757a05579682aa059fc93ed81f62653b81a10620484dbf824"} Oct 11 10:46:40.002629 master-2 kubenswrapper[4776]: I1011 10:46:40.002545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:40.011057 master-2 kubenswrapper[4776]: W1011 10:46:40.011004 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc2320cb4_bf2c_4d63_b9c6_5a7461a547e8.slice/crio-4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7 WatchSource:0}: Error finding container 4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7: Status 404 returned error can't find the container with id 4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7 Oct 11 10:46:40.038508 master-2 kubenswrapper[4776]: I1011 10:46:40.038432 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerStarted","Data":"4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7"} Oct 11 10:46:40.707798 master-2 kubenswrapper[4776]: I1011 10:46:40.707731 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:46:40.724082 master-2 kubenswrapper[4776]: I1011 10:46:40.723999 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:46:41.045099 master-2 kubenswrapper[4776]: I1011 10:46:41.045027 4776 generic.go:334] "Generic (PLEG): container finished" podID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerID="c49554e1efe6551f4ec98c6ddaa43e072c8f37bf32235007b5d3b96cf0462be4" exitCode=0 Oct 11 10:46:41.045099 master-2 kubenswrapper[4776]: I1011 10:46:41.045078 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerDied","Data":"c49554e1efe6551f4ec98c6ddaa43e072c8f37bf32235007b5d3b96cf0462be4"} Oct 11 10:46:41.260208 master-1 kubenswrapper[4771]: I1011 10:46:41.260150 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:41.346206 master-1 kubenswrapper[4771]: I1011 10:46:41.345558 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access\") pod \"a47a3143-b015-49c8-a15d-678e348b64e8\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " Oct 11 10:46:41.346436 master-1 kubenswrapper[4771]: I1011 10:46:41.346289 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir\") pod \"a47a3143-b015-49c8-a15d-678e348b64e8\" (UID: \"a47a3143-b015-49c8-a15d-678e348b64e8\") " Oct 11 10:46:41.346520 master-1 kubenswrapper[4771]: I1011 10:46:41.346450 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a47a3143-b015-49c8-a15d-678e348b64e8" (UID: "a47a3143-b015-49c8-a15d-678e348b64e8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:41.346903 master-1 kubenswrapper[4771]: I1011 10:46:41.346853 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a47a3143-b015-49c8-a15d-678e348b64e8-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:46:41.349888 master-1 kubenswrapper[4771]: I1011 10:46:41.349830 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a47a3143-b015-49c8-a15d-678e348b64e8" (UID: "a47a3143-b015-49c8-a15d-678e348b64e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:46:41.448263 master-1 kubenswrapper[4771]: I1011 10:46:41.448099 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a47a3143-b015-49c8-a15d-678e348b64e8-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:46:41.775017 master-1 kubenswrapper[4771]: I1011 10:46:41.774871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"a47a3143-b015-49c8-a15d-678e348b64e8","Type":"ContainerDied","Data":"ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3"} Oct 11 10:46:41.775017 master-1 kubenswrapper[4771]: I1011 10:46:41.774933 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5ce240b40d8370900f9879a98c097946af4b021ca8b155564363fc97fb42b3" Oct 11 10:46:41.775017 master-1 kubenswrapper[4771]: I1011 10:46:41.774956 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1" Oct 11 10:46:42.067355 master-2 kubenswrapper[4776]: I1011 10:46:42.067255 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" path="/var/lib/kubelet/pods/ff524bb0-602a-4579-bac9-c3f5c19ec9ba/volumes" Oct 11 10:46:42.366284 master-2 kubenswrapper[4776]: I1011 10:46:42.366244 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:42.458057 master-2 kubenswrapper[4776]: I1011 10:46:42.457984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " Oct 11 10:46:42.458276 master-2 kubenswrapper[4776]: I1011 10:46:42.458125 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " Oct 11 10:46:42.458499 master-2 kubenswrapper[4776]: I1011 10:46:42.458469 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" (UID: "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:42.461363 master-2 kubenswrapper[4776]: I1011 10:46:42.461219 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" (UID: "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:46:42.560604 master-2 kubenswrapper[4776]: I1011 10:46:42.560497 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:42.560604 master-2 kubenswrapper[4776]: I1011 10:46:42.560566 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:42.899612 master-1 kubenswrapper[4771]: I1011 10:46:42.899520 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-1"] Oct 11 10:46:42.905034 master-1 kubenswrapper[4771]: I1011 10:46:42.904956 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-1-master-1"] Oct 11 10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061190 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerDied","Data":"4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7"} Oct 11 10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061236 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7" Oct 11 
10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061239 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:43.472395 master-0 kubenswrapper[4790]: I1011 10:46:43.472278 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-10-master-0"] Oct 11 10:46:43.473410 master-0 kubenswrapper[4790]: E1011 10:46:43.472782 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" containerName="pruner" Oct 11 10:46:43.473410 master-0 kubenswrapper[4790]: I1011 10:46:43.472810 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" containerName="pruner" Oct 11 10:46:43.473410 master-0 kubenswrapper[4790]: I1011 10:46:43.472977 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35e5ca9-d4d4-47f2-a2d0-217f9ac77ba3" containerName="pruner" Oct 11 10:46:43.473887 master-0 kubenswrapper[4790]: I1011 10:46:43.473839 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.476992 master-0 kubenswrapper[4790]: I1011 10:46:43.476929 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:46:43.489065 master-0 kubenswrapper[4790]: I1011 10:46:43.488974 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-0"] Oct 11 10:46:43.658902 master-0 kubenswrapper[4790]: I1011 10:46:43.658800 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.658902 master-0 kubenswrapper[4790]: I1011 10:46:43.658906 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.659257 master-0 kubenswrapper[4790]: I1011 10:46:43.659006 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.760588 master-0 kubenswrapper[4790]: I1011 10:46:43.760415 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " 
pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.760588 master-0 kubenswrapper[4790]: I1011 10:46:43.760514 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.760588 master-0 kubenswrapper[4790]: I1011 10:46:43.760553 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.760918 master-0 kubenswrapper[4790]: I1011 10:46:43.760619 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.760918 master-0 kubenswrapper[4790]: I1011 10:46:43.760730 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.783441 master-0 kubenswrapper[4790]: I1011 10:46:43.783386 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access\") pod \"installer-10-master-0\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:43.861698 master-0 
kubenswrapper[4790]: I1011 10:46:43.861612 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-0" Oct 11 10:46:44.305232 master-0 kubenswrapper[4790]: I1011 10:46:44.305152 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-0"] Oct 11 10:46:44.369618 master-0 kubenswrapper[4790]: I1011 10:46:44.369540 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-0" event={"ID":"527d9cd7-412d-4afb-9212-c8697426a964","Type":"ContainerStarted","Data":"0718367c3bdc698f2094bf8918ac19ddd76bb29db33497d019a8831490485d51"} Oct 11 10:46:44.444221 master-1 kubenswrapper[4771]: I1011 10:46:44.444148 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826e1279-bc0d-426e-b6e0-5108268f340e" path="/var/lib/kubelet/pods/826e1279-bc0d-426e-b6e0-5108268f340e/volumes" Oct 11 10:46:45.377790 master-0 kubenswrapper[4790]: I1011 10:46:45.377649 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-0" event={"ID":"527d9cd7-412d-4afb-9212-c8697426a964","Type":"ContainerStarted","Data":"84981853b8264575ef9774e4c0deccd1808b713d6d64a0dcb63fc54fcf80f561"} Oct 11 10:46:45.406833 master-0 kubenswrapper[4790]: I1011 10:46:45.406749 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-10-master-0" podStartSLOduration=2.406722992 podStartE2EDuration="2.406722992s" podCreationTimestamp="2025-10-11 10:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:45.405862938 +0000 UTC m=+481.960323260" watchObservedRunningTime="2025-10-11 10:46:45.406722992 +0000 UTC m=+481.961183284" Oct 11 10:47:07.551512 master-1 kubenswrapper[4771]: I1011 10:47:07.551440 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-6-master-1"] Oct 11 10:47:07.552317 master-1 kubenswrapper[4771]: E1011 10:47:07.551682 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47a3143-b015-49c8-a15d-678e348b64e8" containerName="pruner" Oct 11 10:47:07.552317 master-1 kubenswrapper[4771]: I1011 10:47:07.551695 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47a3143-b015-49c8-a15d-678e348b64e8" containerName="pruner" Oct 11 10:47:07.552317 master-1 kubenswrapper[4771]: I1011 10:47:07.551814 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47a3143-b015-49c8-a15d-678e348b64e8" containerName="pruner" Oct 11 10:47:07.552436 master-1 kubenswrapper[4771]: I1011 10:47:07.552382 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.555268 master-1 kubenswrapper[4771]: I1011 10:47:07.555210 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:47:07.568874 master-1 kubenswrapper[4771]: I1011 10:47:07.568798 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-1"] Oct 11 10:47:07.651386 master-1 kubenswrapper[4771]: I1011 10:47:07.651307 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.651736 master-1 kubenswrapper[4771]: I1011 10:47:07.651587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access\") pod \"installer-6-master-1\" (UID: 
\"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.651736 master-1 kubenswrapper[4771]: I1011 10:47:07.651695 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.752916 master-1 kubenswrapper[4771]: I1011 10:47:07.752803 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.752916 master-1 kubenswrapper[4771]: I1011 10:47:07.752874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.752916 master-1 kubenswrapper[4771]: I1011 10:47:07.752942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.754205 master-1 kubenswrapper[4771]: I1011 10:47:07.753037 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir\") pod \"installer-6-master-1\" (UID: 
\"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.754205 master-1 kubenswrapper[4771]: I1011 10:47:07.753083 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.779896 master-1 kubenswrapper[4771]: I1011 10:47:07.779843 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access\") pod \"installer-6-master-1\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.830830 master-2 kubenswrapper[4776]: I1011 10:47:07.830771 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: E1011 10:47:07.831114 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: I1011 10:47:07.831148 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: I1011 10:47:07.831314 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.832641 master-2 kubenswrapper[4776]: I1011 10:47:07.832612 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.847297 master-2 kubenswrapper[4776]: I1011 10:47:07.847255 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:07.870188 master-1 kubenswrapper[4771]: I1011 10:47:07.870039 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:07.923132 master-2 kubenswrapper[4776]: I1011 10:47:07.923068 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.923397 master-2 kubenswrapper[4776]: I1011 10:47:07.923216 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.923397 master-2 kubenswrapper[4776]: I1011 10:47:07.923262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " 
pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024358 master-2 kubenswrapper[4776]: I1011 10:47:08.024301 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024358 master-2 kubenswrapper[4776]: I1011 10:47:08.024360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024662 master-2 kubenswrapper[4776]: I1011 10:47:08.024396 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.025399 master-2 kubenswrapper[4776]: I1011 10:47:08.025365 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.025537 
master-2 kubenswrapper[4776]: I1011 10:47:08.025403 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.048879 master-2 kubenswrapper[4776]: I1011 10:47:08.048815 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.147577 master-2 kubenswrapper[4776]: I1011 10:47:08.147475 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.356153 master-1 kubenswrapper[4771]: I1011 10:47:08.356065 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-1"] Oct 11 10:47:08.367185 master-1 kubenswrapper[4771]: W1011 10:47:08.367119 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod75f73eff_98a5_47a6_b15c_2338930444b9.slice/crio-ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5 WatchSource:0}: Error finding container ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5: Status 404 returned error can't find the container with id ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5 Oct 11 10:47:08.552151 master-2 kubenswrapper[4776]: I1011 10:47:08.552105 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:08.555932 master-2 kubenswrapper[4776]: W1011 10:47:08.555840 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df9c769_cf84_4934_a70d_16984666e6ed.slice/crio-2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15 WatchSource:0}: Error finding container 2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15: Status 404 returned error can't find the container with id 2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15 Oct 11 10:47:08.977311 master-1 kubenswrapper[4771]: I1011 10:47:08.977113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"75f73eff-98a5-47a6-b15c-2338930444b9","Type":"ContainerStarted","Data":"2a911b9063702410de3ab33cdd411750a6c1735b8b89a04533d1fce1ca985ed4"} Oct 11 10:47:08.978122 master-1 kubenswrapper[4771]: I1011 10:47:08.978058 
4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"75f73eff-98a5-47a6-b15c-2338930444b9","Type":"ContainerStarted","Data":"ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5"} Oct 11 10:47:09.004653 master-1 kubenswrapper[4771]: I1011 10:47:09.004535 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-1" podStartSLOduration=2.004514185 podStartE2EDuration="2.004514185s" podCreationTimestamp="2025-10-11 10:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:47:09.003522716 +0000 UTC m=+1260.977749167" watchObservedRunningTime="2025-10-11 10:47:09.004514185 +0000 UTC m=+1260.978740636" Oct 11 10:47:09.243635 master-2 kubenswrapper[4776]: I1011 10:47:09.243577 4776 generic.go:334] "Generic (PLEG): container finished" podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="686b001d4c14f06bbecf2081fdd73a0a5e1b061794e316c9201bdf61e6e0037e" exitCode=0 Oct 11 10:47:09.243635 master-2 kubenswrapper[4776]: I1011 10:47:09.243630 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"686b001d4c14f06bbecf2081fdd73a0a5e1b061794e316c9201bdf61e6e0037e"} Oct 11 10:47:09.244300 master-2 kubenswrapper[4776]: I1011 10:47:09.243658 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerStarted","Data":"2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15"} Oct 11 10:47:11.263029 master-2 kubenswrapper[4776]: I1011 10:47:11.262099 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="a8eb4ac91d75a294c570a3b36d660bcf024c2cc21a78faca1e93e878d79a935a" exitCode=0 Oct 11 10:47:11.263029 master-2 kubenswrapper[4776]: I1011 10:47:11.262153 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"a8eb4ac91d75a294c570a3b36d660bcf024c2cc21a78faca1e93e878d79a935a"} Oct 11 10:47:12.275028 master-2 kubenswrapper[4776]: I1011 10:47:12.274947 4776 generic.go:334] "Generic (PLEG): container finished" podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="747e01e8adbde76ee6c3e92fe622c0624a737296a2066cf73dd9e691e2e9cd6f" exitCode=0 Oct 11 10:47:12.275028 master-2 kubenswrapper[4776]: I1011 10:47:12.275001 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"747e01e8adbde76ee6c3e92fe622c0624a737296a2066cf73dd9e691e2e9cd6f"} Oct 11 10:47:13.596589 master-2 kubenswrapper[4776]: I1011 10:47:13.596525 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:13.652485 master-1 kubenswrapper[4771]: I1011 10:47:13.652114 4771 scope.go:117] "RemoveContainer" containerID="9a616ae6ac6ffcbc27ae54a54aec1c65046926d3773ee73ab8bfdedb75371f06" Oct 11 10:47:13.797348 master-2 kubenswrapper[4776]: I1011 10:47:13.797230 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.797517 master-2 kubenswrapper[4776]: I1011 10:47:13.797379 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.797517 master-2 kubenswrapper[4776]: I1011 10:47:13.797459 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.798239 master-2 kubenswrapper[4776]: I1011 10:47:13.798203 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle" (OuterVolumeSpecName: "bundle") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:13.800196 master-2 kubenswrapper[4776]: I1011 10:47:13.800167 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd" (OuterVolumeSpecName: "kube-api-access-g5rwd") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "kube-api-access-g5rwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:13.812776 master-2 kubenswrapper[4776]: I1011 10:47:13.811626 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util" (OuterVolumeSpecName: "util") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899215 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899253 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899265 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15"} Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288869 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15" Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288871 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:15.520355 master-2 kubenswrapper[4776]: I1011 10:47:15.520289 4776 scope.go:117] "RemoveContainer" containerID="5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21" Oct 11 10:47:15.831447 master-0 kubenswrapper[4790]: I1011 10:47:15.831260 4790 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Oct 11 10:47:15.832241 master-0 kubenswrapper[4790]: I1011 10:47:15.832009 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcdctl" containerID="cri-o://d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" gracePeriod=30 Oct 11 10:47:15.832241 master-0 kubenswrapper[4790]: I1011 10:47:15.832081 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-rev" containerID="cri-o://3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" gracePeriod=30 Oct 11 10:47:15.832241 master-0 kubenswrapper[4790]: I1011 10:47:15.832160 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" 
podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" containerID="cri-o://b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca" gracePeriod=30 Oct 11 10:47:15.832241 master-0 kubenswrapper[4790]: I1011 10:47:15.832140 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-metrics" containerID="cri-o://95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" gracePeriod=30 Oct 11 10:47:15.832375 master-0 kubenswrapper[4790]: I1011 10:47:15.832087 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-readyz" containerID="cri-o://e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" gracePeriod=30 Oct 11 10:47:15.835076 master-0 kubenswrapper[4790]: I1011 10:47:15.835030 4790 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Oct 11 10:47:15.835311 master-0 kubenswrapper[4790]: E1011 10:47:15.835278 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.835350 master-0 kubenswrapper[4790]: I1011 10:47:15.835309 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.835350 master-0 kubenswrapper[4790]: E1011 10:47:15.835330 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.835350 master-0 kubenswrapper[4790]: I1011 10:47:15.835347 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: E1011 10:47:15.835362 4790 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-rev" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: I1011 10:47:15.835377 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-rev" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: E1011 10:47:15.835404 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="setup" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: I1011 10:47:15.835417 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="setup" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: E1011 10:47:15.835434 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcdctl" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: I1011 10:47:15.835447 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcdctl" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: E1011 10:47:15.835462 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-readyz" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: I1011 10:47:15.835475 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-readyz" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: E1011 10:47:15.835488 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-metrics" Oct 11 10:47:15.835499 master-0 kubenswrapper[4790]: I1011 10:47:15.835501 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-metrics" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: E1011 10:47:15.835518 4790 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-ensure-env-vars" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835531 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-ensure-env-vars" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: E1011 10:47:15.835548 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-resources-copy" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835561 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-resources-copy" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835675 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-metrics" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835699 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-readyz" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835743 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcdctl" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835755 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835771 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd-rev" Oct 11 10:47:15.835788 master-0 kubenswrapper[4790]: I1011 10:47:15.835786 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.836058 master-0 kubenswrapper[4790]: I1011 
10:47:15.835804 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.836058 master-0 kubenswrapper[4790]: E1011 10:47:15.835922 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:15.836058 master-0 kubenswrapper[4790]: I1011 10:47:15.835936 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e53a8977ce5fc5588aef94f91dcc24" containerName="etcd" Oct 11 10:47:16.010035 master-0 kubenswrapper[4790]: I1011 10:47:16.009919 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-cert-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.010035 master-0 kubenswrapper[4790]: I1011 10:47:16.009995 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-static-pod-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.010035 master-0 kubenswrapper[4790]: I1011 10:47:16.010028 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-data-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.010035 master-0 kubenswrapper[4790]: I1011 10:47:16.010051 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-resource-dir\") pod \"etcd-master-0\" (UID: 
\"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.010467 master-0 kubenswrapper[4790]: I1011 10:47:16.010072 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-log-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.010467 master-0 kubenswrapper[4790]: I1011 10:47:16.010103 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-usr-local-bin\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111740 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-log-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111802 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-usr-local-bin\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111842 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-cert-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111866 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-static-pod-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111895 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-data-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.111930 master-0 kubenswrapper[4790]: I1011 10:47:16.111918 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-resource-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.111980 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-log-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.112030 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-data-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.112004 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-resource-dir\") pod \"etcd-master-0\" (UID: 
\"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.112037 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-cert-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.112080 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-static-pod-dir\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.112333 master-0 kubenswrapper[4790]: I1011 10:47:16.112180 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/14286286be88b59efc7cfc15eca1cc38-usr-local-bin\") pod \"etcd-master-0\" (UID: \"14286286be88b59efc7cfc15eca1cc38\") " pod="openshift-etcd/etcd-master-0" Oct 11 10:47:16.559518 master-0 kubenswrapper[4790]: I1011 10:47:16.559458 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log" Oct 11 10:47:16.560176 master-0 kubenswrapper[4790]: I1011 10:47:16.560140 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-rev/0.log" Oct 11 10:47:16.561216 master-0 kubenswrapper[4790]: I1011 10:47:16.561190 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-metrics/0.log" Oct 11 10:47:16.562909 master-0 kubenswrapper[4790]: I1011 10:47:16.562871 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" 
containerID="3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" exitCode=2 Oct 11 10:47:16.562909 master-0 kubenswrapper[4790]: I1011 10:47:16.562905 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" exitCode=0 Oct 11 10:47:16.563046 master-0 kubenswrapper[4790]: I1011 10:47:16.562917 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" exitCode=2 Oct 11 10:47:16.565090 master-0 kubenswrapper[4790]: I1011 10:47:16.565059 4790 generic.go:334] "Generic (PLEG): container finished" podID="527d9cd7-412d-4afb-9212-c8697426a964" containerID="84981853b8264575ef9774e4c0deccd1808b713d6d64a0dcb63fc54fcf80f561" exitCode=0 Oct 11 10:47:16.565171 master-0 kubenswrapper[4790]: I1011 10:47:16.565098 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-0" event={"ID":"527d9cd7-412d-4afb-9212-c8697426a964","Type":"ContainerDied","Data":"84981853b8264575ef9774e4c0deccd1808b713d6d64a0dcb63fc54fcf80f561"} Oct 11 10:47:16.833167 master-0 kubenswrapper[4790]: I1011 10:47:16.832992 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:16.833167 master-0 kubenswrapper[4790]: I1011 10:47:16.833096 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:17.861813 master-0 kubenswrapper[4790]: I1011 10:47:17.861743 4790 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-0" Oct 11 10:47:18.034296 master-0 kubenswrapper[4790]: I1011 10:47:18.034233 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir\") pod \"527d9cd7-412d-4afb-9212-c8697426a964\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " Oct 11 10:47:18.034296 master-0 kubenswrapper[4790]: I1011 10:47:18.034297 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock\") pod \"527d9cd7-412d-4afb-9212-c8697426a964\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " Oct 11 10:47:18.034634 master-0 kubenswrapper[4790]: I1011 10:47:18.034346 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access\") pod \"527d9cd7-412d-4afb-9212-c8697426a964\" (UID: \"527d9cd7-412d-4afb-9212-c8697426a964\") " Oct 11 10:47:18.034634 master-0 kubenswrapper[4790]: I1011 10:47:18.034403 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "527d9cd7-412d-4afb-9212-c8697426a964" (UID: "527d9cd7-412d-4afb-9212-c8697426a964"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:18.034634 master-0 kubenswrapper[4790]: I1011 10:47:18.034485 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock" (OuterVolumeSpecName: "var-lock") pod "527d9cd7-412d-4afb-9212-c8697426a964" (UID: "527d9cd7-412d-4afb-9212-c8697426a964"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:18.034634 master-0 kubenswrapper[4790]: I1011 10:47:18.034594 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:47:18.034634 master-0 kubenswrapper[4790]: I1011 10:47:18.034614 4790 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/527d9cd7-412d-4afb-9212-c8697426a964-var-lock\") on node \"master-0\" DevicePath \"\"" Oct 11 10:47:18.038019 master-0 kubenswrapper[4790]: I1011 10:47:18.037961 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "527d9cd7-412d-4afb-9212-c8697426a964" (UID: "527d9cd7-412d-4afb-9212-c8697426a964"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:18.135906 master-0 kubenswrapper[4790]: I1011 10:47:18.135743 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/527d9cd7-412d-4afb-9212-c8697426a964-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:47:18.578020 master-0 kubenswrapper[4790]: I1011 10:47:18.577908 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-0" event={"ID":"527d9cd7-412d-4afb-9212-c8697426a964","Type":"ContainerDied","Data":"0718367c3bdc698f2094bf8918ac19ddd76bb29db33497d019a8831490485d51"} Oct 11 10:47:18.578020 master-0 kubenswrapper[4790]: I1011 10:47:18.578018 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0718367c3bdc698f2094bf8918ac19ddd76bb29db33497d019a8831490485d51" Oct 11 10:47:18.578825 master-0 kubenswrapper[4790]: I1011 10:47:18.578774 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-0" Oct 11 10:47:21.243258 master-0 kubenswrapper[4790]: I1011 10:47:21.243181 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7f4f89bcdb-rh9fx"] Oct 11 10:47:21.244130 master-0 kubenswrapper[4790]: E1011 10:47:21.243390 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d9cd7-412d-4afb-9212-c8697426a964" containerName="installer" Oct 11 10:47:21.244130 master-0 kubenswrapper[4790]: I1011 10:47:21.243406 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d9cd7-412d-4afb-9212-c8697426a964" containerName="installer" Oct 11 10:47:21.244130 master-0 kubenswrapper[4790]: I1011 10:47:21.243535 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="527d9cd7-412d-4afb-9212-c8697426a964" containerName="installer" Oct 11 10:47:21.244130 master-0 kubenswrapper[4790]: I1011 10:47:21.244132 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.247090 master-0 kubenswrapper[4790]: I1011 10:47:21.247029 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Oct 11 10:47:21.247374 master-0 kubenswrapper[4790]: I1011 10:47:21.247346 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Oct 11 10:47:21.247815 master-0 kubenswrapper[4790]: I1011 10:47:21.247788 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Oct 11 10:47:21.248015 master-0 kubenswrapper[4790]: I1011 10:47:21.247988 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Oct 11 10:47:21.248147 master-0 kubenswrapper[4790]: I1011 10:47:21.248124 4790 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-storage"/"lvms-operator-webhook-server-cert" Oct 11 10:47:21.265394 master-0 kubenswrapper[4790]: I1011 10:47:21.265329 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7f4f89bcdb-rh9fx"] Oct 11 10:47:21.392856 master-0 kubenswrapper[4790]: I1011 10:47:21.392775 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-webhook-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.393146 master-0 kubenswrapper[4790]: I1011 10:47:21.392926 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/504ab58a-33b6-400f-8f3f-8ed6be984915-socket-dir\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.393146 master-0 kubenswrapper[4790]: I1011 10:47:21.392963 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-metrics-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.393263 master-0 kubenswrapper[4790]: I1011 10:47:21.393150 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-apiservice-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.393263 master-0 
kubenswrapper[4790]: I1011 10:47:21.393186 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zv8h\" (UniqueName: \"kubernetes.io/projected/504ab58a-33b6-400f-8f3f-8ed6be984915-kube-api-access-8zv8h\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495252 master-0 kubenswrapper[4790]: I1011 10:47:21.495069 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/504ab58a-33b6-400f-8f3f-8ed6be984915-socket-dir\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495252 master-0 kubenswrapper[4790]: I1011 10:47:21.495162 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-metrics-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495252 master-0 kubenswrapper[4790]: I1011 10:47:21.495203 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-apiservice-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495252 master-0 kubenswrapper[4790]: I1011 10:47:21.495241 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zv8h\" (UniqueName: \"kubernetes.io/projected/504ab58a-33b6-400f-8f3f-8ed6be984915-kube-api-access-8zv8h\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: 
\"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495688 master-0 kubenswrapper[4790]: I1011 10:47:21.495322 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-webhook-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.495930 master-0 kubenswrapper[4790]: I1011 10:47:21.495857 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/504ab58a-33b6-400f-8f3f-8ed6be984915-socket-dir\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.499591 master-0 kubenswrapper[4790]: I1011 10:47:21.499533 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-webhook-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.500453 master-0 kubenswrapper[4790]: I1011 10:47:21.500390 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-apiservice-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.501268 master-0 kubenswrapper[4790]: I1011 10:47:21.501223 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ab58a-33b6-400f-8f3f-8ed6be984915-metrics-cert\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" 
(UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.525825 master-0 kubenswrapper[4790]: I1011 10:47:21.525766 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zv8h\" (UniqueName: \"kubernetes.io/projected/504ab58a-33b6-400f-8f3f-8ed6be984915-kube-api-access-8zv8h\") pod \"lvms-operator-7f4f89bcdb-rh9fx\" (UID: \"504ab58a-33b6-400f-8f3f-8ed6be984915\") " pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.559929 master-0 kubenswrapper[4790]: I1011 10:47:21.559859 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:21.833383 master-0 kubenswrapper[4790]: I1011 10:47:21.833318 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:21.833383 master-0 kubenswrapper[4790]: I1011 10:47:21.833402 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:22.003471 master-0 kubenswrapper[4790]: I1011 10:47:22.003393 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7f4f89bcdb-rh9fx"] Oct 11 10:47:22.013128 master-0 kubenswrapper[4790]: W1011 10:47:22.013066 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod504ab58a_33b6_400f_8f3f_8ed6be984915.slice/crio-a9aa1d1d269ae6bc7d8fbd857f1784dbc7b4e717ec2163fa271a0d7c15fae2dd WatchSource:0}: Error finding container 
a9aa1d1d269ae6bc7d8fbd857f1784dbc7b4e717ec2163fa271a0d7c15fae2dd: Status 404 returned error can't find the container with id a9aa1d1d269ae6bc7d8fbd857f1784dbc7b4e717ec2163fa271a0d7c15fae2dd Oct 11 10:47:22.018192 master-0 kubenswrapper[4790]: I1011 10:47:22.018152 4790 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:47:22.604690 master-0 kubenswrapper[4790]: I1011 10:47:22.604599 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" event={"ID":"504ab58a-33b6-400f-8f3f-8ed6be984915","Type":"ContainerStarted","Data":"a9aa1d1d269ae6bc7d8fbd857f1784dbc7b4e717ec2163fa271a0d7c15fae2dd"} Oct 11 10:47:26.833098 master-0 kubenswrapper[4790]: I1011 10:47:26.833002 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:26.833903 master-0 kubenswrapper[4790]: I1011 10:47:26.833125 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:26.833903 master-0 kubenswrapper[4790]: I1011 10:47:26.833234 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-0" Oct 11 10:47:26.834387 master-0 kubenswrapper[4790]: I1011 10:47:26.834309 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:26.834496 master-0 kubenswrapper[4790]: I1011 
10:47:26.834436 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:27.632018 master-0 kubenswrapper[4790]: I1011 10:47:27.631767 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" event={"ID":"504ab58a-33b6-400f-8f3f-8ed6be984915","Type":"ContainerStarted","Data":"a6701b274e8f5e138b58ffe2d1c3a1b4ca33b4650d0b25acda5098cb29b36a5b"} Oct 11 10:47:27.632018 master-0 kubenswrapper[4790]: I1011 10:47:27.632013 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:27.669532 master-0 kubenswrapper[4790]: I1011 10:47:27.669404 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" podStartSLOduration=1.3791847640000001 podStartE2EDuration="6.669365413s" podCreationTimestamp="2025-10-11 10:47:21 +0000 UTC" firstStartedPulling="2025-10-11 10:47:22.018033055 +0000 UTC m=+518.572493347" lastFinishedPulling="2025-10-11 10:47:27.308213664 +0000 UTC m=+523.862673996" observedRunningTime="2025-10-11 10:47:27.664780608 +0000 UTC m=+524.219240950" watchObservedRunningTime="2025-10-11 10:47:27.669365413 +0000 UTC m=+524.223825745" Oct 11 10:47:28.642276 master-0 kubenswrapper[4790]: I1011 10:47:28.642151 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7f4f89bcdb-rh9fx" Oct 11 10:47:31.832887 master-0 kubenswrapper[4790]: I1011 10:47:31.832760 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: 
connect: connection refused" start-of-body= Oct 11 10:47:31.833529 master-0 kubenswrapper[4790]: I1011 10:47:31.832912 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:34.519278 master-2 kubenswrapper[4776]: I1011 10:47:34.519213 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519478 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="pull" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519492 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="pull" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519505 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="util" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519513 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="util" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519529 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="extract" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519536 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="extract" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519649 4776 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="extract" Oct 11 10:47:34.520607 master-2 kubenswrapper[4776]: I1011 10:47:34.520558 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.536971 master-2 kubenswrapper[4776]: I1011 10:47:34.536920 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598336 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598406 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.699756 master-2 kubenswrapper[4776]: I1011 10:47:34.699657 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.699756 master-2 kubenswrapper[4776]: I1011 10:47:34.699759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.700106 master-2 kubenswrapper[4776]: I1011 10:47:34.699810 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.700329 master-2 kubenswrapper[4776]: I1011 10:47:34.700269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 
10:47:34.700329 master-2 kubenswrapper[4776]: I1011 10:47:34.700308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.731319 master-2 kubenswrapper[4776]: I1011 10:47:34.731260 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.835167 master-2 kubenswrapper[4776]: I1011 10:47:34.834754 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.930706 master-2 kubenswrapper[4776]: I1011 10:47:34.926911 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:34.930706 master-2 kubenswrapper[4776]: I1011 10:47:34.928558 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:34.951628 master-2 kubenswrapper[4776]: I1011 10:47:34.951573 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:35.010368 master-2 kubenswrapper[4776]: I1011 10:47:35.010312 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.010368 master-2 kubenswrapper[4776]: I1011 10:47:35.010369 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.010585 master-2 kubenswrapper[4776]: I1011 10:47:35.010511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111891 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.112413 master-2 kubenswrapper[4776]: I1011 10:47:35.112373 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.112604 master-2 kubenswrapper[4776]: I1011 10:47:35.112574 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod 
\"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.135345 master-2 kubenswrapper[4776]: I1011 10:47:35.135295 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.249779 master-2 kubenswrapper[4776]: I1011 10:47:35.249617 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:35.251437 master-2 kubenswrapper[4776]: I1011 10:47:35.251382 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.415475 master-2 kubenswrapper[4776]: I1011 10:47:35.415409 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerStarted","Data":"db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e"} Oct 11 10:47:35.415475 master-2 kubenswrapper[4776]: I1011 10:47:35.415466 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerStarted","Data":"075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7"} Oct 11 10:47:35.748827 master-2 kubenswrapper[4776]: W1011 10:47:35.748752 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ad118d_adf9_4bbb_93ca_a7ca0e52a1bf.slice/crio-f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d WatchSource:0}: Error finding container f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d: Status 404 returned error can't find the container with id f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d Oct 11 10:47:35.749286 master-2 kubenswrapper[4776]: I1011 10:47:35.748928 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:36.321824 master-2 kubenswrapper[4776]: I1011 10:47:36.321732 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:36.323201 master-2 kubenswrapper[4776]: I1011 10:47:36.323148 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.337121 master-2 kubenswrapper[4776]: I1011 10:47:36.337062 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:36.425201 master-2 kubenswrapper[4776]: I1011 10:47:36.425135 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="28f1ad4c7e54c6e7bae910d8098c0f170760e51d44fa824bf874ee757107cfaf" exitCode=0 Oct 11 10:47:36.425450 master-2 kubenswrapper[4776]: I1011 10:47:36.425215 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"28f1ad4c7e54c6e7bae910d8098c0f170760e51d44fa824bf874ee757107cfaf"} Oct 11 10:47:36.425450 master-2 kubenswrapper[4776]: I1011 10:47:36.425247 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerStarted","Data":"f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d"} Oct 11 10:47:36.427047 master-2 kubenswrapper[4776]: I1011 10:47:36.427019 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e" exitCode=0 Oct 11 10:47:36.427047 master-2 kubenswrapper[4776]: I1011 10:47:36.427052 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e"} Oct 11 
10:47:36.428464 master-2 kubenswrapper[4776]: I1011 10:47:36.428405 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.428464 master-2 kubenswrapper[4776]: I1011 10:47:36.428451 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.428606 master-2 kubenswrapper[4776]: I1011 10:47:36.428476 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.529747 master-2 kubenswrapper[4776]: I1011 10:47:36.529664 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.529747 
master-2 kubenswrapper[4776]: I1011 10:47:36.529749 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530283 master-2 kubenswrapper[4776]: I1011 10:47:36.529797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530509 master-2 kubenswrapper[4776]: I1011 10:47:36.530416 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530509 master-2 kubenswrapper[4776]: I1011 10:47:36.530469 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.567426 master-2 kubenswrapper[4776]: I1011 10:47:36.567308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdf9m\" 
(UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.646640 master-2 kubenswrapper[4776]: I1011 10:47:36.646362 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.833243 master-0 kubenswrapper[4790]: I1011 10:47:36.833117 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:36.833243 master-0 kubenswrapper[4790]: I1011 10:47:36.833220 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:37.105491 master-2 kubenswrapper[4776]: I1011 10:47:37.105445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:37.108766 master-2 kubenswrapper[4776]: W1011 10:47:37.107986 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb231c9e_66e8_4fdf_870d_a927418a72fa.slice/crio-60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa WatchSource:0}: Error finding container 60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa: Status 404 returned error can't find the container with id 
60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa Oct 11 10:47:37.434751 master-2 kubenswrapper[4776]: I1011 10:47:37.434435 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="39d64073eca44763d4810b05db5d70ffbca978f133a7a44b8ba9480c3da9a335" exitCode=0 Oct 11 10:47:37.434751 master-2 kubenswrapper[4776]: I1011 10:47:37.434507 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"39d64073eca44763d4810b05db5d70ffbca978f133a7a44b8ba9480c3da9a335"} Oct 11 10:47:37.435047 master-2 kubenswrapper[4776]: I1011 10:47:37.434769 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerStarted","Data":"60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa"} Oct 11 10:47:38.442449 master-2 kubenswrapper[4776]: I1011 10:47:38.442351 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="1020de00cf93f1e2ffe9bd58d8cb1276ac5e65643d39d4776195df14f3677e41" exitCode=0 Oct 11 10:47:38.442449 master-2 kubenswrapper[4776]: I1011 10:47:38.442414 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"1020de00cf93f1e2ffe9bd58d8cb1276ac5e65643d39d4776195df14f3677e41"} Oct 11 10:47:39.451712 master-2 kubenswrapper[4776]: I1011 10:47:39.451647 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="80b24e95c365c5820128ebd64c113cb5de8b49ca2df72b3c7182cc6b16ad2cf8" exitCode=0 Oct 11 
10:47:39.452028 master-2 kubenswrapper[4776]: I1011 10:47:39.451715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"80b24e95c365c5820128ebd64c113cb5de8b49ca2df72b3c7182cc6b16ad2cf8"} Oct 11 10:47:40.466407 master-2 kubenswrapper[4776]: I1011 10:47:40.466363 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="f2214ae6145487a3745153c956da3ff5e1e01b6136fad0863abb18dad0873bbd" exitCode=0 Oct 11 10:47:40.467343 master-2 kubenswrapper[4776]: I1011 10:47:40.466418 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"f2214ae6145487a3745153c956da3ff5e1e01b6136fad0863abb18dad0873bbd"} Oct 11 10:47:40.468602 master-2 kubenswrapper[4776]: I1011 10:47:40.468522 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="ded3f00970be8ea1637c5f85c665806c5a28fbc3cd6e930abf54671fefb96c09" exitCode=0 Oct 11 10:47:40.468760 master-2 kubenswrapper[4776]: I1011 10:47:40.468729 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"ded3f00970be8ea1637c5f85c665806c5a28fbc3cd6e930abf54671fefb96c09"} Oct 11 10:47:40.986617 master-2 kubenswrapper[4776]: I1011 10:47:40.986583 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:41.092962 master-2 kubenswrapper[4776]: I1011 10:47:41.092841 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.092962 master-2 kubenswrapper[4776]: I1011 10:47:41.092961 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.093145 master-2 kubenswrapper[4776]: I1011 10:47:41.093043 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.094079 master-2 kubenswrapper[4776]: I1011 10:47:41.094025 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle" (OuterVolumeSpecName: "bundle") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:41.095621 master-2 kubenswrapper[4776]: I1011 10:47:41.095555 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9" (OuterVolumeSpecName: "kube-api-access-q8jv9") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). 
InnerVolumeSpecName "kube-api-access-q8jv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:41.102497 master-2 kubenswrapper[4776]: I1011 10:47:41.102444 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util" (OuterVolumeSpecName: "util") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:41.194695 master-2 kubenswrapper[4776]: I1011 10:47:41.194621 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.194964 master-2 kubenswrapper[4776]: I1011 10:47:41.194899 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.195631 master-2 kubenswrapper[4776]: I1011 10:47:41.195537 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.479861 master-2 kubenswrapper[4776]: I1011 10:47:41.479785 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7"} Oct 11 10:47:41.479861 master-2 kubenswrapper[4776]: I1011 10:47:41.479847 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7" Oct 11 10:47:41.480438 master-2 
kubenswrapper[4776]: I1011 10:47:41.479959 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"
Oct 11 10:47:41.484780 master-2 kubenswrapper[4776]: I1011 10:47:41.484131 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="3a4d994b947ef75b2bb0de7cf22273e6998d07e99740c2f7d03761a1aca1861b" exitCode=0
Oct 11 10:47:41.484780 master-2 kubenswrapper[4776]: I1011 10:47:41.484195 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"3a4d994b947ef75b2bb0de7cf22273e6998d07e99740c2f7d03761a1aca1861b"}
Oct 11 10:47:41.486443 master-2 kubenswrapper[4776]: I1011 10:47:41.486420 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="fbf19dd452fce4effe941a49a87f63dc6de0bf44c956630b2ea2b9f4396e7f36" exitCode=0
Oct 11 10:47:41.486507 master-2 kubenswrapper[4776]: I1011 10:47:41.486452 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"fbf19dd452fce4effe941a49a87f63dc6de0bf44c956630b2ea2b9f4396e7f36"}
Oct 11 10:47:41.722989 master-2 kubenswrapper[4776]: I1011 10:47:41.722941 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"]
Oct 11 10:47:41.723396 master-2 kubenswrapper[4776]: E1011 10:47:41.723383 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract"
Oct 11 10:47:41.723462 master-2 kubenswrapper[4776]: I1011 10:47:41.723452 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract"
Oct 11 10:47:41.723536 master-2 kubenswrapper[4776]: E1011 10:47:41.723524 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="util"
Oct 11 10:47:41.723607 master-2 kubenswrapper[4776]: I1011 10:47:41.723596 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="util"
Oct 11 10:47:41.723685 master-2 kubenswrapper[4776]: E1011 10:47:41.723661 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="pull"
Oct 11 10:47:41.723748 master-2 kubenswrapper[4776]: I1011 10:47:41.723739 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="pull"
Oct 11 10:47:41.723909 master-2 kubenswrapper[4776]: I1011 10:47:41.723898 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract"
Oct 11 10:47:41.725025 master-2 kubenswrapper[4776]: I1011 10:47:41.725005 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.744858 master-2 kubenswrapper[4776]: I1011 10:47:41.744646 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"]
Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805891 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805930 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.833728 master-0 kubenswrapper[4790]: I1011 10:47:41.833616 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body=
Oct 11 10:47:41.834514 master-0 kubenswrapper[4790]: I1011 10:47:41.833787 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused"
Oct 11 10:47:41.906977 master-2 kubenswrapper[4776]: I1011 10:47:41.906869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.906977 master-2 kubenswrapper[4776]: I1011 10:47:41.906935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.906977 master-2 kubenswrapper[4776]: I1011 10:47:41.906991 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.907649 master-2 kubenswrapper[4776]: I1011 10:47:41.907601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.907649 master-2 kubenswrapper[4776]: I1011 10:47:41.907615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:41.941894 master-2 kubenswrapper[4776]: I1011 10:47:41.941844 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:42.078265 master-2 kubenswrapper[4776]: I1011 10:47:42.078074 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"
Oct 11 10:47:42.538586 master-2 kubenswrapper[4776]: W1011 10:47:42.538526 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f83a3b5_333b_4284_b03d_c03db77c3241.slice/crio-1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c WatchSource:0}: Error finding container 1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c: Status 404 returned error can't find the container with id 1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c
Oct 11 10:47:42.542462 master-2 kubenswrapper[4776]: I1011 10:47:42.542416 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"]
Oct 11 10:47:42.852168 master-2 kubenswrapper[4776]: I1011 10:47:42.852135 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"
Oct 11 10:47:42.869668 master-2 kubenswrapper[4776]: I1011 10:47:42.869624 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"
Oct 11 10:47:43.022965 master-2 kubenswrapper[4776]: I1011 10:47:43.022928 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") "
Oct 11 10:47:43.023745 master-2 kubenswrapper[4776]: I1011 10:47:43.023728 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") "
Oct 11 10:47:43.024009 master-2 kubenswrapper[4776]: I1011 10:47:43.023995 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") "
Oct 11 10:47:43.024124 master-2 kubenswrapper[4776]: I1011 10:47:43.024111 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") "
Oct 11 10:47:43.024581 master-2 kubenswrapper[4776]: I1011 10:47:43.024568 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") "
Oct 11 10:47:43.024706 master-2 kubenswrapper[4776]: I1011 10:47:43.024692 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") "
Oct 11 10:47:43.025041 master-2 kubenswrapper[4776]: I1011 10:47:43.024995 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle" (OuterVolumeSpecName: "bundle") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:47:43.025563 master-2 kubenswrapper[4776]: I1011 10:47:43.025520 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle" (OuterVolumeSpecName: "bundle") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:47:43.027068 master-2 kubenswrapper[4776]: I1011 10:47:43.026981 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m" (OuterVolumeSpecName: "kube-api-access-mdf9m") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "kube-api-access-mdf9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:47:43.027220 master-2 kubenswrapper[4776]: I1011 10:47:43.027202 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z" (OuterVolumeSpecName: "kube-api-access-nzl5z") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "kube-api-access-nzl5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:47:43.033975 master-2 kubenswrapper[4776]: I1011 10:47:43.033947 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util" (OuterVolumeSpecName: "util") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:47:43.038225 master-2 kubenswrapper[4776]: I1011 10:47:43.037781 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util" (OuterVolumeSpecName: "util") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126225 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126445 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126943 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126960 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126974 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126983 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:43.503968 master-2 kubenswrapper[4776]: I1011 10:47:43.503912 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"
Oct 11 10:47:43.503968 master-2 kubenswrapper[4776]: I1011 10:47:43.503920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa"}
Oct 11 10:47:43.504228 master-2 kubenswrapper[4776]: I1011 10:47:43.503998 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa"
Oct 11 10:47:43.506070 master-2 kubenswrapper[4776]: I1011 10:47:43.506017 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="3e0e93dbf8c43d20e82487f3c320d83e0986abea8429ad92f58db09a7d7bc359" exitCode=0
Oct 11 10:47:43.506168 master-2 kubenswrapper[4776]: I1011 10:47:43.506129 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"3e0e93dbf8c43d20e82487f3c320d83e0986abea8429ad92f58db09a7d7bc359"}
Oct 11 10:47:43.506212 master-2 kubenswrapper[4776]: I1011 10:47:43.506175 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerStarted","Data":"1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c"}
Oct 11 10:47:43.512642 master-2 kubenswrapper[4776]: I1011 10:47:43.512589 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d"}
Oct 11 10:47:43.512642 master-2 kubenswrapper[4776]: I1011 10:47:43.512639 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d"
Oct 11 10:47:43.512908 master-2 kubenswrapper[4776]: I1011 10:47:43.512876 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"
Oct 11 10:47:44.530706 master-2 kubenswrapper[4776]: I1011 10:47:44.527222 4776 generic.go:334] "Generic (PLEG): container finished" podID="8757af56-20fb-439e-adba-7e4e50378936" containerID="25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae" exitCode=0
Oct 11 10:47:44.530706 master-2 kubenswrapper[4776]: I1011 10:47:44.527272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerDied","Data":"25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae"}
Oct 11 10:47:45.534717 master-2 kubenswrapper[4776]: I1011 10:47:45.534633 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="3527b856abee72f41d55546ce66e57c5fd282375bf2fd3d51da6cf6aa9ac8f13" exitCode=0
Oct 11 10:47:45.534717 master-2 kubenswrapper[4776]: I1011 10:47:45.534695 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"3527b856abee72f41d55546ce66e57c5fd282375bf2fd3d51da6cf6aa9ac8f13"}
Oct 11 10:47:45.609230 master-2 kubenswrapper[4776]: I1011 10:47:45.609189 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc"
Oct 11 10:47:45.660421 master-2 kubenswrapper[4776]: I1011 10:47:45.660379 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") "
Oct 11 10:47:45.660535 master-2 kubenswrapper[4776]: I1011 10:47:45.660513 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") "
Oct 11 10:47:45.660635 master-2 kubenswrapper[4776]: I1011 10:47:45.660590 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:45.660704 master-2 kubenswrapper[4776]: I1011 10:47:45.660639 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:45.660704 master-2 kubenswrapper[4776]: I1011 10:47:45.660610 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") "
Oct 11 10:47:45.660808 master-2 kubenswrapper[4776]: I1011 10:47:45.660763 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") "
Oct 11 10:47:45.660922 master-2 kubenswrapper[4776]: I1011 10:47:45.660882 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:45.661123 master-2 kubenswrapper[4776]: I1011 10:47:45.661095 4776 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:45.661123 master-2 kubenswrapper[4776]: I1011 10:47:45.661118 4776 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:45.661204 master-2 kubenswrapper[4776]: I1011 10:47:45.661130 4776 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:45.663494 master-2 kubenswrapper[4776]: I1011 10:47:45.663459 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw" (OuterVolumeSpecName: "kube-api-access-xpxlw") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "kube-api-access-xpxlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:47:45.761665 master-2 kubenswrapper[4776]: I1011 10:47:45.761595 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") on node \"master-2\" DevicePath \"\""
Oct 11 10:47:46.405046 master-0 kubenswrapper[4790]: I1011 10:47:46.404983 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/2.log"
Oct 11 10:47:46.405782 master-0 kubenswrapper[4790]: I1011 10:47:46.405412 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log"
Oct 11 10:47:46.406274 master-0 kubenswrapper[4790]: I1011 10:47:46.406240 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-rev/0.log"
Oct 11 10:47:46.407512 master-0 kubenswrapper[4790]: I1011 10:47:46.407473 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-metrics/0.log"
Oct 11 10:47:46.408047 master-0 kubenswrapper[4790]: I1011 10:47:46.408005 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcdctl/0.log"
Oct 11 10:47:46.409535 master-0 kubenswrapper[4790]: I1011 10:47:46.409508 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Oct 11 10:47:46.415805 master-0 kubenswrapper[4790]: I1011 10:47:46.415681 4790 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-0" oldPodUID="a7e53a8977ce5fc5588aef94f91dcc24" podUID="14286286be88b59efc7cfc15eca1cc38"
Oct 11 10:47:46.538822 master-0 kubenswrapper[4790]: I1011 10:47:46.538734 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.538859 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.538881 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.538941 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.538982 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.539015 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.539091 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.539129 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir" (OuterVolumeSpecName: "data-dir") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539167 master-0 kubenswrapper[4790]: I1011 10:47:46.539176 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir\") pod \"a7e53a8977ce5fc5588aef94f91dcc24\" (UID: \"a7e53a8977ce5fc5588aef94f91dcc24\") "
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539197 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539245 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir" (OuterVolumeSpecName: "log-dir") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539308 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "a7e53a8977ce5fc5588aef94f91dcc24" (UID: "a7e53a8977ce5fc5588aef94f91dcc24"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539529 4790 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539561 4790 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-cert-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539579 4790 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-resource-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539596 4790 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-data-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539615 4790 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-log-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.539807 master-0 kubenswrapper[4790]: I1011 10:47:46.539631 4790 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/a7e53a8977ce5fc5588aef94f91dcc24-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Oct 11 10:47:46.543129 master-2 kubenswrapper[4776]: I1011 10:47:46.543070 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerDied","Data":"a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f"}
Oct 11 10:47:46.543129 master-2 kubenswrapper[4776]: I1011 10:47:46.543102 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc"
Oct 11 10:47:46.543744 master-2 kubenswrapper[4776]: I1011 10:47:46.543115 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f"
Oct 11 10:47:46.544966 master-2 kubenswrapper[4776]: I1011 10:47:46.544936 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="533b32d657ced976c1f8ae60f6fde2e135c65e3a32a237e000363038aba1c179" exitCode=0
Oct 11 10:47:46.545043 master-2 kubenswrapper[4776]: I1011 10:47:46.544971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"533b32d657ced976c1f8ae60f6fde2e135c65e3a32a237e000363038aba1c179"}
Oct 11 10:47:46.741687 master-0 kubenswrapper[4790]: I1011 10:47:46.741502 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/2.log"
Oct 11 10:47:46.742393 master-0 kubenswrapper[4790]: I1011 10:47:46.742333 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd/1.log"
Oct 11 10:47:46.743449 master-0 kubenswrapper[4790]: I1011 10:47:46.743383 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-rev/0.log"
Oct 11 10:47:46.745067 master-0 kubenswrapper[4790]: I1011 10:47:46.745012 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcd-metrics/0.log"
Oct 11 10:47:46.745670 master-0 kubenswrapper[4790]: I1011 10:47:46.745620 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_a7e53a8977ce5fc5588aef94f91dcc24/etcdctl/0.log"
Oct 11 10:47:46.747232 master-0 kubenswrapper[4790]: I1011 10:47:46.747158 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca" exitCode=137
Oct 11 10:47:46.747232 master-0 kubenswrapper[4790]: I1011 10:47:46.747211 4790 generic.go:334] "Generic (PLEG): container finished" podID="a7e53a8977ce5fc5588aef94f91dcc24" containerID="d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" exitCode=137
Oct 11 10:47:46.747412 master-0 kubenswrapper[4790]: I1011 10:47:46.747282 4790 scope.go:117] "RemoveContainer" containerID="b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca"
Oct 11 10:47:46.747412 master-0 kubenswrapper[4790]: I1011 10:47:46.747315 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Oct 11 10:47:46.753440 master-1 kubenswrapper[4771]: I1011 10:47:46.753336 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 11 10:47:46.754349 master-1 kubenswrapper[4771]: I1011 10:47:46.753732 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" containerID="cri-o://d035b13d9431b1216e273c4ac7fb5eb87624d8740b70d29326082336302e3b46" gracePeriod=135
Oct 11 10:47:46.754349 master-1 kubenswrapper[4771]: I1011 10:47:46.753868 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7e5a3711f36461fe4ced62a6738267cdf151c6f22d750936a4256bced2e89c2a" gracePeriod=135
Oct 11 10:47:46.754349 master-1 kubenswrapper[4771]: I1011 10:47:46.753821 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a15e7539d2a0c42e8c6c8995bf98ff26ca0f322daf83394df48b4f13fc42d10b" gracePeriod=135
Oct 11 10:47:46.754349 master-1 kubenswrapper[4771]: I1011 10:47:46.753929 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://452189c1a156cff2357db3338f99f86d41c76ed0f97b4459672ad6a8fe0dc5c7" gracePeriod=135
Oct 11 10:47:46.754349 master-1 kubenswrapper[4771]: I1011 10:47:46.753866 4771 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://55ecf6fefa862d92619ce534057ad20c836371d13f4c0d70468214b0bd6e3db4" gracePeriod=135 Oct 11 10:47:46.754875 master-0 kubenswrapper[4790]: I1011 10:47:46.754752 4790 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-0" oldPodUID="a7e53a8977ce5fc5588aef94f91dcc24" podUID="14286286be88b59efc7cfc15eca1cc38" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.754807 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755117 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755138 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755162 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755175 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755189 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755201 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755222 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755234 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755268 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="setup" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755280 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="setup" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: E1011 10:47:46.755294 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 11 10:47:46.755473 master-1 kubenswrapper[4771]: I1011 10:47:46.755308 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 11 10:47:46.756001 master-1 kubenswrapper[4771]: I1011 10:47:46.755517 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 11 10:47:46.756001 master-1 kubenswrapper[4771]: I1011 10:47:46.755544 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 11 10:47:46.756001 master-1 kubenswrapper[4771]: I1011 10:47:46.755559 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" 
containerName="kube-apiserver-insecure-readyz" Oct 11 10:47:46.756001 master-1 kubenswrapper[4771]: I1011 10:47:46.755582 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 11 10:47:46.756001 master-1 kubenswrapper[4771]: I1011 10:47:46.755598 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:47:46.775305 master-0 kubenswrapper[4790]: I1011 10:47:46.775239 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:47:46.787266 master-0 kubenswrapper[4790]: I1011 10:47:46.787183 4790 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-0" oldPodUID="a7e53a8977ce5fc5588aef94f91dcc24" podUID="14286286be88b59efc7cfc15eca1cc38" Oct 11 10:47:46.811613 master-0 kubenswrapper[4790]: I1011 10:47:46.811482 4790 scope.go:117] "RemoveContainer" containerID="3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" Oct 11 10:47:46.833783 master-0 kubenswrapper[4790]: I1011 10:47:46.833044 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:46.833783 master-0 kubenswrapper[4790]: I1011 10:47:46.833134 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:46.842043 master-0 kubenswrapper[4790]: I1011 10:47:46.841961 4790 scope.go:117] "RemoveContainer" 
containerID="e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" Oct 11 10:47:46.858630 master-0 kubenswrapper[4790]: I1011 10:47:46.858540 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf"] Oct 11 10:47:46.859919 master-0 kubenswrapper[4790]: I1011 10:47:46.859872 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" Oct 11 10:47:46.864405 master-0 kubenswrapper[4790]: I1011 10:47:46.864335 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Oct 11 10:47:46.864683 master-0 kubenswrapper[4790]: I1011 10:47:46.864645 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Oct 11 10:47:46.865411 master-0 kubenswrapper[4790]: I1011 10:47:46.865363 4790 scope.go:117] "RemoveContainer" containerID="95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" Oct 11 10:47:46.877984 master-1 kubenswrapper[4771]: I1011 10:47:46.877845 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.877984 master-1 kubenswrapper[4771]: I1011 10:47:46.877926 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.877984 master-1 kubenswrapper[4771]: I1011 10:47:46.877979 
4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.883935 master-0 kubenswrapper[4790]: I1011 10:47:46.883880 4790 scope.go:117] "RemoveContainer" containerID="d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" Oct 11 10:47:46.899791 master-0 kubenswrapper[4790]: I1011 10:47:46.899665 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf"] Oct 11 10:47:46.907052 master-0 kubenswrapper[4790]: I1011 10:47:46.906599 4790 scope.go:117] "RemoveContainer" containerID="033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0" Oct 11 10:47:46.937853 master-0 kubenswrapper[4790]: I1011 10:47:46.937805 4790 scope.go:117] "RemoveContainer" containerID="57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f" Oct 11 10:47:46.976955 master-0 kubenswrapper[4790]: I1011 10:47:46.976747 4790 scope.go:117] "RemoveContainer" containerID="0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796" Oct 11 10:47:46.978699 master-1 kubenswrapper[4771]: I1011 10:47:46.978655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.978783 master-1 kubenswrapper[4771]: I1011 10:47:46.978702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: 
\"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.978783 master-1 kubenswrapper[4771]: I1011 10:47:46.978732 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.978913 master-1 kubenswrapper[4771]: I1011 10:47:46.978809 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.978913 master-1 kubenswrapper[4771]: I1011 10:47:46.978811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:46.978913 master-1 kubenswrapper[4771]: I1011 10:47:46.978887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:47:47.000341 master-0 kubenswrapper[4790]: I1011 10:47:47.000182 4790 scope.go:117] "RemoveContainer" containerID="b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca" Oct 11 10:47:47.001058 master-0 kubenswrapper[4790]: E1011 10:47:47.001009 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca\": container with ID starting with b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca not found: ID does not exist" containerID="b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca" Oct 11 10:47:47.001123 master-0 kubenswrapper[4790]: I1011 10:47:47.001078 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca"} err="failed to get container status \"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca\": rpc error: code = NotFound desc = could not find container \"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca\": container with ID starting with b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca not found: ID does not exist" Oct 11 10:47:47.001123 master-0 kubenswrapper[4790]: I1011 10:47:47.001118 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:47:47.001579 master-0 kubenswrapper[4790]: E1011 10:47:47.001556 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227\": container with ID starting with 31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227 not found: ID does not exist" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:47:47.001674 master-0 kubenswrapper[4790]: I1011 10:47:47.001655 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227"} err="failed to get container status \"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227\": rpc error: code = NotFound desc = could not find container 
\"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227\": container with ID starting with 31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227 not found: ID does not exist" Oct 11 10:47:47.001778 master-0 kubenswrapper[4790]: I1011 10:47:47.001765 4790 scope.go:117] "RemoveContainer" containerID="3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" Oct 11 10:47:47.002133 master-0 kubenswrapper[4790]: E1011 10:47:47.002117 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060\": container with ID starting with 3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060 not found: ID does not exist" containerID="3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" Oct 11 10:47:47.002217 master-0 kubenswrapper[4790]: I1011 10:47:47.002201 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060"} err="failed to get container status \"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060\": rpc error: code = NotFound desc = could not find container \"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060\": container with ID starting with 3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060 not found: ID does not exist" Oct 11 10:47:47.002282 master-0 kubenswrapper[4790]: I1011 10:47:47.002272 4790 scope.go:117] "RemoveContainer" containerID="e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" Oct 11 10:47:47.003016 master-0 kubenswrapper[4790]: E1011 10:47:47.003000 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef\": container with ID starting with 
e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef not found: ID does not exist" containerID="e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" Oct 11 10:47:47.003106 master-0 kubenswrapper[4790]: I1011 10:47:47.003088 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef"} err="failed to get container status \"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef\": rpc error: code = NotFound desc = could not find container \"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef\": container with ID starting with e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef not found: ID does not exist" Oct 11 10:47:47.003172 master-0 kubenswrapper[4790]: I1011 10:47:47.003158 4790 scope.go:117] "RemoveContainer" containerID="95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" Oct 11 10:47:47.003600 master-0 kubenswrapper[4790]: E1011 10:47:47.003585 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377\": container with ID starting with 95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377 not found: ID does not exist" containerID="95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" Oct 11 10:47:47.003797 master-0 kubenswrapper[4790]: I1011 10:47:47.003780 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377"} err="failed to get container status \"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377\": rpc error: code = NotFound desc = could not find container \"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377\": container with ID starting with 
95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377 not found: ID does not exist" Oct 11 10:47:47.003864 master-0 kubenswrapper[4790]: I1011 10:47:47.003852 4790 scope.go:117] "RemoveContainer" containerID="d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" Oct 11 10:47:47.004387 master-0 kubenswrapper[4790]: E1011 10:47:47.004345 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970\": container with ID starting with d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970 not found: ID does not exist" containerID="d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" Oct 11 10:47:47.004472 master-0 kubenswrapper[4790]: I1011 10:47:47.004454 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970"} err="failed to get container status \"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970\": rpc error: code = NotFound desc = could not find container \"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970\": container with ID starting with d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970 not found: ID does not exist" Oct 11 10:47:47.004535 master-0 kubenswrapper[4790]: I1011 10:47:47.004524 4790 scope.go:117] "RemoveContainer" containerID="033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0" Oct 11 10:47:47.004851 master-0 kubenswrapper[4790]: E1011 10:47:47.004836 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0\": container with ID starting with 033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0 not found: ID does not exist" 
containerID="033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0" Oct 11 10:47:47.005032 master-0 kubenswrapper[4790]: I1011 10:47:47.005014 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0"} err="failed to get container status \"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0\": rpc error: code = NotFound desc = could not find container \"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0\": container with ID starting with 033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0 not found: ID does not exist" Oct 11 10:47:47.005102 master-0 kubenswrapper[4790]: I1011 10:47:47.005092 4790 scope.go:117] "RemoveContainer" containerID="57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f" Oct 11 10:47:47.005505 master-0 kubenswrapper[4790]: E1011 10:47:47.005491 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f\": container with ID starting with 57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f not found: ID does not exist" containerID="57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f" Oct 11 10:47:47.005591 master-0 kubenswrapper[4790]: I1011 10:47:47.005572 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f"} err="failed to get container status \"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f\": rpc error: code = NotFound desc = could not find container \"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f\": container with ID starting with 57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f not found: ID does not exist" Oct 11 10:47:47.005659 master-0 
kubenswrapper[4790]: I1011 10:47:47.005645 4790 scope.go:117] "RemoveContainer" containerID="0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796" Oct 11 10:47:47.006063 master-0 kubenswrapper[4790]: E1011 10:47:47.006043 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796\": container with ID starting with 0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796 not found: ID does not exist" containerID="0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796" Oct 11 10:47:47.006151 master-0 kubenswrapper[4790]: I1011 10:47:47.006135 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796"} err="failed to get container status \"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796\": rpc error: code = NotFound desc = could not find container \"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796\": container with ID starting with 0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796 not found: ID does not exist" Oct 11 10:47:47.006216 master-0 kubenswrapper[4790]: I1011 10:47:47.006206 4790 scope.go:117] "RemoveContainer" containerID="b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca" Oct 11 10:47:47.006547 master-0 kubenswrapper[4790]: I1011 10:47:47.006531 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca"} err="failed to get container status \"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca\": rpc error: code = NotFound desc = could not find container \"b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca\": container with ID starting with 
b73aa8db7e53e83a1a9430aab15d1a11f28139414df55dce80c436d48922f6ca not found: ID does not exist" Oct 11 10:47:47.006658 master-0 kubenswrapper[4790]: I1011 10:47:47.006644 4790 scope.go:117] "RemoveContainer" containerID="31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227" Oct 11 10:47:47.008651 master-0 kubenswrapper[4790]: I1011 10:47:47.008634 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227"} err="failed to get container status \"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227\": rpc error: code = NotFound desc = could not find container \"31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227\": container with ID starting with 31fd9a1eb8f7baffc7dd0dc0c3438ec3345e4e70ffaf5fbc351f82b6c2165227 not found: ID does not exist" Oct 11 10:47:47.008753 master-0 kubenswrapper[4790]: I1011 10:47:47.008742 4790 scope.go:117] "RemoveContainer" containerID="3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060" Oct 11 10:47:47.009202 master-0 kubenswrapper[4790]: I1011 10:47:47.009147 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060"} err="failed to get container status \"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060\": rpc error: code = NotFound desc = could not find container \"3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060\": container with ID starting with 3581fc03c1130dc2dfacc8a0d155544ce350f22134d68a0cd6e79314b24ed060 not found: ID does not exist" Oct 11 10:47:47.009282 master-0 kubenswrapper[4790]: I1011 10:47:47.009270 4790 scope.go:117] "RemoveContainer" containerID="e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef" Oct 11 10:47:47.009655 master-0 kubenswrapper[4790]: I1011 10:47:47.009638 4790 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef"} err="failed to get container status \"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef\": rpc error: code = NotFound desc = could not find container \"e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef\": container with ID starting with e4fe8f20eab6d0f57dd7ff0348d7d79afb0c0c3e2753d58dd10a19dafe7b60ef not found: ID does not exist" Oct 11 10:47:47.009782 master-0 kubenswrapper[4790]: I1011 10:47:47.009762 4790 scope.go:117] "RemoveContainer" containerID="95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377" Oct 11 10:47:47.010236 master-0 kubenswrapper[4790]: I1011 10:47:47.010218 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377"} err="failed to get container status \"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377\": rpc error: code = NotFound desc = could not find container \"95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377\": container with ID starting with 95fc9c6d467ef4be49d4e412e32d764ddb12e42ab41ba14dd9259747220b1377 not found: ID does not exist" Oct 11 10:47:47.010369 master-0 kubenswrapper[4790]: I1011 10:47:47.010356 4790 scope.go:117] "RemoveContainer" containerID="d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970" Oct 11 10:47:47.010806 master-0 kubenswrapper[4790]: I1011 10:47:47.010780 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970"} err="failed to get container status \"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970\": rpc error: code = NotFound desc = could not find container \"d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970\": container with ID starting with 
d5626e357f9cecab4c479d110f8d9828e354694d4906d68734ca04db0e170970 not found: ID does not exist" Oct 11 10:47:47.010923 master-0 kubenswrapper[4790]: I1011 10:47:47.010877 4790 scope.go:117] "RemoveContainer" containerID="033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0" Oct 11 10:47:47.011311 master-0 kubenswrapper[4790]: I1011 10:47:47.011290 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0"} err="failed to get container status \"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0\": rpc error: code = NotFound desc = could not find container \"033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0\": container with ID starting with 033430f1d3826d181c0b75fb8343feb31c60c650260f0caebe6a290f1c9deca0 not found: ID does not exist" Oct 11 10:47:47.011448 master-0 kubenswrapper[4790]: I1011 10:47:47.011426 4790 scope.go:117] "RemoveContainer" containerID="57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f" Oct 11 10:47:47.012061 master-0 kubenswrapper[4790]: I1011 10:47:47.012038 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f"} err="failed to get container status \"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f\": rpc error: code = NotFound desc = could not find container \"57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f\": container with ID starting with 57c32f6a37a34d84078aadb245675fb641cce16a04adbde4e12fbce6a920df2f not found: ID does not exist" Oct 11 10:47:47.012169 master-0 kubenswrapper[4790]: I1011 10:47:47.012152 4790 scope.go:117] "RemoveContainer" containerID="0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796" Oct 11 10:47:47.012812 master-0 kubenswrapper[4790]: I1011 10:47:47.012793 4790 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796"} err="failed to get container status \"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796\": rpc error: code = NotFound desc = could not find container \"0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796\": container with ID starting with 0b424e04233179f6773559ee329a6da085bcdbf1e83437befa1c0cee6d727796 not found: ID does not exist" Oct 11 10:47:47.044767 master-0 kubenswrapper[4790]: I1011 10:47:47.044667 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zzd\" (UniqueName: \"kubernetes.io/projected/2086cc9e-bd35-4e52-94aa-25d3e140537f-kube-api-access-26zzd\") pod \"cert-manager-operator-controller-manager-57cd46d6d-2chzf\" (UID: \"2086cc9e-bd35-4e52-94aa-25d3e140537f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" Oct 11 10:47:47.148371 master-0 kubenswrapper[4790]: I1011 10:47:47.146857 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zzd\" (UniqueName: \"kubernetes.io/projected/2086cc9e-bd35-4e52-94aa-25d3e140537f-kube-api-access-26zzd\") pod \"cert-manager-operator-controller-manager-57cd46d6d-2chzf\" (UID: \"2086cc9e-bd35-4e52-94aa-25d3e140537f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" Oct 11 10:47:47.175662 master-0 kubenswrapper[4790]: I1011 10:47:47.175594 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zzd\" (UniqueName: \"kubernetes.io/projected/2086cc9e-bd35-4e52-94aa-25d3e140537f-kube-api-access-26zzd\") pod \"cert-manager-operator-controller-manager-57cd46d6d-2chzf\" (UID: \"2086cc9e-bd35-4e52-94aa-25d3e140537f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" Oct 11 10:47:47.184121 master-0 
kubenswrapper[4790]: I1011 10:47:47.184069 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" Oct 11 10:47:47.262629 master-1 kubenswrapper[4771]: I1011 10:47:47.262531 4771 generic.go:334] "Generic (PLEG): container finished" podID="75f73eff-98a5-47a6-b15c-2338930444b9" containerID="2a911b9063702410de3ab33cdd411750a6c1735b8b89a04533d1fce1ca985ed4" exitCode=0 Oct 11 10:47:47.262629 master-1 kubenswrapper[4771]: I1011 10:47:47.262596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"75f73eff-98a5-47a6-b15c-2338930444b9","Type":"ContainerDied","Data":"2a911b9063702410de3ab33cdd411750a6c1735b8b89a04533d1fce1ca985ed4"} Oct 11 10:47:47.266892 master-1 kubenswrapper[4771]: I1011 10:47:47.266833 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 11 10:47:47.268082 master-1 kubenswrapper[4771]: I1011 10:47:47.268023 4771 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="a15e7539d2a0c42e8c6c8995bf98ff26ca0f322daf83394df48b4f13fc42d10b" exitCode=0 Oct 11 10:47:47.268082 master-1 kubenswrapper[4771]: I1011 10:47:47.268071 4771 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="452189c1a156cff2357db3338f99f86d41c76ed0f97b4459672ad6a8fe0dc5c7" exitCode=0 Oct 11 10:47:47.268323 master-1 kubenswrapper[4771]: I1011 10:47:47.268087 4771 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="55ecf6fefa862d92619ce534057ad20c836371d13f4c0d70468214b0bd6e3db4" exitCode=0 Oct 11 10:47:47.268323 master-1 kubenswrapper[4771]: I1011 10:47:47.268102 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="42d61efaa0f96869cf2939026aad6022" containerID="7e5a3711f36461fe4ced62a6738267cdf151c6f22d750936a4256bced2e89c2a" exitCode=2 Oct 11 10:47:47.294504 master-1 kubenswrapper[4771]: I1011 10:47:47.294198 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 11 10:47:47.593327 master-0 kubenswrapper[4790]: I1011 10:47:47.593230 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf"] Oct 11 10:47:47.602957 master-0 kubenswrapper[4790]: W1011 10:47:47.602873 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2086cc9e_bd35_4e52_94aa_25d3e140537f.slice/crio-b79f15327cd61b03ec5d22756d0f8cccda1ea150b36305e1cd7b4f98efd84d79 WatchSource:0}: Error finding container b79f15327cd61b03ec5d22756d0f8cccda1ea150b36305e1cd7b4f98efd84d79: Status 404 returned error can't find the container with id b79f15327cd61b03ec5d22756d0f8cccda1ea150b36305e1cd7b4f98efd84d79 Oct 11 10:47:47.757453 master-0 kubenswrapper[4790]: I1011 10:47:47.757371 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" event={"ID":"2086cc9e-bd35-4e52-94aa-25d3e140537f","Type":"ContainerStarted","Data":"b79f15327cd61b03ec5d22756d0f8cccda1ea150b36305e1cd7b4f98efd84d79"} Oct 11 10:47:47.871003 master-2 kubenswrapper[4776]: I1011 10:47:47.870966 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:47.993170 master-2 kubenswrapper[4776]: I1011 10:47:47.993113 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.993376 master-2 kubenswrapper[4776]: I1011 10:47:47.993244 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.993376 master-2 kubenswrapper[4776]: I1011 10:47:47.993312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.995694 master-2 kubenswrapper[4776]: I1011 10:47:47.995137 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle" (OuterVolumeSpecName: "bundle") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:47.998801 master-2 kubenswrapper[4776]: I1011 10:47:47.996150 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx" (OuterVolumeSpecName: "kube-api-access-gp5xx") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). 
InnerVolumeSpecName "kube-api-access-gp5xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:48.007382 master-2 kubenswrapper[4776]: I1011 10:47:48.007335 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util" (OuterVolumeSpecName: "util") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094700 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094744 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094755 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: I1011 10:47:48.247036 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 
10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:47:48.247110 
master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:47:48.247110 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:47:48.250178 master-1 kubenswrapper[4771]: I1011 10:47:48.247123 4771 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:47:48.306687 master-0 kubenswrapper[4790]: I1011 10:47:48.306603 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e53a8977ce5fc5588aef94f91dcc24" path="/var/lib/kubelet/pods/a7e53a8977ce5fc5588aef94f91dcc24/volumes" Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557050 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c"} Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557104 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c" Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557125 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:48.675727 master-1 kubenswrapper[4771]: I1011 10:47:48.675693 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:48.806870 master-1 kubenswrapper[4771]: I1011 10:47:48.806772 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access\") pod \"75f73eff-98a5-47a6-b15c-2338930444b9\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " Oct 11 10:47:48.806870 master-1 kubenswrapper[4771]: I1011 10:47:48.806867 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock\") pod \"75f73eff-98a5-47a6-b15c-2338930444b9\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " Oct 11 10:47:48.807169 master-1 kubenswrapper[4771]: I1011 10:47:48.806982 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir\") pod \"75f73eff-98a5-47a6-b15c-2338930444b9\" (UID: \"75f73eff-98a5-47a6-b15c-2338930444b9\") " Oct 11 10:47:48.807231 master-1 kubenswrapper[4771]: I1011 10:47:48.807147 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "75f73eff-98a5-47a6-b15c-2338930444b9" (UID: "75f73eff-98a5-47a6-b15c-2338930444b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:48.807309 master-1 kubenswrapper[4771]: I1011 10:47:48.807209 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75f73eff-98a5-47a6-b15c-2338930444b9" (UID: "75f73eff-98a5-47a6-b15c-2338930444b9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:48.807679 master-1 kubenswrapper[4771]: I1011 10:47:48.807634 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:47:48.807679 master-1 kubenswrapper[4771]: I1011 10:47:48.807671 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75f73eff-98a5-47a6-b15c-2338930444b9-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 11 10:47:48.812741 master-1 kubenswrapper[4771]: I1011 10:47:48.812685 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75f73eff-98a5-47a6-b15c-2338930444b9" (UID: "75f73eff-98a5-47a6-b15c-2338930444b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:48.909412 master-1 kubenswrapper[4771]: I1011 10:47:48.909245 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75f73eff-98a5-47a6-b15c-2338930444b9-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:47:49.286189 master-1 kubenswrapper[4771]: I1011 10:47:49.286045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"75f73eff-98a5-47a6-b15c-2338930444b9","Type":"ContainerDied","Data":"ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5"} Oct 11 10:47:49.287466 master-1 kubenswrapper[4771]: I1011 10:47:49.287424 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddde6b9d12870f560724e081c5814d541ef4de69b4025dd3e9fe28d6514d1fa5" Oct 11 10:47:49.287665 master-1 kubenswrapper[4771]: I1011 10:47:49.286209 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 11 10:47:51.833137 master-0 kubenswrapper[4790]: I1011 10:47:51.833062 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:51.833830 master-0 kubenswrapper[4790]: I1011 10:47:51.833253 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: I1011 10:47:53.246229 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:47:53.246322 master-1 
kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:47:53.246322 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:47:53.249810 master-1 kubenswrapper[4771]: I1011 10:47:53.246331 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:47:53.797669 master-0 kubenswrapper[4790]: I1011 10:47:53.797578 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" event={"ID":"2086cc9e-bd35-4e52-94aa-25d3e140537f","Type":"ContainerStarted","Data":"75da7807698c214f672031efcf5a3b337d41563ad51027bb40f51470996ac593"} Oct 11 10:47:53.833125 
master-0 kubenswrapper[4790]: I1011 10:47:53.832981 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-2chzf" podStartSLOduration=2.36817931 podStartE2EDuration="7.832948416s" podCreationTimestamp="2025-10-11 10:47:46 +0000 UTC" firstStartedPulling="2025-10-11 10:47:47.607412444 +0000 UTC m=+544.161872736" lastFinishedPulling="2025-10-11 10:47:53.07218155 +0000 UTC m=+549.626641842" observedRunningTime="2025-10-11 10:47:53.830754188 +0000 UTC m=+550.385214500" watchObservedRunningTime="2025-10-11 10:47:53.832948416 +0000 UTC m=+550.387408738" Oct 11 10:47:55.881968 master-0 kubenswrapper[4790]: I1011 10:47:55.881921 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-nb76r"] Oct 11 10:47:55.883294 master-0 kubenswrapper[4790]: I1011 10:47:55.883276 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:55.888585 master-0 kubenswrapper[4790]: I1011 10:47:55.888535 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 11 10:47:55.889111 master-0 kubenswrapper[4790]: I1011 10:47:55.889078 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 11 10:47:55.916982 master-0 kubenswrapper[4790]: I1011 10:47:55.916918 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-nb76r"] Oct 11 10:47:55.960735 master-0 kubenswrapper[4790]: I1011 10:47:55.959767 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-bound-sa-token\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " 
pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:55.960735 master-0 kubenswrapper[4790]: I1011 10:47:55.959849 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcfk\" (UniqueName: \"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-kube-api-access-qfcfk\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.060751 master-0 kubenswrapper[4790]: I1011 10:47:56.060664 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfcfk\" (UniqueName: \"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-kube-api-access-qfcfk\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.061099 master-0 kubenswrapper[4790]: I1011 10:47:56.060806 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-bound-sa-token\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.082082 master-0 kubenswrapper[4790]: I1011 10:47:56.082030 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-bound-sa-token\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.091111 master-0 kubenswrapper[4790]: I1011 10:47:56.091034 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfcfk\" (UniqueName: 
\"kubernetes.io/projected/9abe9cfa-95f8-4a08-bbc2-27776956894d-kube-api-access-qfcfk\") pod \"cert-manager-webhook-d969966f-nb76r\" (UID: \"9abe9cfa-95f8-4a08-bbc2-27776956894d\") " pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.207319 master-0 kubenswrapper[4790]: I1011 10:47:56.207134 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:47:56.292816 master-0 kubenswrapper[4790]: I1011 10:47:56.292340 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Oct 11 10:47:56.320580 master-0 kubenswrapper[4790]: I1011 10:47:56.320492 4790 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a13f92a6-018a-40b2-bc65-890f74a263cf" Oct 11 10:47:56.320580 master-0 kubenswrapper[4790]: I1011 10:47:56.320563 4790 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a13f92a6-018a-40b2-bc65-890f74a263cf" Oct 11 10:47:56.354304 master-0 kubenswrapper[4790]: I1011 10:47:56.349102 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Oct 11 10:47:56.354304 master-0 kubenswrapper[4790]: I1011 10:47:56.353388 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Oct 11 10:47:56.354304 master-0 kubenswrapper[4790]: I1011 10:47:56.353541 4790 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Oct 11 10:47:56.379819 master-0 kubenswrapper[4790]: I1011 10:47:56.379673 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Oct 11 10:47:56.385361 master-0 kubenswrapper[4790]: I1011 10:47:56.385309 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Oct 11 10:47:56.678895 master-0 kubenswrapper[4790]: I1011 10:47:56.678826 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-nb76r"] Oct 11 10:47:56.740201 master-0 kubenswrapper[4790]: W1011 10:47:56.740130 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9abe9cfa_95f8_4a08_bbc2_27776956894d.slice/crio-059a63b44d0423b9932a13d24a6332d5ff6d5c9943bc0816c4feec4b89eabdf4 WatchSource:0}: Error finding container 059a63b44d0423b9932a13d24a6332d5ff6d5c9943bc0816c4feec4b89eabdf4: Status 404 returned error can't find the container with id 059a63b44d0423b9932a13d24a6332d5ff6d5c9943bc0816c4feec4b89eabdf4 Oct 11 10:47:56.823281 master-0 kubenswrapper[4790]: I1011 10:47:56.823235 4790 generic.go:334] "Generic (PLEG): container finished" podID="14286286be88b59efc7cfc15eca1cc38" containerID="bfadd2755eb7320911873101cfc631f1a704f65f1ecce019279ff9bc67ece8e4" exitCode=0 Oct 11 10:47:56.823480 master-0 kubenswrapper[4790]: I1011 10:47:56.823310 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerDied","Data":"bfadd2755eb7320911873101cfc631f1a704f65f1ecce019279ff9bc67ece8e4"} Oct 11 10:47:56.823480 master-0 kubenswrapper[4790]: I1011 10:47:56.823342 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"e78b164ae9dda95edce0f09c5efcd61213f9f83e4ab58beb48285be2e9c46bac"} Oct 11 10:47:56.824577 master-0 kubenswrapper[4790]: I1011 10:47:56.824537 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-d969966f-nb76r" event={"ID":"9abe9cfa-95f8-4a08-bbc2-27776956894d","Type":"ContainerStarted","Data":"059a63b44d0423b9932a13d24a6332d5ff6d5c9943bc0816c4feec4b89eabdf4"} Oct 11 10:47:56.834182 master-0 kubenswrapper[4790]: I1011 10:47:56.834135 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" start-of-body= Oct 11 10:47:56.834331 master-0 kubenswrapper[4790]: I1011 10:47:56.834191 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": dial tcp 192.168.34.10:9980: connect: connection refused" Oct 11 10:47:57.832109 master-0 kubenswrapper[4790]: I1011 10:47:57.831984 4790 generic.go:334] "Generic (PLEG): container finished" podID="14286286be88b59efc7cfc15eca1cc38" containerID="7d63b1afde70e72ad60f45f2155003b889c6d6ab5be70efe5f737a384950ad05" exitCode=0 Oct 11 10:47:57.832109 master-0 kubenswrapper[4790]: I1011 10:47:57.832044 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerDied","Data":"7d63b1afde70e72ad60f45f2155003b889c6d6ab5be70efe5f737a384950ad05"} Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: I1011 10:47:58.244545 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: 
[+]api-openshift-oauth-apiserver-available ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 
10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:47:58.244614 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:47:58.246871 master-1 
kubenswrapper[4771]: I1011 10:47:58.246816 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:47:58.247095 master-1 kubenswrapper[4771]: I1011 10:47:58.247077 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: I1011 10:47:58.251975 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:47:58.252000 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:47:58.253485 master-1 kubenswrapper[4771]: I1011 10:47:58.253461 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:47:58.839749 master-0 kubenswrapper[4790]: I1011 10:47:58.839611 4790 generic.go:334] "Generic (PLEG): container finished" podID="14286286be88b59efc7cfc15eca1cc38" containerID="6591437e1f6567863066369b6d16e7e64b625afe8e9aac3f31ee299e2668dd5c" exitCode=0 Oct 11 10:47:58.839749 master-0 kubenswrapper[4790]: I1011 10:47:58.839681 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerDied","Data":"6591437e1f6567863066369b6d16e7e64b625afe8e9aac3f31ee299e2668dd5c"} Oct 11 10:47:58.859926 master-0 
kubenswrapper[4790]: I1011 10:47:58.857811 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w"] Oct 11 10:47:58.859926 master-0 kubenswrapper[4790]: I1011 10:47:58.858395 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:58.880742 master-0 kubenswrapper[4790]: I1011 10:47:58.879091 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w"] Oct 11 10:47:58.905863 master-0 kubenswrapper[4790]: I1011 10:47:58.905813 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:58.906017 master-0 kubenswrapper[4790]: I1011 10:47:58.905878 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48wx\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-kube-api-access-g48wx\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.007186 master-0 kubenswrapper[4790]: I1011 10:47:59.006591 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.007186 master-0 kubenswrapper[4790]: I1011 10:47:59.006668 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g48wx\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-kube-api-access-g48wx\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.043371 master-0 kubenswrapper[4790]: I1011 10:47:59.032718 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.043371 master-0 kubenswrapper[4790]: I1011 10:47:59.037244 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g48wx\" (UniqueName: \"kubernetes.io/projected/74561c36-ea2e-4209-9253-b6a58d832f5f-kube-api-access-g48wx\") pod \"cert-manager-cainjector-7d9f95dbf-grj2w\" (UID: \"74561c36-ea2e-4209-9253-b6a58d832f5f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.199804 master-0 kubenswrapper[4790]: I1011 10:47:59.199733 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" Oct 11 10:47:59.662730 master-0 kubenswrapper[4790]: I1011 10:47:59.659065 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w"] Oct 11 10:47:59.848861 master-0 kubenswrapper[4790]: I1011 10:47:59.848763 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"0dd6722942b406e54c78f2254c3ac8a43586d9102f113302b3c46811ed8a2fd7"} Oct 11 10:47:59.849403 master-0 kubenswrapper[4790]: I1011 10:47:59.848865 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"2f7bcc72fad76121492b63caeffaf1694217e2478cc44d558ae9c6beb845205e"} Oct 11 10:47:59.849403 master-0 kubenswrapper[4790]: I1011 10:47:59.848897 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"f9192c343b8e534587ea6333b21771e1b2fc25d380603ab9c4b5eaf439343cdb"} Oct 11 10:47:59.850457 master-0 kubenswrapper[4790]: I1011 10:47:59.850395 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" event={"ID":"74561c36-ea2e-4209-9253-b6a58d832f5f","Type":"ContainerStarted","Data":"7ce06916680bfdbb808cc899797e575ec7eb561ea7b8bc94474076f5854532c7"} Oct 11 10:48:00.859838 master-0 kubenswrapper[4790]: I1011 10:48:00.859769 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"c1eb6340e8713b2b5442e1c58163b72b76e3b53b381610ba551f21765bb9d626"} Oct 11 10:48:01.870048 master-0 kubenswrapper[4790]: I1011 10:48:01.869974 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" event={"ID":"74561c36-ea2e-4209-9253-b6a58d832f5f","Type":"ContainerStarted","Data":"5d0a40cf1b74d36b15a69de30b09bf0bcf44c1620f955d762c5f522e94d65820"} Oct 11 10:48:01.872122 master-0 kubenswrapper[4790]: I1011 10:48:01.872078 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-nb76r" event={"ID":"9abe9cfa-95f8-4a08-bbc2-27776956894d","Type":"ContainerStarted","Data":"66674512fe4e633d963250cbabb536c6494725fe376372d0b6e06c069ecf34b0"} Oct 11 10:48:01.872312 master-0 kubenswrapper[4790]: I1011 10:48:01.872279 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-d969966f-nb76r" Oct 11 10:48:01.877681 master-0 kubenswrapper[4790]: I1011 10:48:01.877626 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"14286286be88b59efc7cfc15eca1cc38","Type":"ContainerStarted","Data":"b5344628af1fb60355807e51149b37a4033b5ad835d2decefc545134b802a8db"} Oct 11 10:48:01.962957 master-0 kubenswrapper[4790]: I1011 10:48:01.962847 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w" podStartSLOduration=2.720260952 podStartE2EDuration="3.962823827s" podCreationTimestamp="2025-10-11 10:47:58 +0000 UTC" firstStartedPulling="2025-10-11 10:47:59.678222274 +0000 UTC m=+556.232682566" lastFinishedPulling="2025-10-11 10:48:00.920785149 +0000 UTC m=+557.475245441" observedRunningTime="2025-10-11 10:48:01.893798011 +0000 UTC m=+558.448258373" watchObservedRunningTime="2025-10-11 10:48:01.962823827 +0000 UTC m=+558.517284119" Oct 11 10:48:01.963340 master-0 kubenswrapper[4790]: I1011 10:48:01.963002 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=5.962998671 podStartE2EDuration="5.962998671s" podCreationTimestamp="2025-10-11 10:47:56 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:48:01.958371586 +0000 UTC m=+558.512831908" watchObservedRunningTime="2025-10-11 10:48:01.962998671 +0000 UTC m=+558.517458963" Oct 11 10:48:01.989624 master-0 kubenswrapper[4790]: I1011 10:48:01.989519 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-d969966f-nb76r" podStartSLOduration=2.810983699 podStartE2EDuration="6.989495277s" podCreationTimestamp="2025-10-11 10:47:55 +0000 UTC" firstStartedPulling="2025-10-11 10:47:56.743027011 +0000 UTC m=+553.297487303" lastFinishedPulling="2025-10-11 10:48:00.921538579 +0000 UTC m=+557.475998881" observedRunningTime="2025-10-11 10:48:01.985858649 +0000 UTC m=+558.540318961" watchObservedRunningTime="2025-10-11 10:48:01.989495277 +0000 UTC m=+558.543955569" Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: I1011 10:48:03.252140 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 
11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:03.252211 master-1 
kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:03.252211 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:03.254558 master-1 kubenswrapper[4771]: I1011 10:48:03.252251 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:03.725816 master-0 kubenswrapper[4790]: I1011 10:48:03.725696 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"] Oct 11 10:48:03.726605 master-0 kubenswrapper[4790]: I1011 10:48:03.726456 4790 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:03.729057 master-0 kubenswrapper[4790]: I1011 10:48:03.729015 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 11 10:48:03.729131 master-0 kubenswrapper[4790]: I1011 10:48:03.729026 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 11 10:48:03.729240 master-0 kubenswrapper[4790]: I1011 10:48:03.729207 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 11 10:48:03.730632 master-0 kubenswrapper[4790]: I1011 10:48:03.730301 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 11 10:48:03.749623 master-0 kubenswrapper[4790]: I1011 10:48:03.749565 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"] Oct 11 10:48:03.902777 master-0 kubenswrapper[4790]: I1011 10:48:03.902686 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-webhook-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:03.902777 master-0 kubenswrapper[4790]: I1011 10:48:03.902774 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsk8\" (UniqueName: \"kubernetes.io/projected/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-kube-api-access-xxsk8\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") 
" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:03.903036 master-0 kubenswrapper[4790]: I1011 10:48:03.902799 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-apiservice-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:04.004810 master-0 kubenswrapper[4790]: I1011 10:48:04.004640 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-webhook-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:04.004810 master-0 kubenswrapper[4790]: I1011 10:48:04.004764 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxsk8\" (UniqueName: \"kubernetes.io/projected/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-kube-api-access-xxsk8\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:04.004810 master-0 kubenswrapper[4790]: I1011 10:48:04.004802 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-apiservice-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:04.008397 master-0 kubenswrapper[4790]: I1011 10:48:04.008343 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-webhook-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"
Oct 11 10:48:04.009033 master-0 kubenswrapper[4790]: I1011 10:48:04.009001 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-apiservice-cert\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"
Oct 11 10:48:04.035276 master-0 kubenswrapper[4790]: I1011 10:48:04.035207 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxsk8\" (UniqueName: \"kubernetes.io/projected/01ae1cda-0c92-4f86-bff5-90e6cbb3881e-kube-api-access-xxsk8\") pod \"metallb-operator-controller-manager-56b566d9f-hppvq\" (UID: \"01ae1cda-0c92-4f86-bff5-90e6cbb3881e\") " pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"
Oct 11 10:48:04.040015 master-0 kubenswrapper[4790]: I1011 10:48:04.039960 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"
Oct 11 10:48:04.363083 master-0 kubenswrapper[4790]: I1011 10:48:04.363004 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"]
Oct 11 10:48:04.363877 master-0 kubenswrapper[4790]: I1011 10:48:04.363785 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.366558 master-0 kubenswrapper[4790]: I1011 10:48:04.366397 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 11 10:48:04.368797 master-0 kubenswrapper[4790]: I1011 10:48:04.366666 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Oct 11 10:48:04.429627 master-0 kubenswrapper[4790]: I1011 10:48:04.385534 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"]
Oct 11 10:48:04.493793 master-0 kubenswrapper[4790]: I1011 10:48:04.493062 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"]
Oct 11 10:48:04.495269 master-0 kubenswrapper[4790]: W1011 10:48:04.495220 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ae1cda_0c92_4f86_bff5_90e6cbb3881e.slice/crio-4867139461cbffe57276c5a4e6bd53f3c3fb5ab7002db203271fe065ba453dfb WatchSource:0}: Error finding container 4867139461cbffe57276c5a4e6bd53f3c3fb5ab7002db203271fe065ba453dfb: Status 404 returned error can't find the container with id 4867139461cbffe57276c5a4e6bd53f3c3fb5ab7002db203271fe065ba453dfb
Oct 11 10:48:04.531601 master-0 kubenswrapper[4790]: I1011 10:48:04.531550 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-apiservice-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.531851 master-0 kubenswrapper[4790]: I1011 10:48:04.531615 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5df\" (UniqueName: \"kubernetes.io/projected/3d4bad0b-955f-4d0e-8849-8257c50682cb-kube-api-access-8v5df\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.531851 master-0 kubenswrapper[4790]: I1011 10:48:04.531644 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-webhook-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.633494 master-0 kubenswrapper[4790]: I1011 10:48:04.633344 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v5df\" (UniqueName: \"kubernetes.io/projected/3d4bad0b-955f-4d0e-8849-8257c50682cb-kube-api-access-8v5df\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.633494 master-0 kubenswrapper[4790]: I1011 10:48:04.633424 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-webhook-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.633494 master-0 kubenswrapper[4790]: I1011 10:48:04.633479 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-apiservice-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.637090 master-0 kubenswrapper[4790]: I1011 10:48:04.637054 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-apiservice-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.638336 master-0 kubenswrapper[4790]: I1011 10:48:04.638280 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d4bad0b-955f-4d0e-8849-8257c50682cb-webhook-cert\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.667797 master-0 kubenswrapper[4790]: I1011 10:48:04.667727 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v5df\" (UniqueName: \"kubernetes.io/projected/3d4bad0b-955f-4d0e-8849-8257c50682cb-kube-api-access-8v5df\") pod \"metallb-operator-webhook-server-84d69c968c-btbcm\" (UID: \"3d4bad0b-955f-4d0e-8849-8257c50682cb\") " pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.732802 master-0 kubenswrapper[4790]: I1011 10:48:04.732727 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:04.899505 master-0 kubenswrapper[4790]: I1011 10:48:04.899345 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" event={"ID":"01ae1cda-0c92-4f86-bff5-90e6cbb3881e","Type":"ContainerStarted","Data":"4867139461cbffe57276c5a4e6bd53f3c3fb5ab7002db203271fe065ba453dfb"}
Oct 11 10:48:05.122805 master-0 kubenswrapper[4790]: I1011 10:48:05.122224 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"]
Oct 11 10:48:05.126947 master-0 kubenswrapper[4790]: W1011 10:48:05.126877 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d4bad0b_955f_4d0e_8849_8257c50682cb.slice/crio-57bde74650573cb025cf5a842d2c32a6ebe4755fd5aa0ac986b69535ad98665c WatchSource:0}: Error finding container 57bde74650573cb025cf5a842d2c32a6ebe4755fd5aa0ac986b69535ad98665c: Status 404 returned error can't find the container with id 57bde74650573cb025cf5a842d2c32a6ebe4755fd5aa0ac986b69535ad98665c
Oct 11 10:48:05.912737 master-0 kubenswrapper[4790]: I1011 10:48:05.911919 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm" event={"ID":"3d4bad0b-955f-4d0e-8849-8257c50682cb","Type":"ContainerStarted","Data":"57bde74650573cb025cf5a842d2c32a6ebe4755fd5aa0ac986b69535ad98665c"}
Oct 11 10:48:06.211880 master-0 kubenswrapper[4790]: I1011 10:48:06.211737 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-d969966f-nb76r"
Oct 11 10:48:06.390016 master-0 kubenswrapper[4790]: I1011 10:48:06.389947 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Oct 11 10:48:06.392588 master-0 kubenswrapper[4790]: I1011 10:48:06.390848 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Oct 11 10:48:06.833737 master-0 kubenswrapper[4790]: I1011 10:48:06.833635 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:48:06.834069 master-0 kubenswrapper[4790]: I1011 10:48:06.833752 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:48:07.928032 master-0 kubenswrapper[4790]: I1011 10:48:07.927688 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" event={"ID":"01ae1cda-0c92-4f86-bff5-90e6cbb3881e","Type":"ContainerStarted","Data":"e99479718a5b7e2800963c9d04442b4c128d951c09212a5b57a9152a94e6b303"}
Oct 11 10:48:07.928032 master-0 kubenswrapper[4790]: I1011 10:48:07.927932 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq"
Oct 11 10:48:07.968883 master-0 kubenswrapper[4790]: I1011 10:48:07.968727 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" podStartSLOduration=1.802334223 podStartE2EDuration="4.968684992s" podCreationTimestamp="2025-10-11 10:48:03 +0000 UTC" firstStartedPulling="2025-10-11 10:48:04.50134625 +0000 UTC m=+561.055806542" lastFinishedPulling="2025-10-11 10:48:07.667697019 +0000 UTC m=+564.222157311" observedRunningTime="2025-10-11 10:48:07.964341425 +0000 UTC m=+564.518801717" watchObservedRunningTime="2025-10-11 10:48:07.968684992 +0000 UTC m=+564.523145284"
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: I1011 10:48:08.243271 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:48:08.243381 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:48:08.245320 master-1 kubenswrapper[4771]: I1011 10:48:08.243438 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:48:08.470641 master-0 kubenswrapper[4790]: I1011 10:48:08.470542 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"]
Oct 11 10:48:08.471303 master-0 kubenswrapper[4790]: I1011 10:48:08.471280 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"
Oct 11 10:48:08.473584 master-0 kubenswrapper[4790]: I1011 10:48:08.473514 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Oct 11 10:48:08.474005 master-0 kubenswrapper[4790]: I1011 10:48:08.473952 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Oct 11 10:48:08.485347 master-0 kubenswrapper[4790]: I1011 10:48:08.485278 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"]
Oct 11 10:48:08.505229 master-0 kubenswrapper[4790]: I1011 10:48:08.505175 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqqc4\" (UniqueName: \"kubernetes.io/projected/6b296384-0413-4a1d-825b-530b97e53c9a-kube-api-access-pqqc4\") pod \"nmstate-operator-858ddd8f98-pnhrj\" (UID: \"6b296384-0413-4a1d-825b-530b97e53c9a\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"
Oct 11 10:48:08.606567 master-0 kubenswrapper[4790]: I1011 10:48:08.606503 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqqc4\" (UniqueName: \"kubernetes.io/projected/6b296384-0413-4a1d-825b-530b97e53c9a-kube-api-access-pqqc4\") pod \"nmstate-operator-858ddd8f98-pnhrj\" (UID: \"6b296384-0413-4a1d-825b-530b97e53c9a\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"
Oct 11 10:48:08.638740 master-0 kubenswrapper[4790]: I1011 10:48:08.633898 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqqc4\" (UniqueName: \"kubernetes.io/projected/6b296384-0413-4a1d-825b-530b97e53c9a-kube-api-access-pqqc4\") pod \"nmstate-operator-858ddd8f98-pnhrj\" (UID: \"6b296384-0413-4a1d-825b-530b97e53c9a\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"
Oct 11 10:48:08.823420 master-0 kubenswrapper[4790]: I1011 10:48:08.823325 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"
Oct 11 10:48:09.907889 master-0 kubenswrapper[4790]: I1011 10:48:09.907674 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj"]
Oct 11 10:48:09.918026 master-0 kubenswrapper[4790]: W1011 10:48:09.917952 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b296384_0413_4a1d_825b_530b97e53c9a.slice/crio-8ddac305cd29bae1b3728337a406fac4652bf7d6296684a083d2e3d6f72d0c14 WatchSource:0}: Error finding container 8ddac305cd29bae1b3728337a406fac4652bf7d6296684a083d2e3d6f72d0c14: Status 404 returned error can't find the container with id 8ddac305cd29bae1b3728337a406fac4652bf7d6296684a083d2e3d6f72d0c14
Oct 11 10:48:09.940030 master-0 kubenswrapper[4790]: I1011 10:48:09.939924 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj" event={"ID":"6b296384-0413-4a1d-825b-530b97e53c9a","Type":"ContainerStarted","Data":"8ddac305cd29bae1b3728337a406fac4652bf7d6296684a083d2e3d6f72d0c14"}
Oct 11 10:48:09.942944 master-0 kubenswrapper[4790]: I1011 10:48:09.942285 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm" event={"ID":"3d4bad0b-955f-4d0e-8849-8257c50682cb","Type":"ContainerStarted","Data":"75a2cd5e768356f1bb494fc5d056e2b1d7d6ecf960512659b6f293f638834254"}
Oct 11 10:48:09.942944 master-0 kubenswrapper[4790]: I1011 10:48:09.942888 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm"
Oct 11 10:48:11.834310 master-0 kubenswrapper[4790]: I1011 10:48:11.833940 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:48:11.836133 master-0 kubenswrapper[4790]: I1011 10:48:11.834349 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:48:11.956673 master-0 kubenswrapper[4790]: I1011 10:48:11.956559 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj" event={"ID":"6b296384-0413-4a1d-825b-530b97e53c9a","Type":"ContainerStarted","Data":"371194d2a4de8f9948b4470f90b110478fc7afbffb9265643326ed10249c415c"}
Oct 11 10:48:11.985801 master-0 kubenswrapper[4790]: I1011 10:48:11.985696 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm" podStartSLOduration=3.597787519 podStartE2EDuration="7.985670597s" podCreationTimestamp="2025-10-11 10:48:04 +0000 UTC" firstStartedPulling="2025-10-11 10:48:05.130686195 +0000 UTC m=+561.685146487" lastFinishedPulling="2025-10-11 10:48:09.518569273 +0000 UTC m=+566.073029565" observedRunningTime="2025-10-11 10:48:09.975375366 +0000 UTC m=+566.529835678" watchObservedRunningTime="2025-10-11 10:48:11.985670597 +0000 UTC m=+568.540130889"
Oct 11 10:48:11.986103 master-0 kubenswrapper[4790]: I1011 10:48:11.985846 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj" podStartSLOduration=2.124419363 podStartE2EDuration="3.985842021s" podCreationTimestamp="2025-10-11 10:48:08 +0000 UTC" firstStartedPulling="2025-10-11 10:48:09.92147442 +0000 UTC m=+566.475934722" 
lastFinishedPulling="2025-10-11 10:48:11.782897078 +0000 UTC m=+568.337357380" observedRunningTime="2025-10-11 10:48:11.984482465 +0000 UTC m=+568.538942767" watchObservedRunningTime="2025-10-11 10:48:11.985842021 +0000 UTC m=+568.540302313"
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: I1011 10:48:13.246020 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:48:13.246122 master-1 kubenswrapper[4771]: I1011 10:48:13.246090 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:48:14.892700 master-0 kubenswrapper[4790]: I1011 10:48:14.892616 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"]
Oct 11 10:48:14.893566 master-0 kubenswrapper[4790]: I1011 10:48:14.893534 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"
Oct 11 10:48:14.896457 master-0 kubenswrapper[4790]: I1011 10:48:14.896406 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Oct 11 10:48:14.897062 master-0 kubenswrapper[4790]: I1011 10:48:14.897027 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Oct 11 10:48:14.904644 master-0 kubenswrapper[4790]: I1011 10:48:14.904581 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnvz\" (UniqueName: \"kubernetes.io/projected/fd5cd971-0d18-4313-9102-4b59431a75ab-kube-api-access-rmnvz\") pod \"obo-prometheus-operator-7c8cf85677-8bmlp\" (UID: \"fd5cd971-0d18-4313-9102-4b59431a75ab\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"
Oct 11 10:48:14.908401 master-0 kubenswrapper[4790]: I1011 10:48:14.908328 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"]
Oct 11 10:48:15.006853 master-0 kubenswrapper[4790]: I1011 10:48:15.006277 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmnvz\" (UniqueName: \"kubernetes.io/projected/fd5cd971-0d18-4313-9102-4b59431a75ab-kube-api-access-rmnvz\") pod \"obo-prometheus-operator-7c8cf85677-8bmlp\" (UID: \"fd5cd971-0d18-4313-9102-4b59431a75ab\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"
Oct 11 10:48:15.016568 master-0 kubenswrapper[4790]: I1011 10:48:15.016464 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9nqxf"]
Oct 11 10:48:15.017461 master-0 kubenswrapper[4790]: I1011 10:48:15.017429 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf"
Oct 11 10:48:15.037812 master-0 kubenswrapper[4790]: I1011 10:48:15.032004 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9nqxf"]
Oct 11 10:48:15.043748 master-0 kubenswrapper[4790]: I1011 10:48:15.043675 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmnvz\" (UniqueName: \"kubernetes.io/projected/fd5cd971-0d18-4313-9102-4b59431a75ab-kube-api-access-rmnvz\") pod \"obo-prometheus-operator-7c8cf85677-8bmlp\" (UID: \"fd5cd971-0d18-4313-9102-4b59431a75ab\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"
Oct 11 10:48:15.052696 master-2 kubenswrapper[4776]: I1011 10:48:15.052548 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"]
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052796 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052810 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052822 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052828 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052841 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052847 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052855 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052860 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052872 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052879 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052888 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052893 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052903 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052910 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052916 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052922 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="util"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052930 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052935 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052949 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052955 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="pull"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053047 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053058 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053069 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract"
Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053076 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller"
Oct 11 10:48:15.054087 master-2 kubenswrapper[4776]: I1011 10:48:15.053486 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"
Oct 11 10:48:15.057752 master-2 kubenswrapper[4776]: I1011 10:48:15.057691 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Oct 11 10:48:15.058755 master-0 kubenswrapper[4790]: I1011 10:48:15.055901 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"]
Oct 11 10:48:15.058755 master-0 kubenswrapper[4790]: I1011 10:48:15.057907 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"
Oct 11 10:48:15.062756 master-0 kubenswrapper[4790]: I1011 10:48:15.060861 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Oct 11 10:48:15.062756 master-0 kubenswrapper[4790]: I1011 10:48:15.062602 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"]
Oct 11 10:48:15.070704 master-2 kubenswrapper[4776]: I1011 10:48:15.070648 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"]
Oct 11 10:48:15.107196 master-0 kubenswrapper[4790]: I1011 10:48:15.107160 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf"
Oct 11 10:48:15.107334 master-0 kubenswrapper[4790]: I1011 10:48:15.107210 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"
Oct 11 10:48:15.107334 master-0 kubenswrapper[4790]: I1011 10:48:15.107263 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"
Oct 11 10:48:15.107334 master-0 kubenswrapper[4790]: I1011 10:48:15.107307 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2c8f\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-kube-api-access-q2c8f\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf"
Oct 11 10:48:15.109406 master-2 kubenswrapper[4776]: I1011 10:48:15.109321 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"
Oct 11 10:48:15.109625 master-2 kubenswrapper[4776]: I1011 10:48:15.109466 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.209126 master-0 kubenswrapper[4790]: I1011 10:48:15.208946 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" Oct 11 10:48:15.209126 master-0 kubenswrapper[4790]: I1011 10:48:15.209040 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" Oct 11 10:48:15.209126 master-0 kubenswrapper[4790]: I1011 10:48:15.209078 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" Oct 11 10:48:15.209126 master-0 kubenswrapper[4790]: I1011 10:48:15.209106 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2c8f\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-kube-api-access-q2c8f\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " 
pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" Oct 11 10:48:15.211464 master-2 kubenswrapper[4776]: I1011 10:48:15.211373 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.211464 master-2 kubenswrapper[4776]: I1011 10:48:15.211473 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.211765 master-0 kubenswrapper[4790]: I1011 10:48:15.211672 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp" Oct 11 10:48:15.214016 master-0 kubenswrapper[4790]: I1011 10:48:15.213952 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" Oct 11 10:48:15.214131 master-0 kubenswrapper[4790]: I1011 10:48:15.214027 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01cf17dd-ef3f-47aa-8779-a099fc6d45a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92\" (UID: \"01cf17dd-ef3f-47aa-8779-a099fc6d45a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" Oct 11 10:48:15.214981 master-2 kubenswrapper[4776]: I1011 10:48:15.214945 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.216379 master-2 kubenswrapper[4776]: I1011 10:48:15.216343 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.219984 master-0 kubenswrapper[4790]: I1011 
10:48:15.219809 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-4pfh4"] Oct 11 10:48:15.220575 master-0 kubenswrapper[4790]: I1011 10:48:15.220536 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.223772 master-0 kubenswrapper[4790]: I1011 10:48:15.223698 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Oct 11 10:48:15.237167 master-0 kubenswrapper[4790]: I1011 10:48:15.237105 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" Oct 11 10:48:15.238830 master-0 kubenswrapper[4790]: I1011 10:48:15.238783 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-4pfh4"] Oct 11 10:48:15.241637 master-0 kubenswrapper[4790]: I1011 10:48:15.240812 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2c8f\" (UniqueName: \"kubernetes.io/projected/0a65018a-6409-43ce-abe4-498a3ea576d4-kube-api-access-q2c8f\") pod \"cert-manager-7d4cc89fcb-9nqxf\" (UID: \"0a65018a-6409-43ce-abe4-498a3ea576d4\") " pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" Oct 11 10:48:15.321200 master-0 kubenswrapper[4790]: I1011 10:48:15.311054 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " 
pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.321200 master-0 kubenswrapper[4790]: I1011 10:48:15.311212 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rts\" (UniqueName: \"kubernetes.io/projected/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-kube-api-access-t6rts\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.370286 master-2 kubenswrapper[4776]: I1011 10:48:15.370132 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.387926 master-0 kubenswrapper[4790]: I1011 10:48:15.387844 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" Oct 11 10:48:15.397008 master-0 kubenswrapper[4790]: I1011 10:48:15.396935 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" Oct 11 10:48:15.412910 master-0 kubenswrapper[4790]: I1011 10:48:15.412850 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rts\" (UniqueName: \"kubernetes.io/projected/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-kube-api-access-t6rts\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.413020 master-0 kubenswrapper[4790]: I1011 10:48:15.412947 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.423760 master-0 kubenswrapper[4790]: I1011 10:48:15.423652 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.443664 master-0 kubenswrapper[4790]: I1011 10:48:15.443593 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-l5f8k"] Oct 11 10:48:15.444881 master-0 kubenswrapper[4790]: I1011 10:48:15.444829 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.462500 master-0 kubenswrapper[4790]: I1011 10:48:15.461305 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rts\" (UniqueName: \"kubernetes.io/projected/5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba-kube-api-access-t6rts\") pod \"observability-operator-cc5f78dfc-4pfh4\" (UID: \"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba\") " pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.463735 master-0 kubenswrapper[4790]: I1011 10:48:15.463515 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-l5f8k"] Oct 11 10:48:15.514929 master-0 kubenswrapper[4790]: I1011 10:48:15.514871 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f45c80-0e01-450b-9b74-00b327f44495-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.515202 master-0 kubenswrapper[4790]: I1011 10:48:15.514950 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcqqq\" (UniqueName: \"kubernetes.io/projected/d3f45c80-0e01-450b-9b74-00b327f44495-kube-api-access-qcqqq\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.596853 master-0 kubenswrapper[4790]: I1011 10:48:15.579613 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:15.621744 master-0 kubenswrapper[4790]: I1011 10:48:15.620142 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcqqq\" (UniqueName: \"kubernetes.io/projected/d3f45c80-0e01-450b-9b74-00b327f44495-kube-api-access-qcqqq\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.621744 master-0 kubenswrapper[4790]: I1011 10:48:15.620260 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f45c80-0e01-450b-9b74-00b327f44495-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.621744 master-0 kubenswrapper[4790]: I1011 10:48:15.621373 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3f45c80-0e01-450b-9b74-00b327f44495-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.661036 master-0 kubenswrapper[4790]: I1011 10:48:15.660402 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcqqq\" (UniqueName: \"kubernetes.io/projected/d3f45c80-0e01-450b-9b74-00b327f44495-kube-api-access-qcqqq\") pod \"perses-operator-54bc95c9fb-l5f8k\" (UID: \"d3f45c80-0e01-450b-9b74-00b327f44495\") " pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:15.682966 master-0 kubenswrapper[4790]: I1011 10:48:15.682917 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp"] 
Oct 11 10:48:15.691175 master-0 kubenswrapper[4790]: W1011 10:48:15.691096 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5cd971_0d18_4313_9102_4b59431a75ab.slice/crio-44787949514ee5f01c4f1136a51355f7d15ab47cb0068ff543214028080864e7 WatchSource:0}: Error finding container 44787949514ee5f01c4f1136a51355f7d15ab47cb0068ff543214028080864e7: Status 404 returned error can't find the container with id 44787949514ee5f01c4f1136a51355f7d15ab47cb0068ff543214028080864e7
Oct 11 10:48:15.771250 master-0 kubenswrapper[4790]: I1011 10:48:15.771169 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k"
Oct 11 10:48:15.801907 master-2 kubenswrapper[4776]: I1011 10:48:15.801773 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"]
Oct 11 10:48:15.812985 master-2 kubenswrapper[4776]: W1011 10:48:15.812920 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d366bc3_091d_4bff_a8fd_c70fb91c1db6.slice/crio-db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e WatchSource:0}: Error finding container db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e: Status 404 returned error can't find the container with id db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e
Oct 11 10:48:15.914439 master-0 kubenswrapper[4790]: I1011 10:48:15.914379 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92"]
Oct 11 10:48:15.932500 master-0 kubenswrapper[4790]: W1011 10:48:15.932434 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01cf17dd_ef3f_47aa_8779_a099fc6d45a1.slice/crio-ad4e8b63f47d74158b4a2a4ab9dff92c64f3eab6c4e6abe1f03ebe05260c0249 WatchSource:0}: Error finding container ad4e8b63f47d74158b4a2a4ab9dff92c64f3eab6c4e6abe1f03ebe05260c0249: Status 404 returned error can't find the container with id ad4e8b63f47d74158b4a2a4ab9dff92c64f3eab6c4e6abe1f03ebe05260c0249
Oct 11 10:48:15.969682 master-0 kubenswrapper[4790]: I1011 10:48:15.969628 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9nqxf"]
Oct 11 10:48:15.979257 master-0 kubenswrapper[4790]: W1011 10:48:15.979131 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a65018a_6409_43ce_abe4_498a3ea576d4.slice/crio-4fa17b0853985692154fb7e022d85361b89c0e601389d84543a7c88a44717d99 WatchSource:0}: Error finding container 4fa17b0853985692154fb7e022d85361b89c0e601389d84543a7c88a44717d99: Status 404 returned error can't find the container with id 4fa17b0853985692154fb7e022d85361b89c0e601389d84543a7c88a44717d99
Oct 11 10:48:15.993350 master-0 kubenswrapper[4790]: I1011 10:48:15.993287 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" event={"ID":"0a65018a-6409-43ce-abe4-498a3ea576d4","Type":"ContainerStarted","Data":"4fa17b0853985692154fb7e022d85361b89c0e601389d84543a7c88a44717d99"}
Oct 11 10:48:15.994291 master-0 kubenswrapper[4790]: I1011 10:48:15.994263 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" event={"ID":"01cf17dd-ef3f-47aa-8779-a099fc6d45a1","Type":"ContainerStarted","Data":"ad4e8b63f47d74158b4a2a4ab9dff92c64f3eab6c4e6abe1f03ebe05260c0249"}
Oct 11 10:48:15.996535 master-0 kubenswrapper[4790]: I1011 10:48:15.996505 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp" event={"ID":"fd5cd971-0d18-4313-9102-4b59431a75ab","Type":"ContainerStarted","Data":"44787949514ee5f01c4f1136a51355f7d15ab47cb0068ff543214028080864e7"}
Oct 11 10:48:16.045679 master-0 kubenswrapper[4790]: I1011 10:48:16.045626 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-4pfh4"]
Oct 11 10:48:16.063865 master-0 kubenswrapper[4790]: W1011 10:48:16.063815 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c9eb95a_71cf_4cf8_b4c3_4ed5f3ca1fba.slice/crio-c19be77ab7510ac82d99e8c75ec458c8bd6b168ae4068f3c735d102ca03221d5 WatchSource:0}: Error finding container c19be77ab7510ac82d99e8c75ec458c8bd6b168ae4068f3c735d102ca03221d5: Status 404 returned error can't find the container with id c19be77ab7510ac82d99e8c75ec458c8bd6b168ae4068f3c735d102ca03221d5
Oct 11 10:48:16.197211 master-0 kubenswrapper[4790]: I1011 10:48:16.197164 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-l5f8k"]
Oct 11 10:48:16.744078 master-2 kubenswrapper[4776]: I1011 10:48:16.743969 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" event={"ID":"0d366bc3-091d-4bff-a8fd-c70fb91c1db6","Type":"ContainerStarted","Data":"db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e"}
Oct 11 10:48:16.835211 master-0 kubenswrapper[4790]: I1011 10:48:16.835096 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": context deadline exceeded" start-of-body=
Oct 11 10:48:16.835562 master-0 kubenswrapper[4790]: I1011 10:48:16.835216 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": context deadline exceeded"
Oct 11 10:48:17.004919 master-0 kubenswrapper[4790]: I1011 10:48:17.004847 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" event={"ID":"0a65018a-6409-43ce-abe4-498a3ea576d4","Type":"ContainerStarted","Data":"30fd3faaf46dd915bca4f2363c1729938ca49265cabbdd70259cfbd58b1e4c40"}
Oct 11 10:48:17.006427 master-0 kubenswrapper[4790]: I1011 10:48:17.006372 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" event={"ID":"d3f45c80-0e01-450b-9b74-00b327f44495","Type":"ContainerStarted","Data":"871b9ad1e0e74662f9d2a8c4d16a588d83f5502ab2e4686816d5e6c5d4c33dcd"}
Oct 11 10:48:17.007911 master-0 kubenswrapper[4790]: I1011 10:48:17.007842 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" event={"ID":"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba","Type":"ContainerStarted","Data":"c19be77ab7510ac82d99e8c75ec458c8bd6b168ae4068f3c735d102ca03221d5"}
Oct 11 10:48:17.032314 master-0 kubenswrapper[4790]: I1011 10:48:17.032224 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-7d4cc89fcb-9nqxf" podStartSLOduration=3.032199221 podStartE2EDuration="3.032199221s" podCreationTimestamp="2025-10-11 10:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:48:17.029014175 +0000 UTC m=+573.583474477" watchObservedRunningTime="2025-10-11 10:48:17.032199221 +0000 UTC m=+573.586659523"
Oct 11 10:48:17.382011 master-0 kubenswrapper[4790]: I1011 10:48:17.381931 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:48:17.382247 master-0 kubenswrapper[4790]: I1011 10:48:17.382030 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="14286286be88b59efc7cfc15eca1cc38" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: I1011 10:48:18.243401 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]log ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]informer-sync ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]autoregister-completion ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld
Oct 11 10:48:18.243488 master-1 kubenswrapper[4771]: readyz check failed
Oct 11 10:48:18.246229 master-1 kubenswrapper[4771]: I1011 10:48:18.243509 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 10:48:20.040972 master-0 kubenswrapper[4790]: I1011 10:48:20.040768 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" event={"ID":"01cf17dd-ef3f-47aa-8779-a099fc6d45a1","Type":"ContainerStarted","Data":"640f070bcaba1a206fe6799d3c0b2ae9f35cb5252496f3b9d8cd5711b4ab8424"}
Oct 11 10:48:20.042277 master-0 kubenswrapper[4790]: I1011 10:48:20.042213 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" event={"ID":"d3f45c80-0e01-450b-9b74-00b327f44495","Type":"ContainerStarted","Data":"224c13404fda1e0a268044002e71d817076002a03cf1071f77ab909d22278ddf"}
Oct 11 10:48:20.042387 master-0 kubenswrapper[4790]: I1011 10:48:20.042368 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k"
Oct 11 10:48:20.043765 master-0 kubenswrapper[4790]: I1011 10:48:20.043692 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp" event={"ID":"fd5cd971-0d18-4313-9102-4b59431a75ab","Type":"ContainerStarted","Data":"935d2b311bfb697ce85dffbccfe171c48240c15188be19c47235cbdd1267003a"}
Oct 11 10:48:20.076777 master-0 kubenswrapper[4790]: I1011 10:48:20.076699 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92" podStartSLOduration=1.539420115 podStartE2EDuration="5.076679207s" podCreationTimestamp="2025-10-11 10:48:15 +0000 UTC" firstStartedPulling="2025-10-11 10:48:15.935112636 +0000 UTC m=+572.489572938" lastFinishedPulling="2025-10-11 10:48:19.472371738 +0000 UTC m=+576.026832030" observedRunningTime="2025-10-11 10:48:20.074394305 +0000 UTC m=+576.628854617" watchObservedRunningTime="2025-10-11 10:48:20.076679207 +0000 UTC m=+576.631139499"
Oct 11 10:48:20.102904 master-0 kubenswrapper[4790]: I1011 10:48:20.102822 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" podStartSLOduration=1.830063389 podStartE2EDuration="5.102804243s" podCreationTimestamp="2025-10-11 10:48:15 +0000 UTC" firstStartedPulling="2025-10-11 10:48:16.202410909 +0000 UTC m=+572.756871201" lastFinishedPulling="2025-10-11 10:48:19.475151773 +0000 UTC m=+576.029612055" observedRunningTime="2025-10-11 10:48:20.102061933 +0000 UTC m=+576.656522235" watchObservedRunningTime="2025-10-11 10:48:20.102804243 +0000 UTC m=+576.657264535"
Oct 11 10:48:21.838815 master-0 kubenswrapper[4790]: I1011 10:48:21.836001 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 10:48:21.838815 master-0 kubenswrapper[4790]: I1011 10:48:21.836083 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:48:22.057577 master-0 kubenswrapper[4790]: I1011 10:48:22.057502 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" event={"ID":"5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba","Type":"ContainerStarted","Data":"f5da2d26b334a7a67d54c3b734cf64aecff03f3b8841b245bba51d74136ddab1"}
Oct 11 10:48:22.058033 master-0 kubenswrapper[4790]: I1011 10:48:22.057992 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4"
Oct 11 10:48:22.060201 master-0 kubenswrapper[4790]: I1011 10:48:22.060163 4790 patch_prober.go:28] interesting pod/observability-operator-cc5f78dfc-4pfh4 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.130.0.42:8081/healthz\": dial tcp 10.130.0.42:8081: connect: connection refused" start-of-body=
Oct 11 10:48:22.060277 master-0 kubenswrapper[4790]: I1011 10:48:22.060226 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" podUID="5c9eb95a-71cf-4cf8-b4c3-4ed5f3ca1fba" containerName="operator" probeResult="failure" output="Get \"http://10.130.0.42:8081/healthz\": dial tcp 10.130.0.42:8081: connect: connection refused"
Oct 11 10:48:22.098118 master-0 kubenswrapper[4790]: I1011 10:48:22.098042 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp" podStartSLOduration=4.3068726139999995 podStartE2EDuration="8.098019846s" podCreationTimestamp="2025-10-11 10:48:14 +0000 UTC" firstStartedPulling="2025-10-11 10:48:15.693749694 +0000 UTC m=+572.248209976" lastFinishedPulling="2025-10-11 10:48:19.484896916 +0000 UTC m=+576.039357208" observedRunningTime="2025-10-11 10:48:20.142034133 +0000 UTC m=+576.696494425" watchObservedRunningTime="2025-10-11 10:48:22.098019846 +0000 UTC m=+578.652480138"
Oct 11 10:48:22.100882 master-0 kubenswrapper[4790]: I1011 10:48:22.100823 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" podStartSLOduration=1.412032932 podStartE2EDuration="7.100809761s" podCreationTimestamp="2025-10-11 10:48:15 +0000 UTC" firstStartedPulling="2025-10-11 10:48:16.067443071 +0000 UTC m=+572.621903363" lastFinishedPulling="2025-10-11 10:48:21.7562199 +0000 UTC m=+578.310680192" observedRunningTime="2025-10-11 10:48:22.09631243 +0000 UTC m=+578.650772732" watchObservedRunningTime="2025-10-11 10:48:22.100809761 +0000 UTC m=+578.655270053"
Oct 11 10:48:22.786365 master-2 kubenswrapper[4776]: I1011 10:48:22.786297 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" event={"ID":"0d366bc3-091d-4bff-a8fd-c70fb91c1db6","Type":"ContainerStarted","Data":"9e661aa200da88e024b9003c269a859af807771db7a6539e340775bd3699fe74"}
Oct 11 10:48:22.842520 master-2 kubenswrapper[4776]: I1011 10:48:22.842450 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" podStartSLOduration=1.686950357 podStartE2EDuration="7.842435498s" podCreationTimestamp="2025-10-11 10:48:15 +0000 UTC" firstStartedPulling="2025-10-11 10:48:15.816079803 +0000 UTC m=+1330.600506512" lastFinishedPulling="2025-10-11 10:48:21.971564944 +0000 UTC m=+1336.755991653" observedRunningTime="2025-10-11 10:48:22.841417041 +0000 UTC m=+1337.625843760" watchObservedRunningTime="2025-10-11 10:48:22.842435498 +0000 UTC m=+1337.626862207" Oct 11 10:48:23.121817 master-0 kubenswrapper[4790]: I1011 10:48:23.121768 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-4pfh4" Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: I1011 10:48:23.245094 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:23.245729 master-1 kubenswrapper[4771]: I1011 10:48:23.245194 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:24.740429 master-0 kubenswrapper[4790]: I1011 10:48:24.740350 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm" Oct 11 10:48:25.775819 master-0 kubenswrapper[4790]: I1011 10:48:25.775764 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-54bc95c9fb-l5f8k" Oct 11 10:48:26.836876 master-0 kubenswrapper[4790]: I1011 10:48:26.836603 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": context deadline exceeded" start-of-body= Oct 11 10:48:26.837464 master-0 kubenswrapper[4790]: I1011 10:48:26.836900 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": context deadline exceeded" Oct 11 10:48:27.381473 master-0 kubenswrapper[4790]: I1011 10:48:27.381341 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:27.381961 master-0 kubenswrapper[4790]: I1011 10:48:27.381492 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="14286286be88b59efc7cfc15eca1cc38" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: I1011 10:48:28.243264 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 
10:48:28.243345 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:28.243345 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:28.245056 master-1 kubenswrapper[4771]: I1011 10:48:28.243348 4771 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:31.838061 master-0 kubenswrapper[4790]: I1011 10:48:31.837971 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:31.838966 master-0 kubenswrapper[4790]: I1011 10:48:31.838081 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: I1011 10:48:33.242270 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:33.242482 master-1 
kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: 
[+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:33.242482 master-1 kubenswrapper[4771]: I1011 10:48:33.242406 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:36.839051 master-0 kubenswrapper[4790]: I1011 10:48:36.838923 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness 
probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:36.839051 master-0 kubenswrapper[4790]: I1011 10:48:36.839049 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:37.382308 master-0 kubenswrapper[4790]: I1011 10:48:37.382169 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:37.382308 master-0 kubenswrapper[4790]: I1011 10:48:37.382292 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="14286286be88b59efc7cfc15eca1cc38" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: I1011 10:48:38.243261 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: 
[+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: 
[+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:38.243387 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:38.247474 master-1 kubenswrapper[4771]: I1011 10:48:38.243347 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" 
podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:41.839416 master-0 kubenswrapper[4790]: I1011 10:48:41.839308 4790 patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:41.840461 master-0 kubenswrapper[4790]: I1011 10:48:41.839448 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: I1011 10:48:43.245916 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:43.246011 master-1 
kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: 
[+]poststarthook/bootstrap-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:43.246011 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:43.248172 master-1 kubenswrapper[4771]: I1011 10:48:43.246017 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:44.044607 master-0 kubenswrapper[4790]: I1011 10:48:44.044488 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq" Oct 11 10:48:46.840077 master-0 kubenswrapper[4790]: I1011 10:48:46.839927 4790 
patch_prober.go:28] interesting pod/etcd-guard-master-0 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:46.840077 master-0 kubenswrapper[4790]: I1011 10:48:46.840047 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-0" podUID="c6436766-e7b0-471b-acbf-861280191521" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:47.381673 master-0 kubenswrapper[4790]: I1011 10:48:47.381553 4790 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:48:47.381673 master-0 kubenswrapper[4790]: I1011 10:48:47.381661 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="14286286be88b59efc7cfc15eca1cc38" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:48:47.922601 master-0 kubenswrapper[4790]: I1011 10:48:47.922529 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-0" Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: I1011 10:48:48.245162 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: 
[+]api-openshift-apiserver-available ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:48.245218 master-1 
kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 
10:48:48.245218 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:48.245218 master-1 kubenswrapper[4771]: I1011 10:48:48.245233 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: I1011 10:48:53.245399 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]log ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]api-openshift-apiserver-available ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]informer-sync ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: 
[+]poststarthook/priority-and-fairness-filter ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/crd-informer-synced ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/bootstrap-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]autoregister-completion ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: [-]shutdown failed: reason withheld Oct 11 10:48:53.245518 master-1 kubenswrapper[4771]: readyz check failed Oct 11 10:48:53.248791 master-1 kubenswrapper[4771]: I1011 10:48:53.245515 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:48:53.291754 master-0 kubenswrapper[4790]: I1011 10:48:53.291646 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w"] Oct 11 10:48:53.293180 master-0 kubenswrapper[4790]: I1011 10:48:53.293137 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.297635 master-0 kubenswrapper[4790]: I1011 10:48:53.297582 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 11 10:48:53.394020 master-0 kubenswrapper[4790]: I1011 10:48:53.393909 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7359c204-2acb-4c3b-b05f-2a124f3862fb-cert\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.394466 master-0 kubenswrapper[4790]: I1011 10:48:53.394141 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rtf\" (UniqueName: \"kubernetes.io/projected/7359c204-2acb-4c3b-b05f-2a124f3862fb-kube-api-access-n6rtf\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.495981 master-0 kubenswrapper[4790]: I1011 10:48:53.495896 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rtf\" (UniqueName: \"kubernetes.io/projected/7359c204-2acb-4c3b-b05f-2a124f3862fb-kube-api-access-n6rtf\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.496286 master-0 kubenswrapper[4790]: I1011 10:48:53.496030 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7359c204-2acb-4c3b-b05f-2a124f3862fb-cert\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" 
Oct 11 10:48:53.502121 master-0 kubenswrapper[4790]: I1011 10:48:53.502055 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7359c204-2acb-4c3b-b05f-2a124f3862fb-cert\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.901288 master-0 kubenswrapper[4790]: I1011 10:48:53.901096 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w"] Oct 11 10:48:53.923627 master-0 kubenswrapper[4790]: I1011 10:48:53.923564 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rtf\" (UniqueName: \"kubernetes.io/projected/7359c204-2acb-4c3b-b05f-2a124f3862fb-kube-api-access-n6rtf\") pod \"frr-k8s-webhook-server-64bf5d555-54x4w\" (UID: \"7359c204-2acb-4c3b-b05f-2a124f3862fb\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:48:53.981698 master-2 kubenswrapper[4776]: I1011 10:48:53.981620 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hwrzt"] Oct 11 10:48:53.984040 master-2 kubenswrapper[4776]: I1011 10:48:53.984000 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:53.988785 master-2 kubenswrapper[4776]: I1011 10:48:53.988649 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 11 10:48:53.988785 master-2 kubenswrapper[4776]: I1011 10:48:53.988694 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 11 10:48:53.989093 master-2 kubenswrapper[4776]: I1011 10:48:53.988982 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 11 10:48:53.989283 master-2 kubenswrapper[4776]: I1011 10:48:53.989192 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 11 10:48:54.051814 master-0 kubenswrapper[4790]: I1011 10:48:54.051755 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5xkrb"] Oct 11 10:48:54.054504 master-0 kubenswrapper[4790]: I1011 10:48:54.054479 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.058694 master-0 kubenswrapper[4790]: I1011 10:48:54.058668 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 11 10:48:54.058909 master-0 kubenswrapper[4790]: I1011 10:48:54.058701 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 11 10:48:54.069536 master-2 kubenswrapper[4776]: I1011 10:48:54.069504 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.069795 master-2 kubenswrapper[4776]: I1011 10:48:54.069778 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.069898 master-2 kubenswrapper[4776]: I1011 10:48:54.069884 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070000 master-2 kubenswrapper[4776]: I1011 10:48:54.069985 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 
10:48:54.070091 master-2 kubenswrapper[4776]: I1011 10:48:54.070079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070190 master-2 kubenswrapper[4776]: I1011 10:48:54.070177 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070273 master-2 kubenswrapper[4776]: I1011 10:48:54.070260 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.107150 master-0 kubenswrapper[4790]: I1011 10:48:54.107078 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics-certs\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107409 master-0 kubenswrapper[4790]: I1011 10:48:54.107156 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107409 master-0 kubenswrapper[4790]: I1011 10:48:54.107204 
4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-startup\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107409 master-0 kubenswrapper[4790]: I1011 10:48:54.107318 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcq4\" (UniqueName: \"kubernetes.io/projected/bd096860-a678-4b71-a23d-70ecd6b79a0d-kube-api-access-6mcq4\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107409 master-0 kubenswrapper[4790]: I1011 10:48:54.107362 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-conf\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107589 master-0 kubenswrapper[4790]: I1011 10:48:54.107471 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-sockets\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.107589 master-0 kubenswrapper[4790]: I1011 10:48:54.107508 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-reloader\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.124383 master-1 kubenswrapper[4771]: I1011 10:48:54.124247 4771 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["metallb-system/frr-k8s-lvzhx"] Oct 11 10:48:54.124951 master-1 kubenswrapper[4771]: E1011 10:48:54.124614 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f73eff-98a5-47a6-b15c-2338930444b9" containerName="installer" Oct 11 10:48:54.124951 master-1 kubenswrapper[4771]: I1011 10:48:54.124636 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f73eff-98a5-47a6-b15c-2338930444b9" containerName="installer" Oct 11 10:48:54.124951 master-1 kubenswrapper[4771]: I1011 10:48:54.124775 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f73eff-98a5-47a6-b15c-2338930444b9" containerName="installer" Oct 11 10:48:54.127261 master-1 kubenswrapper[4771]: I1011 10:48:54.127208 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lvzhx" Oct 11 10:48:54.130200 master-1 kubenswrapper[4771]: I1011 10:48:54.130141 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 11 10:48:54.131655 master-1 kubenswrapper[4771]: I1011 10:48:54.131384 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 11 10:48:54.131655 master-1 kubenswrapper[4771]: I1011 10:48:54.131490 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 11 10:48:54.132675 master-1 kubenswrapper[4771]: I1011 10:48:54.131828 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 11 10:48:54.171927 master-2 kubenswrapper[4776]: I1011 10:48:54.171858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172138 master-2 
kubenswrapper[4776]: I1011 10:48:54.172105 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172191 master-2 kubenswrapper[4776]: I1011 10:48:54.172169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172240 master-2 kubenswrapper[4776]: I1011 10:48:54.172205 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172290 master-2 kubenswrapper[4776]: I1011 10:48:54.172241 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172341 master-2 kubenswrapper[4776]: I1011 10:48:54.172287 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172341 master-2 kubenswrapper[4776]: I1011 10:48:54.172318 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172844 master-2 kubenswrapper[4776]: I1011 10:48:54.172787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172957 master-2 kubenswrapper[4776]: I1011 10:48:54.172920 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173146 master-2 kubenswrapper[4776]: I1011 10:48:54.173099 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173146 master-2 kubenswrapper[4776]: I1011 10:48:54.173128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173631 master-2 kubenswrapper[4776]: I1011 10:48:54.173586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 
10:48:54.175517 master-2 kubenswrapper[4776]: I1011 10:48:54.175483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.209444 master-0 kubenswrapper[4790]: I1011 10:48:54.209254 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-sockets\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.209776 master-0 kubenswrapper[4790]: I1011 10:48:54.209758 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-reloader\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.209876 master-0 kubenswrapper[4790]: I1011 10:48:54.209864 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics-certs\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.209967 master-0 kubenswrapper[4790]: I1011 10:48:54.209953 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:48:54.210042 master-0 kubenswrapper[4790]: I1011 10:48:54.210030 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-startup\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.210158 master-0 kubenswrapper[4790]: I1011 10:48:54.210146 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-conf\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.210267 master-0 kubenswrapper[4790]: I1011 10:48:54.210249 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcq4\" (UniqueName: \"kubernetes.io/projected/bd096860-a678-4b71-a23d-70ecd6b79a0d-kube-api-access-6mcq4\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.211186 master-0 kubenswrapper[4790]: I1011 10:48:54.210166 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-sockets\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.211186 master-0 kubenswrapper[4790]: I1011 10:48:54.210570 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-reloader\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.211186 master-0 kubenswrapper[4790]: I1011 10:48:54.210652 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-conf\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.211186 master-0 kubenswrapper[4790]: I1011 10:48:54.210872 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.211390 master-0 kubenswrapper[4790]: I1011 10:48:54.211348 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bd096860-a678-4b71-a23d-70ecd6b79a0d-frr-startup\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.212898 master-0 kubenswrapper[4790]: I1011 10:48:54.212852 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w"
Oct 11 10:48:54.214882 master-0 kubenswrapper[4790]: I1011 10:48:54.214858 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd096860-a678-4b71-a23d-70ecd6b79a0d-metrics-certs\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.270516 master-2 kubenswrapper[4776]: I1011 10:48:54.270374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt"
Oct 11 10:48:54.301545 master-2 kubenswrapper[4776]: I1011 10:48:54.301489 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hwrzt"
Oct 11 10:48:54.313177 master-1 kubenswrapper[4771]: I1011 10:48:54.313096 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-startup\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313177 master-1 kubenswrapper[4771]: I1011 10:48:54.313160 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-reloader\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313177 master-1 kubenswrapper[4771]: I1011 10:48:54.313190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x6tk\" (UniqueName: \"kubernetes.io/projected/f1e63653-b356-4bf6-b91a-6d386b1f3c33-kube-api-access-8x6tk\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313871 master-1 kubenswrapper[4771]: I1011 10:48:54.313231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-sockets\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313871 master-1 kubenswrapper[4771]: I1011 10:48:54.313258 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics-certs\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313871 master-1 kubenswrapper[4771]: I1011 10:48:54.313282 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-conf\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.313871 master-1 kubenswrapper[4771]: I1011 10:48:54.313488 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415696 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-startup\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-reloader\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415825 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x6tk\" (UniqueName: \"kubernetes.io/projected/f1e63653-b356-4bf6-b91a-6d386b1f3c33-kube-api-access-8x6tk\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415859 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-sockets\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics-certs\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.415887 master-1 kubenswrapper[4771]: I1011 10:48:54.415910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.416382 master-1 kubenswrapper[4771]: I1011 10:48:54.415932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-conf\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.416674 master-1 kubenswrapper[4771]: I1011 10:48:54.416608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-sockets\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.416755 master-1 kubenswrapper[4771]: I1011 10:48:54.416717 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-reloader\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.416936 master-1 kubenswrapper[4771]: I1011 10:48:54.416894 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-conf\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.417207 master-1 kubenswrapper[4771]: I1011 10:48:54.417173 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f1e63653-b356-4bf6-b91a-6d386b1f3c33-frr-startup\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.417261 master-1 kubenswrapper[4771]: I1011 10:48:54.417159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.422162 master-1 kubenswrapper[4771]: I1011 10:48:54.422082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1e63653-b356-4bf6-b91a-6d386b1f3c33-metrics-certs\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.513418 master-0 kubenswrapper[4790]: I1011 10:48:54.513259 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcq4\" (UniqueName: \"kubernetes.io/projected/bd096860-a678-4b71-a23d-70ecd6b79a0d-kube-api-access-6mcq4\") pod \"frr-k8s-5xkrb\" (UID: \"bd096860-a678-4b71-a23d-70ecd6b79a0d\") " pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.519452 master-1 kubenswrapper[4771]: I1011 10:48:54.518853 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x6tk\" (UniqueName: \"kubernetes.io/projected/f1e63653-b356-4bf6-b91a-6d386b1f3c33-kube-api-access-8x6tk\") pod \"frr-k8s-lvzhx\" (UID: \"f1e63653-b356-4bf6-b91a-6d386b1f3c33\") " pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.672172 master-0 kubenswrapper[4790]: I1011 10:48:54.672099 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5xkrb"
Oct 11 10:48:54.757663 master-1 kubenswrapper[4771]: I1011 10:48:54.757344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lvzhx"
Oct 11 10:48:54.782794 master-0 kubenswrapper[4790]: I1011 10:48:54.782476 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w"]
Oct 11 10:48:54.788541 master-0 kubenswrapper[4790]: W1011 10:48:54.788484 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7359c204_2acb_4c3b_b05f_2a124f3862fb.slice/crio-3eb4384d713a0d19539c60cd01fcdb0f30ce181686e9cc4b369634045bcc2e0c WatchSource:0}: Error finding container 3eb4384d713a0d19539c60cd01fcdb0f30ce181686e9cc4b369634045bcc2e0c: Status 404 returned error can't find the container with id 3eb4384d713a0d19539c60cd01fcdb0f30ce181686e9cc4b369634045bcc2e0c
Oct 11 10:48:54.881227 master-0 kubenswrapper[4790]: I1011 10:48:54.881171 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8n7ld"]
Oct 11 10:48:54.882632 master-0 kubenswrapper[4790]: I1011 10:48:54.882612 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:54.885597 master-0 kubenswrapper[4790]: I1011 10:48:54.885539 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 11 10:48:54.886475 master-0 kubenswrapper[4790]: I1011 10:48:54.886421 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 11 10:48:54.886536 master-0 kubenswrapper[4790]: I1011 10:48:54.886487 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 11 10:48:54.911357 master-2 kubenswrapper[4776]: I1011 10:48:54.911314 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9nhb"]
Oct 11 10:48:54.912268 master-2 kubenswrapper[4776]: I1011 10:48:54.912242 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:54.915266 master-2 kubenswrapper[4776]: I1011 10:48:54.915223 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 11 10:48:54.915621 master-2 kubenswrapper[4776]: I1011 10:48:54.915599 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 11 10:48:54.915760 master-2 kubenswrapper[4776]: I1011 10:48:54.915634 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 11 10:48:54.927006 master-1 kubenswrapper[4771]: I1011 10:48:54.925186 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-524kt"]
Oct 11 10:48:54.927509 master-1 kubenswrapper[4771]: I1011 10:48:54.927475 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-524kt"
Oct 11 10:48:54.930934 master-1 kubenswrapper[4771]: I1011 10:48:54.930895 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 11 10:48:54.930995 master-1 kubenswrapper[4771]: I1011 10:48:54.930968 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 11 10:48:54.931307 master-1 kubenswrapper[4771]: I1011 10:48:54.931271 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 11 10:48:54.948951 master-0 kubenswrapper[4790]: I1011 10:48:54.948857 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-rtr4h"]
Oct 11 10:48:54.953996 master-0 kubenswrapper[4790]: I1011 10:48:54.953930 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:54.958078 master-0 kubenswrapper[4790]: I1011 10:48:54.958025 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Oct 11 10:48:54.962687 master-0 kubenswrapper[4790]: I1011 10:48:54.962612 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-rtr4h"]
Oct 11 10:48:54.982541 master-2 kubenswrapper[4776]: I1011 10:48:54.982457 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:54.982541 master-2 kubenswrapper[4776]: I1011 10:48:54.982544 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:54.983280 master-2 kubenswrapper[4776]: I1011 10:48:54.982585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:54.983280 master-2 kubenswrapper[4776]: I1011 10:48:54.982639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.006201 master-2 kubenswrapper[4776]: I1011 10:48:55.006128 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"da562bc3ad6c4f3edf56c5ce222e8644e275ada51cc84d1ac3562af94f3c9d9e"}
Oct 11 10:48:55.035893 master-0 kubenswrapper[4790]: I1011 10:48:55.035754 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-metrics-certs\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.035893 master-0 kubenswrapper[4790]: I1011 10:48:55.035876 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-cert\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.036123 master-0 kubenswrapper[4790]: I1011 10:48:55.035917 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hk4r\" (UniqueName: \"kubernetes.io/projected/0bd4ff7d-5743-4ecb-86e8-72a738214533-kube-api-access-8hk4r\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.036123 master-0 kubenswrapper[4790]: I1011 10:48:55.035943 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metrics-certs\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.036123 master-0 kubenswrapper[4790]: I1011 10:48:55.035977 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm682\" (UniqueName: \"kubernetes.io/projected/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-kube-api-access-jm682\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.036123 master-0 kubenswrapper[4790]: I1011 10:48:55.036024 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.036123 master-0 kubenswrapper[4790]: I1011 10:48:55.036047 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metallb-excludel2\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.084240 master-2 kubenswrapper[4776]: I1011 10:48:55.084156 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.084240 master-2 kubenswrapper[4776]: I1011 10:48:55.084217 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: I1011 10:48:55.084267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: I1011 10:48:55.084317 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: E1011 10:48:55.084478 4776 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: E1011 10:48:55.084547 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist podName:018da26f-14c3-468f-bab0-089a91b3ef26 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:55.584525981 +0000 UTC m=+1370.368952690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist") pod "speaker-g9nhb" (UID: "018da26f-14c3-468f-bab0-089a91b3ef26") : secret "metallb-memberlist" not found
Oct 11 10:48:55.085207 master-2 kubenswrapper[4776]: I1011 10:48:55.085156 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.086966 master-2 kubenswrapper[4776]: I1011 10:48:55.086916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.112399 master-2 kubenswrapper[4776]: I1011 10:48:55.112331 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.126324 master-1 kubenswrapper[4771]: I1011 10:48:55.126208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72f01dc6-72cd-4eb0-8039-57150e0758bf-metallb-excludel2\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.126324 master-1 kubenswrapper[4771]: I1011 10:48:55.126329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-metrics-certs\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.127206 master-1 kubenswrapper[4771]: I1011 10:48:55.126495 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.127206 master-1 kubenswrapper[4771]: I1011 10:48:55.126561 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqn9\" (UniqueName: \"kubernetes.io/projected/72f01dc6-72cd-4eb0-8039-57150e0758bf-kube-api-access-nmqn9\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.137388 master-0 kubenswrapper[4790]: I1011 10:48:55.137301 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hk4r\" (UniqueName: \"kubernetes.io/projected/0bd4ff7d-5743-4ecb-86e8-72a738214533-kube-api-access-8hk4r\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.137388 master-0 kubenswrapper[4790]: I1011 10:48:55.137380 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metrics-certs\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.137671 master-0 kubenswrapper[4790]: I1011 10:48:55.137422 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm682\" (UniqueName: \"kubernetes.io/projected/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-kube-api-access-jm682\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.137671 master-0 kubenswrapper[4790]: I1011 10:48:55.137474 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.137671 master-0 kubenswrapper[4790]: I1011 10:48:55.137502 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metallb-excludel2\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.137671 master-0 kubenswrapper[4790]: I1011 10:48:55.137532 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-metrics-certs\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.137671 master-0 kubenswrapper[4790]: I1011 10:48:55.137593 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-cert\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.137960 master-0 kubenswrapper[4790]: E1011 10:48:55.137879 4790 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 11 10:48:55.138105 master-0 kubenswrapper[4790]: E1011 10:48:55.138071 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist podName:7d3e23ec-dfa6-46d4-bf57-4e89ee459be5 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:55.638028715 +0000 UTC m=+612.192489167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist") pod "speaker-8n7ld" (UID: "7d3e23ec-dfa6-46d4-bf57-4e89ee459be5") : secret "metallb-memberlist" not found
Oct 11 10:48:55.138939 master-0 kubenswrapper[4790]: I1011 10:48:55.138888 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metallb-excludel2\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.140986 master-0 kubenswrapper[4790]: I1011 10:48:55.140579 4790 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 11 10:48:55.140986 master-0 kubenswrapper[4790]: I1011 10:48:55.140890 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-metrics-certs\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.146783 master-0 kubenswrapper[4790]: I1011 10:48:55.142963 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-metrics-certs\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.152088 master-0 kubenswrapper[4790]: I1011 10:48:55.152031 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bd4ff7d-5743-4ecb-86e8-72a738214533-cert\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.165953 master-0 kubenswrapper[4790]: I1011 10:48:55.165867 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm682\" (UniqueName: \"kubernetes.io/projected/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-kube-api-access-jm682\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.169887 master-0 kubenswrapper[4790]: I1011 10:48:55.169826 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hk4r\" (UniqueName: \"kubernetes.io/projected/0bd4ff7d-5743-4ecb-86e8-72a738214533-kube-api-access-8hk4r\") pod \"controller-68d546b9d8-rtr4h\" (UID: \"0bd4ff7d-5743-4ecb-86e8-72a738214533\") " pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.228326 master-1 kubenswrapper[4771]: I1011 10:48:55.228180 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72f01dc6-72cd-4eb0-8039-57150e0758bf-metallb-excludel2\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.228326 master-1 kubenswrapper[4771]: I1011 10:48:55.228297 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-metrics-certs\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.228862 master-1 kubenswrapper[4771]: I1011 10:48:55.228351 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.228862 master-1 kubenswrapper[4771]: I1011 10:48:55.228424 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqn9\" (UniqueName: \"kubernetes.io/projected/72f01dc6-72cd-4eb0-8039-57150e0758bf-kube-api-access-nmqn9\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.228862 master-1 kubenswrapper[4771]: E1011 10:48:55.228655 4771 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 11 10:48:55.228862 master-1 kubenswrapper[4771]: E1011 10:48:55.228807 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist podName:72f01dc6-72cd-4eb0-8039-57150e0758bf nodeName:}" failed. No retries permitted until 2025-10-11 10:48:55.728769685 +0000 UTC m=+1367.702996166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist") pod "speaker-524kt" (UID: "72f01dc6-72cd-4eb0-8039-57150e0758bf") : secret "metallb-memberlist" not found
Oct 11 10:48:55.230418 master-1 kubenswrapper[4771]: I1011 10:48:55.230328 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72f01dc6-72cd-4eb0-8039-57150e0758bf-metallb-excludel2\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.233892 master-1 kubenswrapper[4771]: I1011 10:48:55.233795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-metrics-certs\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.264014 master-1 kubenswrapper[4771]: I1011 10:48:55.263863 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqn9\" (UniqueName: \"kubernetes.io/projected/72f01dc6-72cd-4eb0-8039-57150e0758bf-kube-api-access-nmqn9\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt"
Oct 11 10:48:55.284824 master-0 kubenswrapper[4790]: I1011 10:48:55.284730 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"2178a4b0af6e24d7fa63bc35d0bc782018b4fb72c46afa0209a738419753769f"}
Oct 11 10:48:55.286023 master-0 kubenswrapper[4790]: I1011 10:48:55.285895 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" event={"ID":"7359c204-2acb-4c3b-b05f-2a124f3862fb","Type":"ContainerStarted","Data":"3eb4384d713a0d19539c60cd01fcdb0f30ce181686e9cc4b369634045bcc2e0c"}
Oct 11 10:48:55.298879 master-0 kubenswrapper[4790]: I1011 10:48:55.298803 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-rtr4h"
Oct 11 10:48:55.591627 master-2 kubenswrapper[4776]: I1011 10:48:55.591360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb"
Oct 11 10:48:55.591627 master-2 kubenswrapper[4776]: E1011 10:48:55.591534 4776 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 11 10:48:55.592753 master-2 kubenswrapper[4776]: E1011 10:48:55.592732 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist podName:018da26f-14c3-468f-bab0-089a91b3ef26 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:56.59159782 +0000 UTC m=+1371.376024529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist") pod "speaker-g9nhb" (UID: "018da26f-14c3-468f-bab0-089a91b3ef26") : secret "metallb-memberlist" not found
Oct 11 10:48:55.644672 master-0 kubenswrapper[4790]: I1011 10:48:55.644599 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld"
Oct 11 10:48:55.645599 master-0 kubenswrapper[4790]: E1011 10:48:55.644842 4790 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 11 10:48:55.645599 master-0 kubenswrapper[4790]: E1011 10:48:55.644931 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist podName:7d3e23ec-dfa6-46d4-bf57-4e89ee459be5 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:56.644907087 +0000 UTC m=+613.199367389 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist") pod "speaker-8n7ld" (UID: "7d3e23ec-dfa6-46d4-bf57-4e89ee459be5") : secret "metallb-memberlist" not found Oct 11 10:48:55.725679 master-0 kubenswrapper[4790]: I1011 10:48:55.725546 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-rtr4h"] Oct 11 10:48:55.730159 master-0 kubenswrapper[4790]: W1011 10:48:55.730042 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd4ff7d_5743_4ecb_86e8_72a738214533.slice/crio-b12d1120521b5dbe02d9fe1e03ed7553109738c5fbfa1f62efbc1a38cf4585a6 WatchSource:0}: Error finding container b12d1120521b5dbe02d9fe1e03ed7553109738c5fbfa1f62efbc1a38cf4585a6: Status 404 returned error can't find the container with id b12d1120521b5dbe02d9fe1e03ed7553109738c5fbfa1f62efbc1a38cf4585a6 Oct 11 10:48:55.738088 master-1 kubenswrapper[4771]: I1011 10:48:55.738016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt" Oct 11 10:48:55.738588 master-1 kubenswrapper[4771]: E1011 10:48:55.738298 4771 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 11 10:48:55.738588 master-1 kubenswrapper[4771]: E1011 10:48:55.738406 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist podName:72f01dc6-72cd-4eb0-8039-57150e0758bf nodeName:}" failed. No retries permitted until 2025-10-11 10:48:56.738381462 +0000 UTC m=+1368.712607943 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist") pod "speaker-524kt" (UID: "72f01dc6-72cd-4eb0-8039-57150e0758bf") : secret "metallb-memberlist" not found Oct 11 10:48:55.749338 master-1 kubenswrapper[4771]: I1011 10:48:55.749287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"d6423b0468f14f6975932098395821066a281fec17a5714650148996d09e9caa"} Oct 11 10:48:56.319233 master-0 kubenswrapper[4790]: I1011 10:48:56.319153 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rtr4h" event={"ID":"0bd4ff7d-5743-4ecb-86e8-72a738214533","Type":"ContainerStarted","Data":"67cf87ff9ce384175f6b582b655ebfb055948d2882a8f4b64d37bd68bfc474ef"} Oct 11 10:48:56.319233 master-0 kubenswrapper[4790]: I1011 10:48:56.319221 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rtr4h" event={"ID":"0bd4ff7d-5743-4ecb-86e8-72a738214533","Type":"ContainerStarted","Data":"b12d1120521b5dbe02d9fe1e03ed7553109738c5fbfa1f62efbc1a38cf4585a6"} Oct 11 10:48:56.391638 master-0 kubenswrapper[4790]: I1011 10:48:56.391587 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Oct 11 10:48:56.402261 master-0 kubenswrapper[4790]: I1011 10:48:56.402235 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Oct 11 10:48:56.613405 master-2 kubenswrapper[4776]: I1011 10:48:56.613346 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.617888 master-2 
kubenswrapper[4776]: I1011 10:48:56.617607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.661153 master-0 kubenswrapper[4790]: I1011 10:48:56.660936 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld" Oct 11 10:48:56.665583 master-0 kubenswrapper[4790]: I1011 10:48:56.665506 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7d3e23ec-dfa6-46d4-bf57-4e89ee459be5-memberlist\") pod \"speaker-8n7ld\" (UID: \"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5\") " pod="metallb-system/speaker-8n7ld" Oct 11 10:48:56.709132 master-0 kubenswrapper[4790]: I1011 10:48:56.709010 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8n7ld" Oct 11 10:48:56.725444 master-2 kubenswrapper[4776]: I1011 10:48:56.725382 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.756460 master-2 kubenswrapper[4776]: W1011 10:48:56.756323 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod018da26f_14c3_468f_bab0_089a91b3ef26.slice/crio-2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc WatchSource:0}: Error finding container 2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc: Status 404 returned error can't find the container with id 2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc Oct 11 10:48:56.757302 master-1 kubenswrapper[4771]: I1011 10:48:56.757243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt" Oct 11 10:48:56.763105 master-1 kubenswrapper[4771]: I1011 10:48:56.762930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72f01dc6-72cd-4eb0-8039-57150e0758bf-memberlist\") pod \"speaker-524kt\" (UID: \"72f01dc6-72cd-4eb0-8039-57150e0758bf\") " pod="metallb-system/speaker-524kt" Oct 11 10:48:56.866733 master-0 kubenswrapper[4790]: I1011 10:48:56.860546 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6"] Oct 11 10:48:56.866733 master-0 kubenswrapper[4790]: I1011 10:48:56.861332 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:56.866733 master-0 kubenswrapper[4790]: I1011 10:48:56.864608 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8"] Oct 11 10:48:56.866733 master-0 kubenswrapper[4790]: I1011 10:48:56.865527 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" Oct 11 10:48:56.879744 master-0 kubenswrapper[4790]: I1011 10:48:56.878119 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7f4xb"] Oct 11 10:48:56.879744 master-0 kubenswrapper[4790]: I1011 10:48:56.878849 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:56.879744 master-0 kubenswrapper[4790]: I1011 10:48:56.879124 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 11 10:48:56.885425 master-0 kubenswrapper[4790]: I1011 10:48:56.884926 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6"] Oct 11 10:48:56.890413 master-0 kubenswrapper[4790]: I1011 10:48:56.890358 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8"] Oct 11 10:48:56.896659 master-2 kubenswrapper[4776]: I1011 10:48:56.895440 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cwqqw"] Oct 11 10:48:56.896659 master-2 kubenswrapper[4776]: I1011 10:48:56.896390 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:56.905250 master-1 kubenswrapper[4771]: I1011 10:48:56.905172 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-djsq6"] Oct 11 10:48:56.906102 master-1 kubenswrapper[4771]: I1011 10:48:56.906077 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:56.910699 master-2 kubenswrapper[4776]: I1011 10:48:56.910633 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 11 10:48:56.911298 master-2 kubenswrapper[4776]: I1011 10:48:56.911222 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 11 10:48:56.919689 master-1 kubenswrapper[4771]: I1011 10:48:56.919632 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 11 10:48:56.921836 master-1 kubenswrapper[4771]: I1011 10:48:56.919913 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 11 10:48:56.960612 master-1 kubenswrapper[4771]: I1011 10:48:56.960504 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5pt\" (UniqueName: \"kubernetes.io/projected/3e92427e-68a9-4496-9578-a0386bd5f5b3-kube-api-access-vl5pt\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:56.961208 master-1 kubenswrapper[4771]: I1011 10:48:56.960645 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-nmstate-lock\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " 
pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:56.961208 master-1 kubenswrapper[4771]: I1011 10:48:56.960687 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-ovs-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:56.961208 master-1 kubenswrapper[4771]: I1011 10:48:56.960718 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-dbus-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:56.972321 master-0 kubenswrapper[4790]: I1011 10:48:56.972215 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-dbus-socket\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:56.972321 master-0 kubenswrapper[4790]: I1011 10:48:56.972268 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-ovs-socket\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:56.972321 master-0 kubenswrapper[4790]: I1011 10:48:56.972297 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-nmstate-lock\") pod \"nmstate-handler-7f4xb\" (UID: 
\"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:56.972321 master-0 kubenswrapper[4790]: I1011 10:48:56.972317 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrbt\" (UniqueName: \"kubernetes.io/projected/ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e-kube-api-access-nsrbt\") pod \"nmstate-metrics-fdff9cb8d-w4js8\" (UID: \"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" Oct 11 10:48:56.972671 master-0 kubenswrapper[4790]: I1011 10:48:56.972350 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba695300-f2da-45e9-a825-81d462fc2d37-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:56.972671 master-0 kubenswrapper[4790]: I1011 10:48:56.972382 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwhg\" (UniqueName: \"kubernetes.io/projected/0510dc20-c216-4f9a-b547-246dfdfc7d6f-kube-api-access-zmwhg\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:56.972671 master-0 kubenswrapper[4790]: I1011 10:48:56.972404 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vrl\" (UniqueName: \"kubernetes.io/projected/ba695300-f2da-45e9-a825-81d462fc2d37-kube-api-access-n7vrl\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.018569 master-2 kubenswrapper[4776]: I1011 10:48:57.018509 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018569 master-2 kubenswrapper[4776]: I1011 10:48:57.018566 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018850 master-2 kubenswrapper[4776]: I1011 10:48:57.018598 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018850 master-2 kubenswrapper[4776]: I1011 10:48:57.018638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.022735 master-2 kubenswrapper[4776]: I1011 10:48:57.022665 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc"} Oct 11 10:48:57.047818 master-1 kubenswrapper[4771]: I1011 10:48:57.047672 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-524kt" Oct 11 10:48:57.064281 master-1 kubenswrapper[4771]: I1011 10:48:57.064225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-nmstate-lock\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.064281 master-1 kubenswrapper[4771]: I1011 10:48:57.064285 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-ovs-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.064622 master-1 kubenswrapper[4771]: I1011 10:48:57.064314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-dbus-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.064622 master-1 kubenswrapper[4771]: I1011 10:48:57.064375 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl5pt\" (UniqueName: \"kubernetes.io/projected/3e92427e-68a9-4496-9578-a0386bd5f5b3-kube-api-access-vl5pt\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.064622 master-1 kubenswrapper[4771]: I1011 10:48:57.064627 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-ovs-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " 
pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.064948 master-1 kubenswrapper[4771]: I1011 10:48:57.064683 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-nmstate-lock\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.065010 master-1 kubenswrapper[4771]: I1011 10:48:57.064961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e92427e-68a9-4496-9578-a0386bd5f5b3-dbus-socket\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.067709 master-2 kubenswrapper[4776]: I1011 10:48:57.067621 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:48:57.069633 master-2 kubenswrapper[4776]: I1011 10:48:57.069616 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.073647 master-0 kubenswrapper[4790]: I1011 10:48:57.073517 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrbt\" (UniqueName: \"kubernetes.io/projected/ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e-kube-api-access-nsrbt\") pod \"nmstate-metrics-fdff9cb8d-w4js8\" (UID: \"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" Oct 11 10:48:57.073647 master-0 kubenswrapper[4790]: I1011 10:48:57.073578 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba695300-f2da-45e9-a825-81d462fc2d37-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.073647 master-0 kubenswrapper[4790]: I1011 10:48:57.073627 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwhg\" (UniqueName: \"kubernetes.io/projected/0510dc20-c216-4f9a-b547-246dfdfc7d6f-kube-api-access-zmwhg\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.074011 master-0 kubenswrapper[4790]: I1011 10:48:57.073678 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vrl\" (UniqueName: \"kubernetes.io/projected/ba695300-f2da-45e9-a825-81d462fc2d37-kube-api-access-n7vrl\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.074011 master-0 kubenswrapper[4790]: I1011 10:48:57.073749 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-dbus-socket\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.074011 master-0 kubenswrapper[4790]: I1011 10:48:57.073771 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-ovs-socket\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.074011 master-0 kubenswrapper[4790]: I1011 10:48:57.073802 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-nmstate-lock\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.074011 master-0 kubenswrapper[4790]: I1011 10:48:57.073875 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-nmstate-lock\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.076152 master-0 kubenswrapper[4790]: I1011 10:48:57.075087 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-dbus-socket\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.076152 master-0 kubenswrapper[4790]: I1011 10:48:57.075210 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0510dc20-c216-4f9a-b547-246dfdfc7d6f-ovs-socket\") 
pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.077491 master-2 kubenswrapper[4776]: I1011 10:48:57.077443 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 11 10:48:57.077651 master-2 kubenswrapper[4776]: I1011 10:48:57.077451 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 11 10:48:57.077942 master-0 kubenswrapper[4790]: I1011 10:48:57.077910 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba695300-f2da-45e9-a825-81d462fc2d37-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.079283 master-1 kubenswrapper[4771]: W1011 10:48:57.076812 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72f01dc6_72cd_4eb0_8039_57150e0758bf.slice/crio-1d9eb93b1d5f75ed17266fb9b1595fe17751c138092994a9052144327e88854b WatchSource:0}: Error finding container 1d9eb93b1d5f75ed17266fb9b1595fe17751c138092994a9052144327e88854b: Status 404 returned error can't find the container with id 1d9eb93b1d5f75ed17266fb9b1595fe17751c138092994a9052144327e88854b Oct 11 10:48:57.088208 master-2 kubenswrapper[4776]: I1011 10:48:57.088169 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:48:57.103762 master-1 kubenswrapper[4771]: I1011 10:48:57.103674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5pt\" (UniqueName: \"kubernetes.io/projected/3e92427e-68a9-4496-9578-a0386bd5f5b3-kube-api-access-vl5pt\") pod \"nmstate-handler-djsq6\" (UID: \"3e92427e-68a9-4496-9578-a0386bd5f5b3\") " 
pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.108809 master-0 kubenswrapper[4790]: I1011 10:48:57.108170 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vrl\" (UniqueName: \"kubernetes.io/projected/ba695300-f2da-45e9-a825-81d462fc2d37-kube-api-access-n7vrl\") pod \"nmstate-webhook-6cdbc54649-nf8q6\" (UID: \"ba695300-f2da-45e9-a825-81d462fc2d37\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.112995 master-0 kubenswrapper[4790]: I1011 10:48:57.111313 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrbt\" (UniqueName: \"kubernetes.io/projected/ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e-kube-api-access-nsrbt\") pod \"nmstate-metrics-fdff9cb8d-w4js8\" (UID: \"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" Oct 11 10:48:57.117073 master-0 kubenswrapper[4790]: I1011 10:48:57.117024 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwhg\" (UniqueName: \"kubernetes.io/projected/0510dc20-c216-4f9a-b547-246dfdfc7d6f-kube-api-access-zmwhg\") pod \"nmstate-handler-7f4xb\" (UID: \"0510dc20-c216-4f9a-b547-246dfdfc7d6f\") " pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.120501 master-2 kubenswrapper[4776]: I1011 10:48:57.120427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120530 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: 
\"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120533 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120634 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120691 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120843 master-2 kubenswrapper[4776]: I1011 10:48:57.120721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120843 master-2 kubenswrapper[4776]: I1011 10:48:57.120755 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120903 master-2 kubenswrapper[4776]: I1011 10:48:57.120840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120936 master-2 kubenswrapper[4776]: I1011 10:48:57.120757 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120936 master-2 kubenswrapper[4776]: I1011 10:48:57.120913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.155140 master-2 kubenswrapper[4776]: I1011 10:48:57.155009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.217868 master-2 kubenswrapper[4776]: I1011 10:48:57.217589 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.221919 master-2 kubenswrapper[4776]: I1011 10:48:57.221885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.221997 master-2 kubenswrapper[4776]: I1011 10:48:57.221946 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.221997 master-2 kubenswrapper[4776]: I1011 10:48:57.221977 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.223391 master-2 kubenswrapper[4776]: I1011 10:48:57.223335 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.225426 master-2 kubenswrapper[4776]: I1011 10:48:57.225380 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.229158 master-1 kubenswrapper[4771]: I1011 10:48:57.229103 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:48:57.245240 master-2 kubenswrapper[4776]: I1011 10:48:57.245003 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.250218 master-1 kubenswrapper[4771]: W1011 10:48:57.249167 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e92427e_68a9_4496_9578_a0386bd5f5b3.slice/crio-d8122906cd222cfc295fa625a57fe54a4b6985be89018fdf7057dd4f32a54278 WatchSource:0}: Error finding container d8122906cd222cfc295fa625a57fe54a4b6985be89018fdf7057dd4f32a54278: Status 404 returned error can't find the container with id d8122906cd222cfc295fa625a57fe54a4b6985be89018fdf7057dd4f32a54278 Oct 11 10:48:57.295181 master-0 kubenswrapper[4790]: I1011 10:48:57.294593 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:48:57.301481 master-1 kubenswrapper[4771]: I1011 10:48:57.301423 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69f8677c95-9ncnx"] Oct 11 10:48:57.302857 master-1 kubenswrapper[4771]: I1011 10:48:57.302817 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.306004 master-1 kubenswrapper[4771]: I1011 10:48:57.305585 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:48:57.306004 master-1 kubenswrapper[4771]: I1011 10:48:57.305618 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:48:57.306004 master-1 kubenswrapper[4771]: I1011 10:48:57.305589 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:48:57.306004 master-1 kubenswrapper[4771]: I1011 10:48:57.305779 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:48:57.306004 master-1 kubenswrapper[4771]: I1011 10:48:57.305995 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:48:57.306912 master-1 kubenswrapper[4771]: I1011 10:48:57.306876 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:48:57.309386 master-0 kubenswrapper[4790]: I1011 10:48:57.309236 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" Oct 11 10:48:57.309485 master-2 kubenswrapper[4776]: I1011 10:48:57.309403 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:48:57.312056 master-0 kubenswrapper[4790]: I1011 10:48:57.311801 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n7ld" event={"ID":"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5","Type":"ContainerStarted","Data":"69e2fe06492b039c70785483990a1baaa42493149addceeb07e44f30a579bb4d"} Oct 11 10:48:57.312056 master-0 kubenswrapper[4790]: I1011 10:48:57.311865 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n7ld" event={"ID":"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5","Type":"ContainerStarted","Data":"5b1a1555bfc740741678499598d7c6f9d36dd1f4f09eb9059069f6fa6588fe82"} Oct 11 10:48:57.315498 master-0 kubenswrapper[4790]: I1011 10:48:57.315441 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:48:57.316546 master-1 kubenswrapper[4771]: I1011 10:48:57.316485 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-9ncnx"] Oct 11 10:48:57.318478 master-0 kubenswrapper[4790]: I1011 10:48:57.318209 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rtr4h" event={"ID":"0bd4ff7d-5743-4ecb-86e8-72a738214533","Type":"ContainerStarted","Data":"f1afc80009e4967865d987eb40153b9ddb8d76b97150006a9b8af8641bef245b"} Oct 11 10:48:57.318931 master-0 kubenswrapper[4790]: I1011 10:48:57.318903 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-rtr4h" Oct 11 10:48:57.319456 master-1 kubenswrapper[4771]: I1011 10:48:57.319405 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:48:57.349360 master-0 kubenswrapper[4790]: I1011 10:48:57.349269 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-rtr4h" podStartSLOduration=2.108411691 podStartE2EDuration="3.349245366s" podCreationTimestamp="2025-10-11 10:48:54 +0000 UTC" firstStartedPulling="2025-10-11 10:48:55.87910688 +0000 UTC m=+612.433567172" lastFinishedPulling="2025-10-11 10:48:57.119940555 +0000 UTC m=+613.674400847" observedRunningTime="2025-10-11 10:48:57.345063014 +0000 UTC m=+613.899523316" watchObservedRunningTime="2025-10-11 10:48:57.349245366 +0000 UTC m=+613.903705658" Oct 11 10:48:57.369227 master-1 kubenswrapper[4771]: I1011 10:48:57.369151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " 
pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369227 master-1 kubenswrapper[4771]: I1011 10:48:57.369211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-service-ca\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369622 master-1 kubenswrapper[4771]: I1011 10:48:57.369263 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-oauth-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369622 master-1 kubenswrapper[4771]: I1011 10:48:57.369291 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj645\" (UniqueName: \"kubernetes.io/projected/733e6a5e-667b-4b9e-a359-577c976193f1-kube-api-access-lj645\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369622 master-1 kubenswrapper[4771]: I1011 10:48:57.369330 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-trusted-ca-bundle\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369622 master-1 kubenswrapper[4771]: I1011 10:48:57.369381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-console-config\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.369622 master-1 kubenswrapper[4771]: I1011 10:48:57.369414 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-oauth-config\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.370781 master-0 kubenswrapper[4790]: W1011 10:48:57.370721 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0510dc20_c216_4f9a_b547_246dfdfc7d6f.slice/crio-0892d4e25e4dfe1e070f3c79c17a0dcfeed9d6bfe113b8d352b30ef5be152020 WatchSource:0}: Error finding container 0892d4e25e4dfe1e070f3c79c17a0dcfeed9d6bfe113b8d352b30ef5be152020: Status 404 returned error can't find the container with id 0892d4e25e4dfe1e070f3c79c17a0dcfeed9d6bfe113b8d352b30ef5be152020 Oct 11 10:48:57.396121 master-2 kubenswrapper[4776]: I1011 10:48:57.396071 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.470210 master-1 kubenswrapper[4771]: I1011 10:48:57.470137 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-oauth-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470210 master-1 kubenswrapper[4771]: I1011 10:48:57.470184 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj645\" (UniqueName: \"kubernetes.io/projected/733e6a5e-667b-4b9e-a359-577c976193f1-kube-api-access-lj645\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470210 master-1 kubenswrapper[4771]: I1011 10:48:57.470227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-trusted-ca-bundle\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470658 master-1 kubenswrapper[4771]: I1011 10:48:57.470266 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-console-config\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470658 master-1 kubenswrapper[4771]: I1011 10:48:57.470289 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-oauth-config\") 
pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470658 master-1 kubenswrapper[4771]: I1011 10:48:57.470318 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.470658 master-1 kubenswrapper[4771]: I1011 10:48:57.470340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-service-ca\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.471908 master-1 kubenswrapper[4771]: I1011 10:48:57.471750 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-oauth-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.472518 master-1 kubenswrapper[4771]: I1011 10:48:57.472454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-console-config\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.472954 master-1 kubenswrapper[4771]: I1011 10:48:57.472918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-trusted-ca-bundle\") pod 
\"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.473125 master-1 kubenswrapper[4771]: I1011 10:48:57.473093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/733e6a5e-667b-4b9e-a359-577c976193f1-service-ca\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.475478 master-1 kubenswrapper[4771]: I1011 10:48:57.475106 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-serving-cert\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.476108 master-1 kubenswrapper[4771]: I1011 10:48:57.476012 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/733e6a5e-667b-4b9e-a359-577c976193f1-console-oauth-config\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.489862 master-1 kubenswrapper[4771]: I1011 10:48:57.489819 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj645\" (UniqueName: \"kubernetes.io/projected/733e6a5e-667b-4b9e-a359-577c976193f1-kube-api-access-lj645\") pod \"console-69f8677c95-9ncnx\" (UID: \"733e6a5e-667b-4b9e-a359-577c976193f1\") " pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.561783 master-0 kubenswrapper[4790]: I1011 10:48:57.559282 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6"] Oct 11 10:48:57.568335 master-0 kubenswrapper[4790]: W1011 10:48:57.568295 4790 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba695300_f2da_45e9_a825_81d462fc2d37.slice/crio-77bfba122366eee57220829ca169781ce3b68b5d7d9cd9cc3d0051c8ab99cfe5 WatchSource:0}: Error finding container 77bfba122366eee57220829ca169781ce3b68b5d7d9cd9cc3d0051c8ab99cfe5: Status 404 returned error can't find the container with id 77bfba122366eee57220829ca169781ce3b68b5d7d9cd9cc3d0051c8ab99cfe5 Oct 11 10:48:57.621959 master-1 kubenswrapper[4771]: I1011 10:48:57.621795 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:48:57.771091 master-1 kubenswrapper[4771]: I1011 10:48:57.771026 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-djsq6" event={"ID":"3e92427e-68a9-4496-9578-a0386bd5f5b3","Type":"ContainerStarted","Data":"d8122906cd222cfc295fa625a57fe54a4b6985be89018fdf7057dd4f32a54278"} Oct 11 10:48:57.773314 master-1 kubenswrapper[4771]: I1011 10:48:57.773258 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-524kt" event={"ID":"72f01dc6-72cd-4eb0-8039-57150e0758bf","Type":"ContainerStarted","Data":"1d9eb93b1d5f75ed17266fb9b1595fe17751c138092994a9052144327e88854b"} Oct 11 10:48:57.837743 master-0 kubenswrapper[4790]: I1011 10:48:57.837604 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8"] Oct 11 10:48:57.846307 master-0 kubenswrapper[4790]: W1011 10:48:57.845968 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca51cef5_fa00_4ea1_b7e6_e6e70bce9a0e.slice/crio-d280714a35416cff60c27ac7194da8e5d0a3728082b6dd69ff516ea6ceb8d117 WatchSource:0}: Error finding container d280714a35416cff60c27ac7194da8e5d0a3728082b6dd69ff516ea6ceb8d117: Status 404 returned error can't find the container with id 
d280714a35416cff60c27ac7194da8e5d0a3728082b6dd69ff516ea6ceb8d117 Oct 11 10:48:58.029194 master-2 kubenswrapper[4776]: I1011 10:48:58.029143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cwqqw" event={"ID":"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614","Type":"ContainerStarted","Data":"1cf526033748b2471ac92bdd534612627a5565219bc7675de1e9849ae155faf3"} Oct 11 10:48:58.117990 master-1 kubenswrapper[4771]: I1011 10:48:58.117937 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-9ncnx"] Oct 11 10:48:58.237997 master-1 kubenswrapper[4771]: I1011 10:48:58.237943 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:48:58.238255 master-1 kubenswrapper[4771]: I1011 10:48:58.238011 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:48:58.326812 master-0 kubenswrapper[4790]: I1011 10:48:58.326742 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" event={"ID":"ba695300-f2da-45e9-a825-81d462fc2d37","Type":"ContainerStarted","Data":"77bfba122366eee57220829ca169781ce3b68b5d7d9cd9cc3d0051c8ab99cfe5"} Oct 11 10:48:58.328749 master-0 kubenswrapper[4790]: I1011 10:48:58.328694 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n7ld" event={"ID":"7d3e23ec-dfa6-46d4-bf57-4e89ee459be5","Type":"ContainerStarted","Data":"6adc00087a258c176e89fff2b6e276c7b4340b462630d541afc18f6a4490ad96"} Oct 11 10:48:58.329260 
master-0 kubenswrapper[4790]: I1011 10:48:58.328954 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8n7ld" Oct 11 10:48:58.330044 master-0 kubenswrapper[4790]: I1011 10:48:58.329983 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" event={"ID":"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e","Type":"ContainerStarted","Data":"d280714a35416cff60c27ac7194da8e5d0a3728082b6dd69ff516ea6ceb8d117"} Oct 11 10:48:58.331184 master-0 kubenswrapper[4790]: I1011 10:48:58.331123 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7f4xb" event={"ID":"0510dc20-c216-4f9a-b547-246dfdfc7d6f","Type":"ContainerStarted","Data":"0892d4e25e4dfe1e070f3c79c17a0dcfeed9d6bfe113b8d352b30ef5be152020"} Oct 11 10:48:58.355056 master-0 kubenswrapper[4790]: I1011 10:48:58.354333 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8n7ld" podStartSLOduration=3.58132414 podStartE2EDuration="4.354308297s" podCreationTimestamp="2025-10-11 10:48:54 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.087134488 +0000 UTC m=+613.641594770" lastFinishedPulling="2025-10-11 10:48:57.860118635 +0000 UTC m=+614.414578927" observedRunningTime="2025-10-11 10:48:58.354062051 +0000 UTC m=+614.908522363" watchObservedRunningTime="2025-10-11 10:48:58.354308297 +0000 UTC m=+614.908768599" Oct 11 10:48:59.788652 master-1 kubenswrapper[4771]: I1011 10:48:59.788610 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 11 10:48:59.789265 master-1 kubenswrapper[4771]: I1011 10:48:59.789209 4771 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="d035b13d9431b1216e273c4ac7fb5eb87624d8740b70d29326082336302e3b46" exitCode=0 Oct 11 10:49:00.042736 master-2 
kubenswrapper[4776]: I1011 10:49:00.042660 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="2d85131d0b78968e260fb4a4f6f260fb925669f144ba66cb84a6e4b5e3785fd7" exitCode=0 Oct 11 10:49:00.042736 master-2 kubenswrapper[4776]: I1011 10:49:00.042734 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"2d85131d0b78968e260fb4a4f6f260fb925669f144ba66cb84a6e4b5e3785fd7"} Oct 11 10:49:00.137367 master-2 kubenswrapper[4776]: I1011 10:49:00.137309 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:49:00.800220 master-1 kubenswrapper[4771]: I1011 10:49:00.800161 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-9ncnx" event={"ID":"733e6a5e-667b-4b9e-a359-577c976193f1","Type":"ContainerStarted","Data":"aa3d533766c729d41f6c2e94ef952060ec8437e2048a8be7f1811e02f5315694"} Oct 11 10:49:01.053341 master-2 kubenswrapper[4776]: I1011 10:49:01.053298 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="810fccd45766370baebceb462bd6ca21c050ab404c82e5965f717a60ba4426b6" exitCode=0 Oct 11 10:49:01.053859 master-2 kubenswrapper[4776]: I1011 10:49:01.053361 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"810fccd45766370baebceb462bd6ca21c050ab404c82e5965f717a60ba4426b6"} Oct 11 10:49:01.057530 master-2 kubenswrapper[4776]: I1011 10:49:01.057461 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" 
event={"ID":"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2","Type":"ContainerStarted","Data":"2a73c2e72ee1fbbc2275604c3cf3ac9a8c5b90432c8781d1788942420cf09440"} Oct 11 10:49:01.446649 master-1 kubenswrapper[4771]: I1011 10:49:01.446605 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 11 10:49:01.448694 master-1 kubenswrapper[4771]: I1011 10:49:01.448653 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:01.458194 master-1 kubenswrapper[4771]: I1011 10:49:01.458140 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 11 10:49:01.555050 master-1 kubenswrapper[4771]: I1011 10:49:01.552779 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 11 10:49:01.555050 master-1 kubenswrapper[4771]: I1011 10:49:01.552899 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 11 10:49:01.555050 master-1 kubenswrapper[4771]: I1011 10:49:01.552988 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 11 10:49:01.555050 master-1 
kubenswrapper[4771]: I1011 10:49:01.553378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:49:01.555050 master-1 kubenswrapper[4771]: I1011 10:49:01.553415 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:49:01.555050 master-1 kubenswrapper[4771]: I1011 10:49:01.553450 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:49:01.656545 master-1 kubenswrapper[4771]: I1011 10:49:01.655052 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:49:01.656545 master-1 kubenswrapper[4771]: I1011 10:49:01.655093 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:49:01.656545 master-1 kubenswrapper[4771]: I1011 10:49:01.655102 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:49:01.814762 master-1 kubenswrapper[4771]: I1011 10:49:01.814690 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 11 10:49:01.816568 master-1 kubenswrapper[4771]: I1011 10:49:01.815940 4771 scope.go:117] "RemoveContainer" containerID="a15e7539d2a0c42e8c6c8995bf98ff26ca0f322daf83394df48b4f13fc42d10b" Oct 11 10:49:01.816691 master-1 kubenswrapper[4771]: I1011 10:49:01.816653 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:01.822497 master-1 kubenswrapper[4771]: I1011 10:49:01.822458 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 11 10:49:01.839927 master-1 kubenswrapper[4771]: I1011 10:49:01.839209 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 11 10:49:02.066118 master-2 kubenswrapper[4776]: I1011 10:49:02.066039 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="6e1a2e8eb0cb9ca2b15ad391e2536486bad31c4622a2380f6d76ab2751f0da07" exitCode=0 Oct 11 10:49:02.066118 master-2 kubenswrapper[4776]: I1011 10:49:02.066106 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"6e1a2e8eb0cb9ca2b15ad391e2536486bad31c4622a2380f6d76ab2751f0da07"} Oct 11 10:49:02.373058 master-0 kubenswrapper[4790]: I1011 10:49:02.372942 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7f4xb" event={"ID":"0510dc20-c216-4f9a-b547-246dfdfc7d6f","Type":"ContainerStarted","Data":"896003a5d761753682474a3ff3764bf095553cd9ed7c77c90d2ab392d86fb6ae"} Oct 11 10:49:02.373727 master-0 kubenswrapper[4790]: I1011 10:49:02.373685 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:49:02.375417 master-0 kubenswrapper[4790]: I1011 10:49:02.375387 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" 
event={"ID":"ba695300-f2da-45e9-a825-81d462fc2d37","Type":"ContainerStarted","Data":"bef3fa469f056a37de1590df8388f77cc05599e20984e7f520dca92bad92c41b"} Oct 11 10:49:02.375814 master-0 kubenswrapper[4790]: I1011 10:49:02.375797 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:49:02.377635 master-0 kubenswrapper[4790]: I1011 10:49:02.377606 4790 generic.go:334] "Generic (PLEG): container finished" podID="bd096860-a678-4b71-a23d-70ecd6b79a0d" containerID="d2b731d75bb78feada279b6f954eb95a844aede877e73198f6f2986f8451c9d3" exitCode=0 Oct 11 10:49:02.377690 master-0 kubenswrapper[4790]: I1011 10:49:02.377651 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerDied","Data":"d2b731d75bb78feada279b6f954eb95a844aede877e73198f6f2986f8451c9d3"} Oct 11 10:49:02.380548 master-0 kubenswrapper[4790]: I1011 10:49:02.380524 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" event={"ID":"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e","Type":"ContainerStarted","Data":"448d09c3a57c8ae58711dcc6850bf0448a935fc4e2d58ddf78b2acc603abae13"} Oct 11 10:49:02.380593 master-0 kubenswrapper[4790]: I1011 10:49:02.380547 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" event={"ID":"ca51cef5-fa00-4ea1-b7e6-e6e70bce9a0e","Type":"ContainerStarted","Data":"40f7437a6ab48e4ce69e619a5055f1df33b4838e11dd33faf6a0a462201c215b"} Oct 11 10:49:02.382285 master-0 kubenswrapper[4790]: I1011 10:49:02.382253 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" event={"ID":"7359c204-2acb-4c3b-b05f-2a124f3862fb","Type":"ContainerStarted","Data":"1cd4b9e757b66f6f559342df739d18f0bdbf9b68d541fd01f6bd9b58b2273cfe"} Oct 11 10:49:02.382614 master-0 
kubenswrapper[4790]: I1011 10:49:02.382592 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:49:02.430875 master-0 kubenswrapper[4790]: I1011 10:49:02.430763 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7f4xb" podStartSLOduration=2.284630475 podStartE2EDuration="6.430732828s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.372897338 +0000 UTC m=+613.927357630" lastFinishedPulling="2025-10-11 10:49:01.518999691 +0000 UTC m=+618.073459983" observedRunningTime="2025-10-11 10:49:02.400539881 +0000 UTC m=+618.955000173" watchObservedRunningTime="2025-10-11 10:49:02.430732828 +0000 UTC m=+618.985193130" Oct 11 10:49:02.448243 master-1 kubenswrapper[4771]: I1011 10:49:02.445903 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d61efaa0f96869cf2939026aad6022" path="/var/lib/kubelet/pods/42d61efaa0f96869cf2939026aad6022/volumes" Oct 11 10:49:02.449126 master-1 kubenswrapper[4771]: I1011 10:49:02.449043 4771 scope.go:117] "RemoveContainer" containerID="452189c1a156cff2357db3338f99f86d41c76ed0f97b4459672ad6a8fe0dc5c7" Oct 11 10:49:02.458264 master-0 kubenswrapper[4790]: I1011 10:49:02.458184 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" podStartSLOduration=3.7915816490000003 podStartE2EDuration="10.458160481s" podCreationTimestamp="2025-10-11 10:48:52 +0000 UTC" firstStartedPulling="2025-10-11 10:48:54.793917975 +0000 UTC m=+611.348378287" lastFinishedPulling="2025-10-11 10:49:01.460496827 +0000 UTC m=+618.014957119" observedRunningTime="2025-10-11 10:49:02.456539808 +0000 UTC m=+619.011000110" watchObservedRunningTime="2025-10-11 10:49:02.458160481 +0000 UTC m=+619.012620783" Oct 11 10:49:02.498612 master-1 kubenswrapper[4771]: I1011 10:49:02.498538 4771 
scope.go:117] "RemoveContainer" containerID="55ecf6fefa862d92619ce534057ad20c836371d13f4c0d70468214b0bd6e3db4" Oct 11 10:49:02.506374 master-0 kubenswrapper[4790]: I1011 10:49:02.505391 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8" podStartSLOduration=2.909938094 podStartE2EDuration="6.505369443s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.85354988 +0000 UTC m=+614.408010172" lastFinishedPulling="2025-10-11 10:49:01.448981219 +0000 UTC m=+618.003441521" observedRunningTime="2025-10-11 10:49:02.48129566 +0000 UTC m=+619.035755952" watchObservedRunningTime="2025-10-11 10:49:02.505369443 +0000 UTC m=+619.059829735" Oct 11 10:49:02.535378 master-1 kubenswrapper[4771]: I1011 10:49:02.535322 4771 scope.go:117] "RemoveContainer" containerID="7e5a3711f36461fe4ced62a6738267cdf151c6f22d750936a4256bced2e89c2a" Oct 11 10:49:02.576406 master-1 kubenswrapper[4771]: I1011 10:49:02.575836 4771 scope.go:117] "RemoveContainer" containerID="d035b13d9431b1216e273c4ac7fb5eb87624d8740b70d29326082336302e3b46" Oct 11 10:49:02.641407 master-1 kubenswrapper[4771]: I1011 10:49:02.641375 4771 scope.go:117] "RemoveContainer" containerID="546001aeab4a76f01af18f5f0a0232cc48a20c2025802d7d9983eb8c840e0866" Oct 11 10:49:02.832588 master-1 kubenswrapper[4771]: I1011 10:49:02.832521 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-djsq6" event={"ID":"3e92427e-68a9-4496-9578-a0386bd5f5b3","Type":"ContainerStarted","Data":"82b3117028c1393abaa55457d166e03f4b9b033326efa34c5fda13a687ec71ed"} Oct 11 10:49:02.833289 master-1 kubenswrapper[4771]: I1011 10:49:02.832616 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:49:02.836239 master-1 kubenswrapper[4771]: I1011 10:49:02.836185 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="f1e63653-b356-4bf6-b91a-6d386b1f3c33" containerID="061c58050d82ba1bcc1df769608d0f7dd7a5b57018512b29a7cd4a8db046629f" exitCode=0 Oct 11 10:49:02.836389 master-1 kubenswrapper[4771]: I1011 10:49:02.836263 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerDied","Data":"061c58050d82ba1bcc1df769608d0f7dd7a5b57018512b29a7cd4a8db046629f"} Oct 11 10:49:02.838730 master-1 kubenswrapper[4771]: I1011 10:49:02.838688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-524kt" event={"ID":"72f01dc6-72cd-4eb0-8039-57150e0758bf","Type":"ContainerStarted","Data":"13f878e8ffe2bb81a2ff9441a77dd5dc7f6643745de0f5325acf5d0ca8d6ca26"} Oct 11 10:49:02.841571 master-1 kubenswrapper[4771]: I1011 10:49:02.841540 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-9ncnx" event={"ID":"733e6a5e-667b-4b9e-a359-577c976193f1","Type":"ContainerStarted","Data":"d0a0690f05bf6419109c3dbe54649f8b09999cb9e8c0e0810ef25797a35afea4"} Oct 11 10:49:02.861543 master-1 kubenswrapper[4771]: I1011 10:49:02.861440 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-djsq6" podStartSLOduration=1.59400261 podStartE2EDuration="6.86142442s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.252868565 +0000 UTC m=+1369.227095016" lastFinishedPulling="2025-10-11 10:49:02.520290385 +0000 UTC m=+1374.494516826" observedRunningTime="2025-10-11 10:49:02.859536395 +0000 UTC m=+1374.833762836" watchObservedRunningTime="2025-10-11 10:49:02.86142442 +0000 UTC m=+1374.835650871" Oct 11 10:49:02.918929 master-1 kubenswrapper[4771]: I1011 10:49:02.918840 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69f8677c95-9ncnx" podStartSLOduration=5.918818183 podStartE2EDuration="5.918818183s" 
podCreationTimestamp="2025-10-11 10:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:49:02.916002631 +0000 UTC m=+1374.890229072" watchObservedRunningTime="2025-10-11 10:49:02.918818183 +0000 UTC m=+1374.893044624" Oct 11 10:49:03.238194 master-1 kubenswrapper[4771]: I1011 10:49:03.237556 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:49:03.238194 master-1 kubenswrapper[4771]: I1011 10:49:03.237637 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:49:03.392443 master-0 kubenswrapper[4790]: I1011 10:49:03.392366 4790 generic.go:334] "Generic (PLEG): container finished" podID="bd096860-a678-4b71-a23d-70ecd6b79a0d" containerID="a6e58df76386bb8e8ef07b134f1ffd543e0d2acceab2a91217be9721eba5984b" exitCode=0 Oct 11 10:49:03.395750 master-0 kubenswrapper[4790]: I1011 10:49:03.395257 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerDied","Data":"a6e58df76386bb8e8ef07b134f1ffd543e0d2acceab2a91217be9721eba5984b"} Oct 11 10:49:03.444845 master-0 kubenswrapper[4790]: I1011 10:49:03.444675 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" podStartSLOduration=3.554812534 podStartE2EDuration="7.444646626s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" 
firstStartedPulling="2025-10-11 10:48:57.575404992 +0000 UTC m=+614.129865284" lastFinishedPulling="2025-10-11 10:49:01.465239074 +0000 UTC m=+618.019699376" observedRunningTime="2025-10-11 10:49:02.506924185 +0000 UTC m=+619.061384497" watchObservedRunningTime="2025-10-11 10:49:03.444646626 +0000 UTC m=+619.999106958" Oct 11 10:49:03.853458 master-1 kubenswrapper[4771]: I1011 10:49:03.853267 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1e63653-b356-4bf6-b91a-6d386b1f3c33" containerID="0334ffbda62413275839f28919de6c11cdbed4cb055f314d0f4b773c855eb34c" exitCode=0 Oct 11 10:49:03.853458 master-1 kubenswrapper[4771]: I1011 10:49:03.853410 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerDied","Data":"0334ffbda62413275839f28919de6c11cdbed4cb055f314d0f4b773c855eb34c"} Oct 11 10:49:04.080011 master-2 kubenswrapper[4776]: I1011 10:49:04.079946 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cwqqw" event={"ID":"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614","Type":"ContainerStarted","Data":"db48ef04b71e4abfea32101496ffb211fb94e0e5b9db79472ce75a84db722669"} Oct 11 10:49:04.080782 master-2 kubenswrapper[4776]: I1011 10:49:04.080199 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:49:04.081912 master-2 kubenswrapper[4776]: I1011 10:49:04.081872 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"0c25f65d39c11a10d2ab94e1a284849d92d987a4ba68949b08055afb062c5a71"} Oct 11 10:49:04.085435 master-2 kubenswrapper[4776]: I1011 10:49:04.085407 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" 
event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"660146d92780a8daa1e7cc0ec9356a09e60f843c3b31e42d0bcac924fccacf2f"} Oct 11 10:49:04.085435 master-2 kubenswrapper[4776]: I1011 10:49:04.085431 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"f4cf841cf8a891127a1eb84034cb7e432bebdb81a9cdca29523823c8fcf23a03"} Oct 11 10:49:04.085567 master-2 kubenswrapper[4776]: I1011 10:49:04.085442 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"fdfb7b76d3283faf73e957e99c95a0486b90e03019656b0c33da0c5f92f4244a"} Oct 11 10:49:04.085567 master-2 kubenswrapper[4776]: I1011 10:49:04.085454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"15d9491ebe5a9a0c85a026d09f48cc74e3f7d501690126d8e7257c9710a12e3b"} Oct 11 10:49:04.087019 master-2 kubenswrapper[4776]: I1011 10:49:04.086951 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" event={"ID":"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2","Type":"ContainerStarted","Data":"6365afc3d8d83f54bd90ac532aee4f68c2513147783f4f6b8cc14f5f69b9a2aa"} Oct 11 10:49:04.108012 master-2 kubenswrapper[4776]: I1011 10:49:04.107881 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cwqqw" podStartSLOduration=1.8506049180000002 podStartE2EDuration="8.107861327s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.263827849 +0000 UTC m=+1372.048254558" lastFinishedPulling="2025-10-11 10:49:03.521084258 +0000 UTC m=+1378.305510967" observedRunningTime="2025-10-11 10:49:04.105135774 +0000 UTC 
m=+1378.889562493" watchObservedRunningTime="2025-10-11 10:49:04.107861327 +0000 UTC m=+1378.892288036" Oct 11 10:49:04.144751 master-2 kubenswrapper[4776]: I1011 10:49:04.134159 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" podStartSLOduration=3.738643448 podStartE2EDuration="7.134139239s" podCreationTimestamp="2025-10-11 10:48:57 +0000 UTC" firstStartedPulling="2025-10-11 10:49:00.128194118 +0000 UTC m=+1374.912620827" lastFinishedPulling="2025-10-11 10:49:03.523689909 +0000 UTC m=+1378.308116618" observedRunningTime="2025-10-11 10:49:04.130367027 +0000 UTC m=+1378.914793746" watchObservedRunningTime="2025-10-11 10:49:04.134139239 +0000 UTC m=+1378.918565958" Oct 11 10:49:04.404125 master-0 kubenswrapper[4790]: I1011 10:49:04.404051 4790 generic.go:334] "Generic (PLEG): container finished" podID="bd096860-a678-4b71-a23d-70ecd6b79a0d" containerID="01c897c87d8f8a7cf6baba2200820b92059430adc1563df5a24ec06d698ab7a9" exitCode=0 Oct 11 10:49:04.404942 master-0 kubenswrapper[4790]: I1011 10:49:04.404268 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerDied","Data":"01c897c87d8f8a7cf6baba2200820b92059430adc1563df5a24ec06d698ab7a9"} Oct 11 10:49:04.864583 master-1 kubenswrapper[4771]: I1011 10:49:04.864509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerDied","Data":"d934544b69458e648d28108ff72e6e855ddc5b36170c4331901f85f9b2013df3"} Oct 11 10:49:04.865202 master-1 kubenswrapper[4771]: I1011 10:49:04.864401 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1e63653-b356-4bf6-b91a-6d386b1f3c33" containerID="d934544b69458e648d28108ff72e6e855ddc5b36170c4331901f85f9b2013df3" exitCode=0 Oct 11 10:49:04.884126 master-1 kubenswrapper[4771]: I1011 10:49:04.877502 
4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-524kt" event={"ID":"72f01dc6-72cd-4eb0-8039-57150e0758bf","Type":"ContainerStarted","Data":"d37b1e37373baf43f3f67ce871913965475fa1ee724fe4135d14b88b1c10ebba"} Oct 11 10:49:04.884126 master-1 kubenswrapper[4771]: I1011 10:49:04.879006 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-524kt" Oct 11 10:49:04.932129 master-1 kubenswrapper[4771]: I1011 10:49:04.930900 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-524kt" podStartSLOduration=4.412853247 podStartE2EDuration="10.930883435s" podCreationTimestamp="2025-10-11 10:48:54 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.083822787 +0000 UTC m=+1369.058049258" lastFinishedPulling="2025-10-11 10:49:03.601853005 +0000 UTC m=+1375.576079446" observedRunningTime="2025-10-11 10:49:04.929979558 +0000 UTC m=+1376.904205999" watchObservedRunningTime="2025-10-11 10:49:04.930883435 +0000 UTC m=+1376.905109876" Oct 11 10:49:05.096175 master-2 kubenswrapper[4776]: I1011 10:49:05.096114 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"6487986e2b8d76acb82a49f89b5d910e5b5cf96fda73c1595fbde2cc0649dd49"} Oct 11 10:49:05.217210 master-2 kubenswrapper[4776]: I1011 10:49:05.217127 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9nhb" podStartSLOduration=3.33585625 podStartE2EDuration="11.217108382s" podCreationTimestamp="2025-10-11 10:48:54 +0000 UTC" firstStartedPulling="2025-10-11 10:48:56.758727083 +0000 UTC m=+1371.543153792" lastFinishedPulling="2025-10-11 10:49:04.639979215 +0000 UTC m=+1379.424405924" observedRunningTime="2025-10-11 10:49:05.21443465 +0000 UTC m=+1379.998861409" watchObservedRunningTime="2025-10-11 10:49:05.217108382 +0000 UTC m=+1380.001535091" Oct 
11 10:49:05.304111 master-0 kubenswrapper[4790]: I1011 10:49:05.304027 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-rtr4h" Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415121 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"363cfa304dabd2abd3b423133ce71d5412423784d251e2d462483f789125e224"} Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415180 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"846d6dea051489b20a5fdf0e50be9061753ef7a09012c2a6880726df214d6341"} Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415195 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"3814e1169066eb5f133a731006069b1dc8aabf8a2041ded1e0e1ac3eea968aa0"} Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415207 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"437fd564f674ad40cbfea75214746496978529e1543cfb68813f78d6407c586d"} Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415218 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"6e6bbd5fcab64812b3669f62cb3e668299cc24f61ae6fb0eabcfa62ac935a3b6"} Oct 11 10:49:05.415207 master-0 kubenswrapper[4790]: I1011 10:49:05.415228 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5xkrb" 
event={"ID":"bd096860-a678-4b71-a23d-70ecd6b79a0d","Type":"ContainerStarted","Data":"5caed0c6041a80b902e0ba6a8c947468ac24bb32564bcf70b083f76af88ae8d4"} Oct 11 10:49:05.416203 master-0 kubenswrapper[4790]: I1011 10:49:05.416035 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:49:05.896200 master-1 kubenswrapper[4771]: I1011 10:49:05.896120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"06797530c40fd5883bac35a7432b5b6b13c75d2f48ebb82338c56a27a05cc991"} Oct 11 10:49:05.896200 master-1 kubenswrapper[4771]: I1011 10:49:05.896199 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"6656112c9be2dff2eeb0f857ebb0a4ef3d7f1d0a1e0b9003159fc88ccbdf3b65"} Oct 11 10:49:05.897281 master-1 kubenswrapper[4771]: I1011 10:49:05.896224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"d4906bccb5f00bc5ad9a090e2b636be0c07ebd66559498406de5aa2630b2b54d"} Oct 11 10:49:05.942751 master-0 kubenswrapper[4790]: I1011 10:49:05.942540 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5xkrb" podStartSLOduration=6.308042138 podStartE2EDuration="12.942504571s" podCreationTimestamp="2025-10-11 10:48:53 +0000 UTC" firstStartedPulling="2025-10-11 10:48:54.828230333 +0000 UTC m=+611.382690625" lastFinishedPulling="2025-10-11 10:49:01.462692766 +0000 UTC m=+618.017153058" observedRunningTime="2025-10-11 10:49:05.937275861 +0000 UTC m=+622.491736213" watchObservedRunningTime="2025-10-11 10:49:05.942504571 +0000 UTC m=+622.496964903" Oct 11 10:49:06.106438 master-2 kubenswrapper[4776]: I1011 10:49:06.106384 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"abe839578a5067a2c1195dd85fb5d672d1e4d38a472f5351b4b8f42fcad85ae8"} Oct 11 10:49:06.106438 master-2 kubenswrapper[4776]: I1011 10:49:06.106436 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"57b0bf7b7f3b477b71ff3074a899bba6b58a425b15b987e5bcb3953f1065229d"} Oct 11 10:49:06.107152 master-2 kubenswrapper[4776]: I1011 10:49:06.106534 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9nhb" Oct 11 10:49:06.146934 master-2 kubenswrapper[4776]: I1011 10:49:06.146840 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hwrzt" podStartSLOduration=2.701471696 podStartE2EDuration="13.146821972s" podCreationTimestamp="2025-10-11 10:48:53 +0000 UTC" firstStartedPulling="2025-10-11 10:48:54.45303563 +0000 UTC m=+1369.237462339" lastFinishedPulling="2025-10-11 10:49:04.898385906 +0000 UTC m=+1379.682812615" observedRunningTime="2025-10-11 10:49:06.142498085 +0000 UTC m=+1380.926924794" watchObservedRunningTime="2025-10-11 10:49:06.146821972 +0000 UTC m=+1380.931248691" Oct 11 10:49:06.914549 master-1 kubenswrapper[4771]: I1011 10:49:06.914451 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"8a3c45be82c19ad78af58745d6ecb1937c5ad168c9d253338098a26c219e7bd0"} Oct 11 10:49:06.915681 master-1 kubenswrapper[4771]: I1011 10:49:06.915610 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lvzhx" Oct 11 10:49:06.915992 master-1 kubenswrapper[4771]: I1011 10:49:06.915913 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"f6bc0bac91103a3736bac8cdadc3f57883d3d35f6390a75d1223cb724e09ca3d"} Oct 11 10:49:06.916295 master-1 kubenswrapper[4771]: I1011 10:49:06.916220 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lvzhx" event={"ID":"f1e63653-b356-4bf6-b91a-6d386b1f3c33","Type":"ContainerStarted","Data":"ce536cb60007c1de48f428cc06d44c25cbb4e9e6d2b2e852c371a1ae4856ad23"} Oct 11 10:49:06.955752 master-1 kubenswrapper[4771]: I1011 10:49:06.952259 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lvzhx" podStartSLOduration=6.434047858 podStartE2EDuration="13.952229776s" podCreationTimestamp="2025-10-11 10:48:53 +0000 UTC" firstStartedPulling="2025-10-11 10:48:54.930977546 +0000 UTC m=+1366.905203997" lastFinishedPulling="2025-10-11 10:49:02.449159474 +0000 UTC m=+1374.423385915" observedRunningTime="2025-10-11 10:49:06.950032642 +0000 UTC m=+1378.924259163" watchObservedRunningTime="2025-10-11 10:49:06.952229776 +0000 UTC m=+1378.926456257" Oct 11 10:49:07.113957 master-2 kubenswrapper[4776]: I1011 10:49:07.113901 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:07.259047 master-1 kubenswrapper[4771]: I1011 10:49:07.258898 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-djsq6" Oct 11 10:49:07.340546 master-0 kubenswrapper[4790]: I1011 10:49:07.340439 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7f4xb" Oct 11 10:49:07.622402 master-1 kubenswrapper[4771]: I1011 10:49:07.622283 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:49:07.622808 master-1 kubenswrapper[4771]: I1011 10:49:07.622437 4771 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:49:07.631395 master-1 kubenswrapper[4771]: I1011 10:49:07.631308 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:49:07.946542 master-1 kubenswrapper[4771]: I1011 10:49:07.945770 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69f8677c95-9ncnx" Oct 11 10:49:08.069255 master-0 kubenswrapper[4790]: I1011 10:49:08.068550 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"] Oct 11 10:49:08.238518 master-1 kubenswrapper[4771]: I1011 10:49:08.238394 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:49:08.238518 master-1 kubenswrapper[4771]: I1011 10:49:08.238476 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:49:09.302618 master-2 kubenswrapper[4776]: I1011 10:49:09.302550 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:09.339716 master-2 kubenswrapper[4776]: I1011 10:49:09.339406 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:09.503572 master-1 kubenswrapper[4771]: I1011 10:49:09.503218 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-1"] Oct 11 10:49:09.504303 master-2 
kubenswrapper[4776]: I1011 10:49:09.504173 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:49:09.512798 master-1 kubenswrapper[4771]: I1011 10:49:09.512722 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-5-master-1"] Oct 11 10:49:09.513721 master-2 kubenswrapper[4776]: I1011 10:49:09.513636 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:49:09.673214 master-0 kubenswrapper[4790]: I1011 10:49:09.673124 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:49:09.712254 master-0 kubenswrapper[4790]: I1011 10:49:09.712184 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:49:09.758001 master-1 kubenswrapper[4771]: I1011 10:49:09.757838 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lvzhx" Oct 11 10:49:09.795772 master-1 kubenswrapper[4771]: I1011 10:49:09.795676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lvzhx" Oct 11 10:49:10.068118 master-2 kubenswrapper[4776]: I1011 10:49:10.068064 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" path="/var/lib/kubelet/pods/ebeec22d-9309-4efd-bbc0-f44c750a258c/volumes" Oct 11 10:49:10.449474 master-1 kubenswrapper[4771]: I1011 10:49:10.449386 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f830cc-d36c-4ccd-97cb-2d4a99726684" path="/var/lib/kubelet/pods/f0f830cc-d36c-4ccd-97cb-2d4a99726684/volumes" Oct 11 10:49:12.258153 master-2 kubenswrapper[4776]: I1011 10:49:12.258042 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:49:12.436693 master-1 kubenswrapper[4771]: 
I1011 10:49:12.436604 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:12.464963 master-1 kubenswrapper[4771]: I1011 10:49:12.464896 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="dd9c0979-c66d-405d-b41b-02fec7c5a5da" Oct 11 10:49:12.464963 master-1 kubenswrapper[4771]: I1011 10:49:12.464954 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="dd9c0979-c66d-405d-b41b-02fec7c5a5da" Oct 11 10:49:12.492901 master-1 kubenswrapper[4771]: I1011 10:49:12.492781 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:12.493793 master-1 kubenswrapper[4771]: I1011 10:49:12.493712 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:49:12.508135 master-1 kubenswrapper[4771]: I1011 10:49:12.508064 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:49:12.519099 master-1 kubenswrapper[4771]: I1011 10:49:12.519042 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:12.524428 master-1 kubenswrapper[4771]: I1011 10:49:12.524349 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 11 10:49:12.549997 master-1 kubenswrapper[4771]: W1011 10:49:12.549897 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23141951a25391899fad7b9f2d5b6739.slice/crio-96af06f738dd615b275d7cdbc3ed2324efae9eec171f4d91d3b8d40221367da9 WatchSource:0}: Error finding container 96af06f738dd615b275d7cdbc3ed2324efae9eec171f4d91d3b8d40221367da9: Status 404 returned error can't find the container with id 96af06f738dd615b275d7cdbc3ed2324efae9eec171f4d91d3b8d40221367da9 Oct 11 10:49:12.969469 master-1 kubenswrapper[4771]: I1011 10:49:12.969403 4771 generic.go:334] "Generic (PLEG): container finished" podID="23141951a25391899fad7b9f2d5b6739" containerID="5e79b10186bcde1683e0ff15f9bb04b0604789f489cf69d5e42d464f0d1d0aed" exitCode=0 Oct 11 10:49:12.969469 master-1 kubenswrapper[4771]: I1011 10:49:12.969468 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerDied","Data":"5e79b10186bcde1683e0ff15f9bb04b0604789f489cf69d5e42d464f0d1d0aed"} Oct 11 10:49:12.969901 master-1 kubenswrapper[4771]: I1011 10:49:12.969507 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"96af06f738dd615b275d7cdbc3ed2324efae9eec171f4d91d3b8d40221367da9"} Oct 11 10:49:13.238499 master-1 kubenswrapper[4771]: I1011 10:49:13.238408 4771 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 11 10:49:13.238786 master-1 kubenswrapper[4771]: I1011 10:49:13.238496 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="86b914fa-4ccd-42fb-965a-a1bc19442489" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 11 10:49:13.722310 master-1 kubenswrapper[4771]: I1011 10:49:13.722238 4771 scope.go:117] "RemoveContainer" containerID="2b7fb64c483453dbfbd93869288690ed38d6d29cb105ac6ec22c06d0d9551aa1" Oct 11 10:49:13.994465 master-1 kubenswrapper[4771]: I1011 10:49:13.994347 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"6981b4508ac74e45d282fe5a59f106dabfea7c7c3293de729e9d6f0db68c7510"} Oct 11 10:49:13.994465 master-1 kubenswrapper[4771]: I1011 10:49:13.994465 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"07c0fc0bccb7b7da4962275bebaa5f5ab1861795bc9d86159b3e163ce05cddac"} Oct 11 10:49:13.994832 master-1 kubenswrapper[4771]: I1011 10:49:13.994490 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"78130b5301fab8a631d47c753616bc63c2d7d8f2931253367a84eb0adf8538fb"} Oct 11 10:49:14.226136 master-0 kubenswrapper[4790]: I1011 10:49:14.226055 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w" Oct 11 10:49:14.311368 master-2 kubenswrapper[4776]: I1011 10:49:14.308802 4776 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:14.675287 master-0 kubenswrapper[4790]: I1011 10:49:14.675195 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5xkrb" Oct 11 10:49:14.764532 master-1 kubenswrapper[4771]: I1011 10:49:14.760986 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lvzhx" Oct 11 10:49:15.004995 master-1 kubenswrapper[4771]: I1011 10:49:15.004941 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"761b6c7301a42f97ce8830816995738e797ac6ef0b6d2847f3ce0b1a956aa630"} Oct 11 10:49:15.004995 master-1 kubenswrapper[4771]: I1011 10:49:15.004998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"6b790055ab747dc39b0de5102915f7f68179102f3a383ae626b746d0c22b0f1f"} Oct 11 10:49:15.005256 master-1 kubenswrapper[4771]: I1011 10:49:15.005177 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:15.579762 master-2 kubenswrapper[4776]: I1011 10:49:15.579713 4776 scope.go:117] "RemoveContainer" containerID="25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1" Oct 11 10:49:16.713960 master-0 kubenswrapper[4790]: I1011 10:49:16.713870 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8n7ld" Oct 11 10:49:16.730308 master-2 kubenswrapper[4776]: I1011 10:49:16.730198 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9nhb" Oct 11 10:49:17.053735 master-1 kubenswrapper[4771]: I1011 10:49:17.053636 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/speaker-524kt" Oct 11 10:49:17.079663 master-1 kubenswrapper[4771]: I1011 10:49:17.079551 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=5.079524857 podStartE2EDuration="5.079524857s" podCreationTimestamp="2025-10-11 10:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:49:15.30612152 +0000 UTC m=+1387.280347971" watchObservedRunningTime="2025-10-11 10:49:17.079524857 +0000 UTC m=+1389.053751338" Oct 11 10:49:17.301477 master-0 kubenswrapper[4790]: I1011 10:49:17.301394 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6" Oct 11 10:49:17.520434 master-1 kubenswrapper[4771]: I1011 10:49:17.520224 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:17.520434 master-1 kubenswrapper[4771]: I1011 10:49:17.520289 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:17.533787 master-1 kubenswrapper[4771]: I1011 10:49:17.533707 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:18.036692 master-1 kubenswrapper[4771]: I1011 10:49:18.036611 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 11 10:49:18.245177 master-1 kubenswrapper[4771]: I1011 10:49:18.245102 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 11 10:49:22.347286 master-2 kubenswrapper[4776]: I1011 10:49:22.347219 4776 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-console/console-6f9d445f57-z6k82" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" containerID="cri-o://64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" gracePeriod=15 Oct 11 10:49:22.791841 master-2 kubenswrapper[4776]: I1011 10:49:22.791162 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-z6k82_9ac259b6-cf42-49b4-b1b7-76cc9072d059/console/0.log" Oct 11 10:49:22.791841 master-2 kubenswrapper[4776]: I1011 10:49:22.791266 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:49:22.823460 master-2 kubenswrapper[4776]: I1011 10:49:22.823390 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824041 master-2 kubenswrapper[4776]: I1011 10:49:22.824004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824098 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824098 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config" (OuterVolumeSpecName: "console-config") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824154 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824196 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824255 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824558 master-2 kubenswrapper[4776]: I1011 10:49:22.824526 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.825210 master-2 kubenswrapper[4776]: I1011 10:49:22.825175 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.825336 master-2 kubenswrapper[4776]: I1011 10:49:22.825285 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca" (OuterVolumeSpecName: "service-ca") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.826575 master-2 kubenswrapper[4776]: I1011 10:49:22.826408 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.831283 master-2 kubenswrapper[4776]: I1011 10:49:22.830789 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:49:22.831283 master-2 kubenswrapper[4776]: I1011 10:49:22.831072 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8" (OuterVolumeSpecName: "kube-api-access-tzqm8") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "kube-api-access-tzqm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:49:22.833546 master-2 kubenswrapper[4776]: I1011 10:49:22.833528 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925608 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925645 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925655 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925665 4776 reconciler_common.go:293] "Volume 
detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925968 master-2 kubenswrapper[4776]: I1011 10:49:22.925713 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925968 master-2 kubenswrapper[4776]: I1011 10:49:22.925725 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:23.252269 master-2 kubenswrapper[4776]: I1011 10:49:23.252162 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-z6k82_9ac259b6-cf42-49b4-b1b7-76cc9072d059/console/0.log" Oct 11 10:49:23.252661 master-2 kubenswrapper[4776]: I1011 10:49:23.252293 4776 generic.go:334] "Generic (PLEG): container finished" podID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" exitCode=2 Oct 11 10:49:23.252661 master-2 kubenswrapper[4776]: I1011 10:49:23.252385 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252367 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerDied","Data":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"} Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252837 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerDied","Data":"88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21"} Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252892 4776 scope.go:117] "RemoveContainer" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289065 master-2 kubenswrapper[4776]: I1011 10:49:23.288616 4776 scope.go:117] "RemoveContainer" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289536 master-2 kubenswrapper[4776]: E1011 10:49:23.289429 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": container with ID starting with 64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c not found: ID does not exist" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289727 master-2 kubenswrapper[4776]: I1011 10:49:23.289601 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"} err="failed to get container status \"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": rpc error: code = NotFound desc = could not find container 
\"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": container with ID starting with 64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c not found: ID does not exist" Oct 11 10:49:23.298382 master-2 kubenswrapper[4776]: I1011 10:49:23.298323 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:49:23.302584 master-2 kubenswrapper[4776]: I1011 10:49:23.302523 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:49:23.946995 master-2 kubenswrapper[4776]: I1011 10:49:23.946919 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: E1011 10:49:23.947237 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: I1011 10:49:23.947254 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: I1011 10:49:23.947411 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.948027 master-2 kubenswrapper[4776]: I1011 10:49:23.948000 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:23.956363 master-2 kubenswrapper[4776]: I1011 10:49:23.956312 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:49:23.957403 master-2 kubenswrapper[4776]: I1011 10:49:23.957345 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:49:23.957806 master-2 kubenswrapper[4776]: I1011 10:49:23.957769 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:49:23.958360 master-2 kubenswrapper[4776]: I1011 10:49:23.958306 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:49:23.958605 master-2 kubenswrapper[4776]: I1011 10:49:23.958415 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:49:23.958926 master-2 kubenswrapper[4776]: I1011 10:49:23.958848 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:49:23.976765 master-2 kubenswrapper[4776]: I1011 10:49:23.973071 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:49:23.976765 master-2 kubenswrapper[4776]: I1011 10:49:23.975621 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:24.044244 master-2 kubenswrapper[4776]: I1011 10:49:24.044096 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.044495 
master-2 kubenswrapper[4776]: I1011 10:49:24.044221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.044854 master-2 kubenswrapper[4776]: I1011 10:49:24.044814 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045106 master-2 kubenswrapper[4776]: I1011 10:49:24.045051 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045187 master-2 kubenswrapper[4776]: I1011 10:49:24.045112 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045245 master-2 kubenswrapper[4776]: I1011 10:49:24.045183 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod 
\"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045427 master-2 kubenswrapper[4776]: I1011 10:49:24.045349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.070656 master-2 kubenswrapper[4776]: I1011 10:49:24.070592 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" path="/var/lib/kubelet/pods/9ac259b6-cf42-49b4-b1b7-76cc9072d059/volumes" Oct 11 10:49:24.147335 master-2 kubenswrapper[4776]: I1011 10:49:24.147247 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147482 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " 
pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147568 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147619 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.150360 master-2 kubenswrapper[4776]: I1011 10:49:24.150279 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: 
\"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.151954 master-2 kubenswrapper[4776]: I1011 10:49:24.151906 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.152142 master-2 kubenswrapper[4776]: I1011 10:49:24.152106 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.153180 master-2 kubenswrapper[4776]: I1011 10:49:24.152989 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.155597 master-2 kubenswrapper[4776]: I1011 10:49:24.155537 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.156484 master-2 kubenswrapper[4776]: I1011 10:49:24.156445 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: 
\"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.188217 master-2 kubenswrapper[4776]: I1011 10:49:24.188139 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.271345 master-2 kubenswrapper[4776]: I1011 10:49:24.271227 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.675882 master-2 kubenswrapper[4776]: I1011 10:49:24.675852 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:24.682659 master-2 kubenswrapper[4776]: W1011 10:49:24.682632 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod722d06e2_c934_4ba0_82e4_51c4b2104851.slice/crio-cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e WatchSource:0}: Error finding container cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e: Status 404 returned error can't find the container with id cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e Oct 11 10:49:25.276239 master-2 kubenswrapper[4776]: I1011 10:49:25.276176 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-z9d9d" event={"ID":"722d06e2-c934-4ba0-82e4-51c4b2104851","Type":"ContainerStarted","Data":"044fa5901fba56d1be7c30b3901846c5c8e6b627ff7cc261145d28f59a7e889c"} Oct 11 10:49:25.276239 master-2 kubenswrapper[4776]: I1011 10:49:25.276224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-z9d9d" 
event={"ID":"722d06e2-c934-4ba0-82e4-51c4b2104851","Type":"ContainerStarted","Data":"cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e"} Oct 11 10:49:25.314696 master-2 kubenswrapper[4776]: I1011 10:49:25.314592 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69f8677c95-z9d9d" podStartSLOduration=28.314569281 podStartE2EDuration="28.314569281s" podCreationTimestamp="2025-10-11 10:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:49:25.311254032 +0000 UTC m=+1400.095680781" watchObservedRunningTime="2025-10-11 10:49:25.314569281 +0000 UTC m=+1400.098996000" Oct 11 10:49:26.275332 master-0 kubenswrapper[4790]: I1011 10:49:26.275235 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-rxlsk"] Oct 11 10:49:26.277233 master-0 kubenswrapper[4790]: I1011 10:49:26.277174 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.281454 master-0 kubenswrapper[4790]: I1011 10:49:26.281404 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Oct 11 10:49:26.297295 master-2 kubenswrapper[4776]: I1011 10:49:26.297233 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-kjcgl"] Oct 11 10:49:26.298172 master-2 kubenswrapper[4776]: I1011 10:49:26.298138 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.301470 master-1 kubenswrapper[4771]: I1011 10:49:26.301330 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-l9x5s"] Oct 11 10:49:26.301519 master-2 kubenswrapper[4776]: I1011 10:49:26.301481 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Oct 11 10:49:26.302220 master-2 kubenswrapper[4776]: I1011 10:49:26.302193 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Oct 11 10:49:26.302462 master-2 kubenswrapper[4776]: I1011 10:49:26.302423 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302283 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-registration-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302368 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-sys\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302409 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-node-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 
11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302440 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-device-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302469 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-metrics-cert\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302523 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-pod-volumes-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.302642 master-0 kubenswrapper[4790]: I1011 10:49:26.302645 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-lvmd-config\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.303118 master-1 kubenswrapper[4771]: I1011 10:49:26.303084 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.303266 master-0 kubenswrapper[4790]: I1011 10:49:26.302686 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-run-udev\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.303266 master-0 kubenswrapper[4790]: I1011 10:49:26.302757 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-file-lock-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.303266 master-0 kubenswrapper[4790]: I1011 10:49:26.302835 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-csi-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.303266 master-0 kubenswrapper[4790]: I1011 10:49:26.302903 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdl74\" (UniqueName: \"kubernetes.io/projected/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-kube-api-access-pdl74\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.307544 master-1 kubenswrapper[4771]: I1011 10:49:26.307471 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Oct 11 10:49:26.308474 master-1 kubenswrapper[4771]: I1011 10:49:26.307607 4771 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Oct 11 10:49:26.308474 master-1 kubenswrapper[4771]: I1011 10:49:26.307503 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Oct 11 10:49:26.317230 master-0 kubenswrapper[4790]: I1011 10:49:26.317047 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-rxlsk"] Oct 11 10:49:26.325816 master-2 kubenswrapper[4776]: I1011 10:49:26.325766 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-kjcgl"] Oct 11 10:49:26.331664 master-1 kubenswrapper[4771]: I1011 10:49:26.329717 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-l9x5s"] Oct 11 10:49:26.380553 master-2 kubenswrapper[4776]: I1011 10:49:26.380488 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380798 master-2 kubenswrapper[4776]: I1011 10:49:26.380767 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380857 master-2 kubenswrapper[4776]: I1011 10:49:26.380823 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380912 
master-2 kubenswrapper[4776]: I1011 10:49:26.380896 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380960 master-2 kubenswrapper[4776]: I1011 10:49:26.380920 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380960 master-2 kubenswrapper[4776]: I1011 10:49:26.380956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.380975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.381000 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 
10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.381021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381173 master-2 kubenswrapper[4776]: I1011 10:49:26.381042 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381173 master-2 kubenswrapper[4776]: I1011 10:49:26.381065 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.404522 master-0 kubenswrapper[4790]: I1011 10:49:26.404441 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-lvmd-config\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.404522 master-0 kubenswrapper[4790]: I1011 10:49:26.404505 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-run-udev\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.404522 master-0 kubenswrapper[4790]: 
I1011 10:49:26.404526 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-file-lock-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404556 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-csi-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404586 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdl74\" (UniqueName: \"kubernetes.io/projected/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-kube-api-access-pdl74\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404625 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-registration-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404649 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-sys\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404672 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-node-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404690 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-device-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404906 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-metrics-cert\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404931 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-pod-volumes-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404942 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-lvmd-config\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.404942 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: 
\"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-run-udev\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.405042 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-device-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.405308 master-0 kubenswrapper[4790]: I1011 10:49:26.405150 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-sys\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.406085 master-0 kubenswrapper[4790]: I1011 10:49:26.405324 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-file-lock-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.406085 master-0 kubenswrapper[4790]: I1011 10:49:26.405473 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-registration-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.406085 master-0 kubenswrapper[4790]: I1011 10:49:26.405523 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-csi-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: 
\"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.406085 master-0 kubenswrapper[4790]: I1011 10:49:26.405721 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-pod-volumes-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.406085 master-0 kubenswrapper[4790]: I1011 10:49:26.405931 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-node-plugin-dir\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.408527 master-0 kubenswrapper[4790]: I1011 10:49:26.408487 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-metrics-cert\") pod \"vg-manager-rxlsk\" (UID: \"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.409568 master-1 kubenswrapper[4771]: I1011 10:49:26.409460 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-pod-volumes-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409630 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-metrics-cert\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " 
pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409686 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-device-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409735 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-run-udev\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-file-lock-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-registration-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.409938 master-1 kubenswrapper[4771]: I1011 10:49:26.409926 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-node-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: 
\"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.410427 master-1 kubenswrapper[4771]: I1011 10:49:26.409980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-lvmd-config\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.410427 master-1 kubenswrapper[4771]: I1011 10:49:26.410040 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-sys\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.410427 master-1 kubenswrapper[4771]: I1011 10:49:26.410126 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-csi-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.410427 master-1 kubenswrapper[4771]: I1011 10:49:26.410187 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6w2m\" (UniqueName: \"kubernetes.io/projected/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-kube-api-access-v6w2m\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:26.423790 master-0 kubenswrapper[4790]: I1011 10:49:26.423676 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdl74\" (UniqueName: \"kubernetes.io/projected/3d08adb1-c6cb-41b9-a68b-68a1e41b883a-kube-api-access-pdl74\") pod \"vg-manager-rxlsk\" (UID: 
\"3d08adb1-c6cb-41b9-a68b-68a1e41b883a\") " pod="openshift-storage/vg-manager-rxlsk" Oct 11 10:49:26.483005 master-2 kubenswrapper[4776]: I1011 10:49:26.482931 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483005 master-2 kubenswrapper[4776]: I1011 10:49:26.483005 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483121 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483157 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483147 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 
master-2 kubenswrapper[4776]: I1011 10:49:26.483264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 master-2 kubenswrapper[4776]: I1011 10:49:26.483272 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 master-2 kubenswrapper[4776]: I1011 10:49:26.483290 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483395 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483422 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483470 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483516 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483519 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483604 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: 
\"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483700 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483699 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.487771 master-2 kubenswrapper[4776]: I1011 10:49:26.487732 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.511804 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-pod-volumes-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-metrics-cert\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-device-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-pod-volumes-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512160 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-run-udev\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-file-lock-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512268 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-registration-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-run-udev\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512315 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-node-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512334 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-device-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512410 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-lvmd-config\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512503 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-sys\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-csi-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512656 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6w2m\" (UniqueName: \"kubernetes.io/projected/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-kube-api-access-v6w2m\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512683 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-file-lock-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-registration-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512925 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-sys\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.514941 master-1 kubenswrapper[4771]: I1011 10:49:26.512946 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-lvmd-config\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.516115 master-2 kubenswrapper[4776]: I1011 10:49:26.516039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.516522 master-1 kubenswrapper[4771]: I1011 10:49:26.512969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-node-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.516522 master-1 kubenswrapper[4771]: I1011 10:49:26.513093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-csi-plugin-dir\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.517698 master-1 kubenswrapper[4771]: I1011 10:49:26.517642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-metrics-cert\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.535149 master-1 kubenswrapper[4771]: I1011 10:49:26.535074 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6w2m\" (UniqueName: \"kubernetes.io/projected/5f47a805-6ec5-4d90-a1be-dfaec7c5c818-kube-api-access-v6w2m\") pod \"vg-manager-l9x5s\" (UID: \"5f47a805-6ec5-4d90-a1be-dfaec7c5c818\") " pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:26.609195 master-0 kubenswrapper[4790]: I1011 10:49:26.609102 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-rxlsk"
Oct 11 10:49:26.614897 master-2 kubenswrapper[4776]: I1011 10:49:26.614762 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:26.638161 master-1 kubenswrapper[4771]: I1011 10:49:26.637966 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:27.099886 master-0 kubenswrapper[4790]: I1011 10:49:27.099792 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-rxlsk"]
Oct 11 10:49:27.106839 master-2 kubenswrapper[4776]: I1011 10:49:27.106582 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-kjcgl"]
Oct 11 10:49:27.113753 master-1 kubenswrapper[4771]: I1011 10:49:27.113671 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-l9x5s"]
Oct 11 10:49:27.114780 master-0 kubenswrapper[4790]: W1011 10:49:27.114245 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d08adb1_c6cb_41b9_a68b_68a1e41b883a.slice/crio-385f9d8067ff024e7eb1474454370d2badf2807493d5a215237ca9ab23771762 WatchSource:0}: Error finding container 385f9d8067ff024e7eb1474454370d2badf2807493d5a215237ca9ab23771762: Status 404 returned error can't find the container with id 385f9d8067ff024e7eb1474454370d2badf2807493d5a215237ca9ab23771762
Oct 11 10:49:27.120970 master-2 kubenswrapper[4776]: W1011 10:49:27.120913 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5648f79c_6a71_4f6f_8bde_b85a18b200bb.slice/crio-9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186 WatchSource:0}: Error finding container 9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186: Status 404 returned error can't find the container with id 9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186
Oct 11 10:49:27.121383 master-1 kubenswrapper[4771]: W1011 10:49:27.121283 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f47a805_6ec5_4d90_a1be_dfaec7c5c818.slice/crio-d9862099bb13f3420abdd504122ac2ec077e7f887c15a2f7115b60bd375a4c7b WatchSource:0}: Error finding container d9862099bb13f3420abdd504122ac2ec077e7f887c15a2f7115b60bd375a4c7b: Status 404 returned error can't find the container with id d9862099bb13f3420abdd504122ac2ec077e7f887c15a2f7115b60bd375a4c7b
Oct 11 10:49:27.289432 master-2 kubenswrapper[4776]: I1011 10:49:27.289382 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186"}
Oct 11 10:49:27.578115 master-0 kubenswrapper[4790]: I1011 10:49:27.578032 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rxlsk" event={"ID":"3d08adb1-c6cb-41b9-a68b-68a1e41b883a","Type":"ContainerStarted","Data":"a565c019c9824f5d08f5c05d055d268c1b884d168d1b37b88e96d612f7a7f10e"}
Oct 11 10:49:27.578115 master-0 kubenswrapper[4790]: I1011 10:49:27.578104 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rxlsk" event={"ID":"3d08adb1-c6cb-41b9-a68b-68a1e41b883a","Type":"ContainerStarted","Data":"385f9d8067ff024e7eb1474454370d2badf2807493d5a215237ca9ab23771762"}
Oct 11 10:49:27.615789 master-0 kubenswrapper[4790]: I1011 10:49:27.615649 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-rxlsk" podStartSLOduration=1.615620072 podStartE2EDuration="1.615620072s" podCreationTimestamp="2025-10-11 10:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:49:27.611150082 +0000 UTC m=+644.165610454" watchObservedRunningTime="2025-10-11 10:49:27.615620072 +0000 UTC m=+644.170080364"
Oct 11 10:49:28.112013 master-1 kubenswrapper[4771]: I1011 10:49:28.111650 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-l9x5s" event={"ID":"5f47a805-6ec5-4d90-a1be-dfaec7c5c818","Type":"ContainerStarted","Data":"d9862099bb13f3420abdd504122ac2ec077e7f887c15a2f7115b60bd375a4c7b"}
Oct 11 10:49:29.592820 master-0 kubenswrapper[4790]: I1011 10:49:29.592753 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-rxlsk_3d08adb1-c6cb-41b9-a68b-68a1e41b883a/vg-manager/0.log"
Oct 11 10:49:29.592820 master-0 kubenswrapper[4790]: I1011 10:49:29.592813 4790 generic.go:334] "Generic (PLEG): container finished" podID="3d08adb1-c6cb-41b9-a68b-68a1e41b883a" containerID="a565c019c9824f5d08f5c05d055d268c1b884d168d1b37b88e96d612f7a7f10e" exitCode=1
Oct 11 10:49:29.593749 master-0 kubenswrapper[4790]: I1011 10:49:29.592852 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rxlsk" event={"ID":"3d08adb1-c6cb-41b9-a68b-68a1e41b883a","Type":"ContainerDied","Data":"a565c019c9824f5d08f5c05d055d268c1b884d168d1b37b88e96d612f7a7f10e"}
Oct 11 10:49:29.593749 master-0 kubenswrapper[4790]: I1011 10:49:29.593391 4790 scope.go:117] "RemoveContainer" containerID="a565c019c9824f5d08f5c05d055d268c1b884d168d1b37b88e96d612f7a7f10e"
Oct 11 10:49:29.916234 master-0 kubenswrapper[4790]: I1011 10:49:29.916150 4790 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Oct 11 10:49:30.602918 master-0 kubenswrapper[4790]: I1011 10:49:30.602823 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-rxlsk_3d08adb1-c6cb-41b9-a68b-68a1e41b883a/vg-manager/0.log"
Oct 11 10:49:30.602918 master-0 kubenswrapper[4790]: I1011 10:49:30.602917 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-rxlsk" event={"ID":"3d08adb1-c6cb-41b9-a68b-68a1e41b883a","Type":"ContainerStarted","Data":"0769b027ed49aca1b276af8399b5e4fafd3aee07ccc9f93cd2ac4a7303e482df"}
Oct 11 10:49:30.744922 master-0 kubenswrapper[4790]: I1011 10:49:30.743955 4790 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-10-11T10:49:29.916196642Z","Handler":null,"Name":""}
Oct 11 10:49:30.747306 master-0 kubenswrapper[4790]: I1011 10:49:30.747246 4790 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Oct 11 10:49:30.747456 master-0 kubenswrapper[4790]: I1011 10:49:30.747320 4790 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Oct 11 10:49:32.138936 master-1 kubenswrapper[4771]: I1011 10:49:32.138750 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-l9x5s" event={"ID":"5f47a805-6ec5-4d90-a1be-dfaec7c5c818","Type":"ContainerStarted","Data":"d8938647ff6d38271de4e8bdb38b410218398c6b3c96d71c30f2ceb2a91f05cf"}
Oct 11 10:49:32.330430 master-2 kubenswrapper[4776]: I1011 10:49:32.330370 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198"}
Oct 11 10:49:32.352565 master-1 kubenswrapper[4771]: I1011 10:49:32.352474 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-l9x5s" podStartSLOduration=1.844631008 podStartE2EDuration="6.352453918s" podCreationTimestamp="2025-10-11 10:49:26 +0000 UTC" firstStartedPulling="2025-10-11 10:49:27.124309256 +0000 UTC m=+1399.098535727" lastFinishedPulling="2025-10-11 10:49:31.632132186 +0000 UTC m=+1403.606358637" observedRunningTime="2025-10-11 10:49:32.350842842 +0000 UTC m=+1404.325069303" watchObservedRunningTime="2025-10-11 10:49:32.352453918 +0000 UTC m=+1404.326680379"
Oct 11 10:49:32.386209 master-2 kubenswrapper[4776]: I1011 10:49:32.386123 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-kjcgl" podStartSLOduration=1.6736251 podStartE2EDuration="6.386105385s" podCreationTimestamp="2025-10-11 10:49:26 +0000 UTC" firstStartedPulling="2025-10-11 10:49:27.125698133 +0000 UTC m=+1401.910124852" lastFinishedPulling="2025-10-11 10:49:31.838178418 +0000 UTC m=+1406.622605137" observedRunningTime="2025-10-11 10:49:32.382347944 +0000 UTC m=+1407.166774653" watchObservedRunningTime="2025-10-11 10:49:32.386105385 +0000 UTC m=+1407.170532094"
Oct 11 10:49:32.526778 master-1 kubenswrapper[4771]: I1011 10:49:32.526617 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 11 10:49:33.107298 master-0 kubenswrapper[4790]: I1011 10:49:33.107217 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f9d445f57-w4nwq" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console" containerID="cri-o://3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0" gracePeriod=15
Oct 11 10:49:33.525789 master-0 kubenswrapper[4790]: I1011 10:49:33.525738 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-w4nwq_e299247b-558b-4b6c-9d7c-335475344fdc/console/0.log"
Oct 11 10:49:33.526145 master-0 kubenswrapper[4790]: I1011 10:49:33.525842 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:49:33.625132 master-0 kubenswrapper[4790]: I1011 10:49:33.625083 4790 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-w4nwq_e299247b-558b-4b6c-9d7c-335475344fdc/console/0.log"
Oct 11 10:49:33.625765 master-0 kubenswrapper[4790]: I1011 10:49:33.625165 4790 generic.go:334] "Generic (PLEG): container finished" podID="e299247b-558b-4b6c-9d7c-335475344fdc" containerID="3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0" exitCode=2
Oct 11 10:49:33.625765 master-0 kubenswrapper[4790]: I1011 10:49:33.625213 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-w4nwq" event={"ID":"e299247b-558b-4b6c-9d7c-335475344fdc","Type":"ContainerDied","Data":"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"}
Oct 11 10:49:33.625765 master-0 kubenswrapper[4790]: I1011 10:49:33.625259 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-w4nwq" event={"ID":"e299247b-558b-4b6c-9d7c-335475344fdc","Type":"ContainerDied","Data":"c55a75264f621b83bca51a6b3abc1339de4396e41f1c001671f54d97f3acff21"}
Oct 11 10:49:33.625765 master-0 kubenswrapper[4790]: I1011 10:49:33.625290 4790 scope.go:117] "RemoveContainer" containerID="3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"
Oct 11 10:49:33.625765 master-0 kubenswrapper[4790]: I1011 10:49:33.625496 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f9d445f57-w4nwq"
Oct 11 10:49:33.646512 master-0 kubenswrapper[4790]: I1011 10:49:33.646454 4790 scope.go:117] "RemoveContainer" containerID="3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"
Oct 11 10:49:33.647202 master-0 kubenswrapper[4790]: E1011 10:49:33.647143 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0\": container with ID starting with 3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0 not found: ID does not exist" containerID="3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"
Oct 11 10:49:33.647302 master-0 kubenswrapper[4790]: I1011 10:49:33.647211 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0"} err="failed to get container status \"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0\": rpc error: code = NotFound desc = could not find container \"3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0\": container with ID starting with 3088103a931b6dbaca679e1253ed8ab435375376728343b7793fadb15e48f6e0 not found: ID does not exist"
Oct 11 10:49:33.718437 master-0 kubenswrapper[4790]: I1011 10:49:33.718329 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.718921 master-0 kubenswrapper[4790]: I1011 10:49:33.718594 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.718921 master-0 kubenswrapper[4790]: I1011 10:49:33.718665 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.718921 master-0 kubenswrapper[4790]: I1011 10:49:33.718747 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.718921 master-0 kubenswrapper[4790]: I1011 10:49:33.718853 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.718921 master-0 kubenswrapper[4790]: I1011 10:49:33.718922 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brv7r\" (UniqueName: \"kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.719478 master-0 kubenswrapper[4790]: I1011 10:49:33.718989 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config\") pod \"e299247b-558b-4b6c-9d7c-335475344fdc\" (UID: \"e299247b-558b-4b6c-9d7c-335475344fdc\") "
Oct 11 10:49:33.719750 master-0 kubenswrapper[4790]: I1011 10:49:33.719470 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca" (OuterVolumeSpecName: "service-ca") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:49:33.720272 master-0 kubenswrapper[4790]: I1011 10:49:33.720240 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:49:33.721180 master-0 kubenswrapper[4790]: I1011 10:49:33.721132 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config" (OuterVolumeSpecName: "console-config") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:49:33.721344 master-0 kubenswrapper[4790]: I1011 10:49:33.721197 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:49:33.726124 master-0 kubenswrapper[4790]: I1011 10:49:33.726085 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:49:33.727030 master-0 kubenswrapper[4790]: I1011 10:49:33.726938 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:49:33.727172 master-0 kubenswrapper[4790]: I1011 10:49:33.727107 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r" (OuterVolumeSpecName: "kube-api-access-brv7r") pod "e299247b-558b-4b6c-9d7c-335475344fdc" (UID: "e299247b-558b-4b6c-9d7c-335475344fdc"). InnerVolumeSpecName "kube-api-access-brv7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:49:33.822099 master-0 kubenswrapper[4790]: I1011 10:49:33.822007 4790 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-service-ca\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822099 master-0 kubenswrapper[4790]: I1011 10:49:33.822098 4790 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822393 master-0 kubenswrapper[4790]: I1011 10:49:33.822126 4790 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822393 master-0 kubenswrapper[4790]: I1011 10:49:33.822148 4790 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e299247b-558b-4b6c-9d7c-335475344fdc-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822393 master-0 kubenswrapper[4790]: I1011 10:49:33.822168 4790 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822393 master-0 kubenswrapper[4790]: I1011 10:49:33.822189 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brv7r\" (UniqueName: \"kubernetes.io/projected/e299247b-558b-4b6c-9d7c-335475344fdc-kube-api-access-brv7r\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.822393 master-0 kubenswrapper[4790]: I1011 10:49:33.822209 4790 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e299247b-558b-4b6c-9d7c-335475344fdc-console-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:49:33.970958 master-0 kubenswrapper[4790]: I1011 10:49:33.970872 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"]
Oct 11 10:49:33.978683 master-0 kubenswrapper[4790]: I1011 10:49:33.978567 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f9d445f57-w4nwq"]
Oct 11 10:49:34.272602 master-2 kubenswrapper[4776]: I1011 10:49:34.272537 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69f8677c95-z9d9d"
Oct 11 10:49:34.272602 master-2 kubenswrapper[4776]: I1011 10:49:34.272595 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69f8677c95-z9d9d"
Oct 11 10:49:34.278148 master-2 kubenswrapper[4776]: I1011 10:49:34.278075 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69f8677c95-z9d9d"
Oct 11 10:49:34.309950 master-0 kubenswrapper[4790]: I1011 10:49:34.308520 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" path="/var/lib/kubelet/pods/e299247b-558b-4b6c-9d7c-335475344fdc/volumes"
Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.368953 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-kjcgl_5648f79c-6a71-4f6f-8bde-b85a18b200bb/vg-manager/0.log"
Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.369013 4776 generic.go:334] "Generic (PLEG): container finished" podID="5648f79c-6a71-4f6f-8bde-b85a18b200bb" containerID="fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198" exitCode=1
Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.369391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerDied","Data":"fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198"}
Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.370181 4776 scope.go:117] "RemoveContainer" containerID="fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198"
Oct 11 10:49:34.373951 master-2 kubenswrapper[4776]: I1011 10:49:34.373835 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69f8677c95-z9d9d"
Oct 11 10:49:34.659388 master-2 kubenswrapper[4776]: I1011 10:49:34.659341 4776 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Oct 11 10:49:34.920720 master-2 kubenswrapper[4776]: I1011 10:49:34.920577 4776 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-10-11T10:49:34.659368998Z","Handler":null,"Name":""}
Oct 11 10:49:34.923286 master-2 kubenswrapper[4776]: I1011 10:49:34.923242 4776 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Oct 11 10:49:34.923358 master-2 kubenswrapper[4776]: I1011 10:49:34.923306 4776 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Oct 11 10:49:35.163757 master-1 kubenswrapper[4771]: I1011 10:49:35.163660 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-l9x5s_5f47a805-6ec5-4d90-a1be-dfaec7c5c818/vg-manager/0.log"
Oct 11 10:49:35.164807 master-1 kubenswrapper[4771]: I1011 10:49:35.163758 4771 generic.go:334] "Generic (PLEG): container finished" podID="5f47a805-6ec5-4d90-a1be-dfaec7c5c818" containerID="d8938647ff6d38271de4e8bdb38b410218398c6b3c96d71c30f2ceb2a91f05cf" exitCode=1
Oct 11 10:49:35.164807 master-1 kubenswrapper[4771]: I1011 10:49:35.163814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-l9x5s" event={"ID":"5f47a805-6ec5-4d90-a1be-dfaec7c5c818","Type":"ContainerDied","Data":"d8938647ff6d38271de4e8bdb38b410218398c6b3c96d71c30f2ceb2a91f05cf"}
Oct 11 10:49:35.164807 master-1 kubenswrapper[4771]: I1011 10:49:35.164527 4771 scope.go:117] "RemoveContainer" containerID="d8938647ff6d38271de4e8bdb38b410218398c6b3c96d71c30f2ceb2a91f05cf"
Oct 11 10:49:35.380727 master-2 kubenswrapper[4776]: I1011 10:49:35.380566 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-kjcgl_5648f79c-6a71-4f6f-8bde-b85a18b200bb/vg-manager/0.log"
Oct 11 10:49:35.381701 master-2 kubenswrapper[4776]: I1011 10:49:35.381635 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"32d9e9657fcf1cce6ae9505666e681688cdea8f91ec69725af5a1043b546b958"}
Oct 11 10:49:35.577242 master-1 kubenswrapper[4771]: I1011 10:49:35.577153 4771 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Oct 11 10:49:36.175766 master-1 kubenswrapper[4771]: I1011 10:49:36.175557 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-l9x5s_5f47a805-6ec5-4d90-a1be-dfaec7c5c818/vg-manager/0.log"
Oct 11 10:49:36.175766 master-1 kubenswrapper[4771]: I1011 10:49:36.175638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-l9x5s" event={"ID":"5f47a805-6ec5-4d90-a1be-dfaec7c5c818","Type":"ContainerStarted","Data":"5affce9a11708ad0d32c9ea6232aa6aff06c771b15c6fdae9dd419b025450ff7"}
Oct 11 10:49:36.392646 master-1 kubenswrapper[4771]: I1011 10:49:36.392314 4771 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-10-11T10:49:35.577190949Z","Handler":null,"Name":""}
Oct 11 10:49:36.396210 master-1 kubenswrapper[4771]: I1011 10:49:36.396139 4771 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Oct 11 10:49:36.396408 master-1 kubenswrapper[4771]: I1011 10:49:36.396254 4771 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Oct 11 10:49:36.609867 master-0 kubenswrapper[4790]: I1011 10:49:36.609776 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-rxlsk"
Oct 11 10:49:36.612817 master-0 kubenswrapper[4790]: I1011 10:49:36.612768 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-rxlsk"
Oct 11 10:49:36.616210 master-2 kubenswrapper[4776]: I1011 10:49:36.616059 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:36.618330 master-2 kubenswrapper[4776]: I1011 10:49:36.618078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:36.638910 master-1 kubenswrapper[4771]: I1011 10:49:36.638799 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-l9x5s"
Oct 11 10:49:36.647912 master-0 kubenswrapper[4790]: I1011 10:49:36.647823 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-rxlsk"
Oct 11 10:49:36.649262 master-0 kubenswrapper[4790]: I1011 10:49:36.649218 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-rxlsk"
Oct 11 10:49:37.393286 master-2 kubenswrapper[4776]: I1011 10:49:37.393224 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:37.394843 master-2 kubenswrapper[4776]: I1011 10:49:37.394792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-kjcgl"
Oct 11 10:49:42.202338 master-0 kubenswrapper[4790]: I1011 10:49:42.202259 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5qz5r"]
Oct 11 10:49:42.203079 master-0 kubenswrapper[4790]: E1011 10:49:42.202518 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console"
Oct 11 10:49:42.203079 master-0 kubenswrapper[4790]: I1011 10:49:42.202536 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console"
Oct 11 10:49:42.203079 master-0 kubenswrapper[4790]: I1011 10:49:42.202684 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="e299247b-558b-4b6c-9d7c-335475344fdc" containerName="console"
Oct 11 10:49:42.203245 master-0 kubenswrapper[4790]: I1011 10:49:42.203211 4790 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:42.205656 master-0 kubenswrapper[4790]: I1011 10:49:42.205621 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:49:42.205912 master-0 kubenswrapper[4790]: I1011 10:49:42.205881 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:49:42.215814 master-0 kubenswrapper[4790]: I1011 10:49:42.215772 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5qz5r"] Oct 11 10:49:42.346045 master-0 kubenswrapper[4790]: I1011 10:49:42.345961 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjsv8\" (UniqueName: \"kubernetes.io/projected/1e0eae2d-8fb0-4da2-b668-7e68e812682e-kube-api-access-xjsv8\") pod \"openstack-operator-index-5qz5r\" (UID: \"1e0eae2d-8fb0-4da2-b668-7e68e812682e\") " pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:42.447697 master-0 kubenswrapper[4790]: I1011 10:49:42.447599 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjsv8\" (UniqueName: \"kubernetes.io/projected/1e0eae2d-8fb0-4da2-b668-7e68e812682e-kube-api-access-xjsv8\") pod \"openstack-operator-index-5qz5r\" (UID: \"1e0eae2d-8fb0-4da2-b668-7e68e812682e\") " pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:42.471140 master-0 kubenswrapper[4790]: I1011 10:49:42.470983 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjsv8\" (UniqueName: \"kubernetes.io/projected/1e0eae2d-8fb0-4da2-b668-7e68e812682e-kube-api-access-xjsv8\") pod \"openstack-operator-index-5qz5r\" (UID: \"1e0eae2d-8fb0-4da2-b668-7e68e812682e\") " pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:42.517555 master-0 
kubenswrapper[4790]: I1011 10:49:42.517466 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:42.975048 master-0 kubenswrapper[4790]: I1011 10:49:42.974982 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5qz5r"] Oct 11 10:49:42.982002 master-0 kubenswrapper[4790]: W1011 10:49:42.981912 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e0eae2d_8fb0_4da2_b668_7e68e812682e.slice/crio-6c9151ee8df0131ac2502a5ebcb11017f3173ee51beb89b0ecfd331fd6eb9af5 WatchSource:0}: Error finding container 6c9151ee8df0131ac2502a5ebcb11017f3173ee51beb89b0ecfd331fd6eb9af5: Status 404 returned error can't find the container with id 6c9151ee8df0131ac2502a5ebcb11017f3173ee51beb89b0ecfd331fd6eb9af5 Oct 11 10:49:43.705325 master-0 kubenswrapper[4790]: I1011 10:49:43.705251 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5qz5r" event={"ID":"1e0eae2d-8fb0-4da2-b668-7e68e812682e","Type":"ContainerStarted","Data":"6c9151ee8df0131ac2502a5ebcb11017f3173ee51beb89b0ecfd331fd6eb9af5"} Oct 11 10:49:46.642662 master-1 kubenswrapper[4771]: I1011 10:49:46.642579 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:46.643609 master-1 kubenswrapper[4771]: I1011 10:49:46.643071 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:46.644108 master-1 kubenswrapper[4771]: I1011 10:49:46.644061 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-l9x5s" Oct 11 10:49:51.765291 master-0 kubenswrapper[4790]: I1011 10:49:51.765210 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-5qz5r" event={"ID":"1e0eae2d-8fb0-4da2-b668-7e68e812682e","Type":"ContainerStarted","Data":"0975185ed6e30305f667e10eec57bee48416fae36cef7e6d25229e9488efa83b"} Oct 11 10:49:51.795378 master-0 kubenswrapper[4790]: I1011 10:49:51.795201 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5qz5r" podStartSLOduration=2.099880572 podStartE2EDuration="9.795156702s" podCreationTimestamp="2025-10-11 10:49:42 +0000 UTC" firstStartedPulling="2025-10-11 10:49:42.983229129 +0000 UTC m=+659.537689421" lastFinishedPulling="2025-10-11 10:49:50.678505259 +0000 UTC m=+667.232965551" observedRunningTime="2025-10-11 10:49:51.793506017 +0000 UTC m=+668.347966309" watchObservedRunningTime="2025-10-11 10:49:51.795156702 +0000 UTC m=+668.349617034" Oct 11 10:49:52.518050 master-0 kubenswrapper[4790]: I1011 10:49:52.517840 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:52.518461 master-0 kubenswrapper[4790]: I1011 10:49:52.518141 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:49:52.549411 master-0 kubenswrapper[4790]: I1011 10:49:52.549332 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:50:02.550419 master-0 kubenswrapper[4790]: I1011 10:50:02.550340 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5qz5r" Oct 11 10:50:03.007995 master-0 kubenswrapper[4790]: I1011 10:50:03.007789 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-0"] Oct 11 10:50:03.008935 master-0 kubenswrapper[4790]: I1011 10:50:03.008901 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.012905 master-0 kubenswrapper[4790]: I1011 10:50:03.012819 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:50:03.027661 master-0 kubenswrapper[4790]: I1011 10:50:03.026775 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-0"] Oct 11 10:50:03.094175 master-0 kubenswrapper[4790]: I1011 10:50:03.094099 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.094790 master-0 kubenswrapper[4790]: I1011 10:50:03.094746 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.196224 master-0 kubenswrapper[4790]: I1011 10:50:03.196092 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.196583 master-0 kubenswrapper[4790]: I1011 10:50:03.196254 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.196583 master-0 kubenswrapper[4790]: I1011 10:50:03.196384 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.219698 master-0 kubenswrapper[4790]: I1011 10:50:03.219609 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:03.336579 master-0 kubenswrapper[4790]: I1011 10:50:03.336449 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:04.003652 master-0 kubenswrapper[4790]: W1011 10:50:04.003559 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14a0545c_12d2_49a0_be5e_17f472bac134.slice/crio-b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153 WatchSource:0}: Error finding container b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153: Status 404 returned error can't find the container with id b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153 Oct 11 10:50:04.039289 master-0 kubenswrapper[4790]: I1011 10:50:04.039050 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-0"] Oct 11 10:50:04.862692 master-0 kubenswrapper[4790]: I1011 10:50:04.862512 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-0" event={"ID":"14a0545c-12d2-49a0-be5e-17f472bac134","Type":"ContainerStarted","Data":"1b18bd815a4c78ab3d993af09213efb805e050af67596981f577c078fd8793c8"} Oct 11 10:50:04.862692 master-0 kubenswrapper[4790]: I1011 10:50:04.862582 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-0" event={"ID":"14a0545c-12d2-49a0-be5e-17f472bac134","Type":"ContainerStarted","Data":"b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153"} Oct 11 10:50:04.893365 master-0 kubenswrapper[4790]: I1011 10:50:04.893234 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-6-master-0" podStartSLOduration=2.893199858 podStartE2EDuration="2.893199858s" podCreationTimestamp="2025-10-11 10:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:50:04.89293016 +0000 UTC m=+681.447390462" 
watchObservedRunningTime="2025-10-11 10:50:04.893199858 +0000 UTC m=+681.447660190" Oct 11 10:50:05.806787 master-1 kubenswrapper[4771]: I1011 10:50:05.806636 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 11 10:50:05.809592 master-1 kubenswrapper[4771]: I1011 10:50:05.809186 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.817849 master-1 kubenswrapper[4771]: I1011 10:50:05.817771 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:50:05.828030 master-1 kubenswrapper[4771]: I1011 10:50:05.827968 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 11 10:50:05.862847 master-1 kubenswrapper[4771]: I1011 10:50:05.862732 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.863124 master-1 kubenswrapper[4771]: I1011 10:50:05.862884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.872569 master-0 kubenswrapper[4790]: I1011 10:50:05.872454 4790 generic.go:334] "Generic (PLEG): container finished" podID="14a0545c-12d2-49a0-be5e-17f472bac134" containerID="1b18bd815a4c78ab3d993af09213efb805e050af67596981f577c078fd8793c8" exitCode=0 Oct 
11 10:50:05.872569 master-0 kubenswrapper[4790]: I1011 10:50:05.872526 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-0" event={"ID":"14a0545c-12d2-49a0-be5e-17f472bac134","Type":"ContainerDied","Data":"1b18bd815a4c78ab3d993af09213efb805e050af67596981f577c078fd8793c8"} Oct 11 10:50:05.965304 master-1 kubenswrapper[4771]: I1011 10:50:05.964721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.965713 master-1 kubenswrapper[4771]: I1011 10:50:05.965415 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.965713 master-1 kubenswrapper[4771]: I1011 10:50:05.965563 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:05.985838 master-1 kubenswrapper[4771]: I1011 10:50:05.985707 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:06.133910 master-1 
kubenswrapper[4771]: I1011 10:50:06.133759 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:06.595219 master-1 kubenswrapper[4771]: I1011 10:50:06.595018 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 11 10:50:07.301744 master-0 kubenswrapper[4790]: I1011 10:50:07.301665 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:07.435231 master-1 kubenswrapper[4771]: I1011 10:50:07.435120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"c72a1cdf-d776-4854-b379-bde17097bab0","Type":"ContainerStarted","Data":"f5834e38fc462bced57a8335af6c00a64ee6e2c61e99884fc618f9c7f277f266"} Oct 11 10:50:07.435231 master-1 kubenswrapper[4771]: I1011 10:50:07.435209 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"c72a1cdf-d776-4854-b379-bde17097bab0","Type":"ContainerStarted","Data":"bc4363b6b2838fc8c75a54bdd92f428b47f1f3dc2a5fc935fbfd0bcc098edabf"} Oct 11 10:50:07.458799 master-0 kubenswrapper[4790]: I1011 10:50:07.458682 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir\") pod \"14a0545c-12d2-49a0-be5e-17f472bac134\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " Oct 11 10:50:07.458799 master-0 kubenswrapper[4790]: I1011 10:50:07.458825 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access\") pod \"14a0545c-12d2-49a0-be5e-17f472bac134\" (UID: \"14a0545c-12d2-49a0-be5e-17f472bac134\") " Oct 11 
10:50:07.459607 master-0 kubenswrapper[4790]: I1011 10:50:07.459021 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14a0545c-12d2-49a0-be5e-17f472bac134" (UID: "14a0545c-12d2-49a0-be5e-17f472bac134"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:50:07.459897 master-0 kubenswrapper[4790]: I1011 10:50:07.459840 4790 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14a0545c-12d2-49a0-be5e-17f472bac134-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Oct 11 10:50:07.460166 master-1 kubenswrapper[4771]: I1011 10:50:07.460041 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-6-master-1" podStartSLOduration=2.4600203130000002 podStartE2EDuration="2.460020313s" podCreationTimestamp="2025-10-11 10:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:50:07.456948714 +0000 UTC m=+1439.431175165" watchObservedRunningTime="2025-10-11 10:50:07.460020313 +0000 UTC m=+1439.434246764" Oct 11 10:50:07.464434 master-0 kubenswrapper[4790]: I1011 10:50:07.464356 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14a0545c-12d2-49a0-be5e-17f472bac134" (UID: "14a0545c-12d2-49a0-be5e-17f472bac134"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:07.560975 master-0 kubenswrapper[4790]: I1011 10:50:07.560906 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14a0545c-12d2-49a0-be5e-17f472bac134-kube-api-access\") on node \"master-0\" DevicePath \"\"" Oct 11 10:50:07.887887 master-0 kubenswrapper[4790]: I1011 10:50:07.887647 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-0" event={"ID":"14a0545c-12d2-49a0-be5e-17f472bac134","Type":"ContainerDied","Data":"b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153"} Oct 11 10:50:07.887887 master-0 kubenswrapper[4790]: I1011 10:50:07.887752 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8dcb1e6a79f758d4b56351011463269e2f84dae8484b90ceaaf4bc932ecb153" Oct 11 10:50:07.887887 master-0 kubenswrapper[4790]: I1011 10:50:07.887767 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-0" Oct 11 10:50:08.444127 master-1 kubenswrapper[4771]: I1011 10:50:08.444023 4771 generic.go:334] "Generic (PLEG): container finished" podID="c72a1cdf-d776-4854-b379-bde17097bab0" containerID="f5834e38fc462bced57a8335af6c00a64ee6e2c61e99884fc618f9c7f277f266" exitCode=0 Oct 11 10:50:08.451703 master-1 kubenswrapper[4771]: I1011 10:50:08.451597 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"c72a1cdf-d776-4854-b379-bde17097bab0","Type":"ContainerDied","Data":"f5834e38fc462bced57a8335af6c00a64ee6e2c61e99884fc618f9c7f277f266"} Oct 11 10:50:08.652842 master-2 kubenswrapper[4776]: I1011 10:50:08.652784 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:08.654020 master-2 kubenswrapper[4776]: I1011 10:50:08.653817 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.660807 master-2 kubenswrapper[4776]: I1011 10:50:08.660759 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:50:08.818276 master-2 kubenswrapper[4776]: I1011 10:50:08.818222 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.818571 master-2 kubenswrapper[4776]: I1011 10:50:08.818309 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") 
pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920072 master-2 kubenswrapper[4776]: I1011 10:50:08.920015 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920072 master-2 kubenswrapper[4776]: I1011 10:50:08.920074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920352 master-2 kubenswrapper[4776]: I1011 10:50:08.920151 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.933833 master-2 kubenswrapper[4776]: I1011 10:50:08.933755 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:09.449334 master-2 kubenswrapper[4776]: I1011 10:50:09.449245 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:09.576204 master-2 
kubenswrapper[4776]: I1011 10:50:09.576132 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:09.790427 master-2 kubenswrapper[4776]: I1011 10:50:09.790372 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:09.791767 master-2 kubenswrapper[4776]: I1011 10:50:09.791738 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.794162 master-2 kubenswrapper[4776]: I1011 10:50:09.794123 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:50:09.795020 master-2 kubenswrapper[4776]: I1011 10:50:09.794971 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:50:09.814788 master-2 kubenswrapper[4776]: I1011 10:50:09.811467 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:09.919092 master-1 kubenswrapper[4771]: I1011 10:50:09.918978 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:09.935715 master-2 kubenswrapper[4776]: I1011 10:50:09.935589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.936026 master-2 kubenswrapper[4776]: I1011 10:50:09.935775 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.936026 master-2 kubenswrapper[4776]: I1011 10:50:09.935834 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.000710 master-2 kubenswrapper[4776]: I1011 10:50:10.000532 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:10.031269 master-1 kubenswrapper[4771]: I1011 10:50:10.031136 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir\") pod \"c72a1cdf-d776-4854-b379-bde17097bab0\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " Oct 11 10:50:10.031789 master-1 kubenswrapper[4771]: I1011 10:50:10.031333 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access\") pod \"c72a1cdf-d776-4854-b379-bde17097bab0\" (UID: \"c72a1cdf-d776-4854-b379-bde17097bab0\") " Oct 11 10:50:10.031789 master-1 kubenswrapper[4771]: I1011 10:50:10.031378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c72a1cdf-d776-4854-b379-bde17097bab0" (UID: "c72a1cdf-d776-4854-b379-bde17097bab0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:50:10.032074 master-1 kubenswrapper[4771]: I1011 10:50:10.031949 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c72a1cdf-d776-4854-b379-bde17097bab0-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 11 10:50:10.036773 master-2 kubenswrapper[4776]: I1011 10:50:10.036713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.036773 master-2 kubenswrapper[4776]: I1011 10:50:10.036774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v48t\" (UniqueName: 
\"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.036989 master-2 kubenswrapper[4776]: I1011 10:50:10.036840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.037390 master-2 kubenswrapper[4776]: I1011 10:50:10.037349 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.037390 master-2 kubenswrapper[4776]: I1011 10:50:10.037345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.043721 master-1 kubenswrapper[4771]: I1011 10:50:10.043649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c72a1cdf-d776-4854-b379-bde17097bab0" (UID: 
"c72a1cdf-d776-4854-b379-bde17097bab0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:10.060768 master-2 kubenswrapper[4776]: I1011 10:50:10.060734 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.109650 master-2 kubenswrapper[4776]: I1011 10:50:10.109576 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.133947 master-1 kubenswrapper[4771]: I1011 10:50:10.133747 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c72a1cdf-d776-4854-b379-bde17097bab0-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 11 10:50:10.240701 master-1 kubenswrapper[4771]: I1011 10:50:10.240615 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 11 10:50:10.268656 master-1 kubenswrapper[4771]: I1011 10:50:10.268591 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 11 10:50:10.454050 master-1 kubenswrapper[4771]: I1011 10:50:10.453842 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e7e1ec-47a8-4283-9119-0d9d1343963e" path="/var/lib/kubelet/pods/f5e7e1ec-47a8-4283-9119-0d9d1343963e/volumes" Oct 11 10:50:10.466931 master-1 kubenswrapper[4771]: I1011 10:50:10.463488 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" 
event={"ID":"c72a1cdf-d776-4854-b379-bde17097bab0","Type":"ContainerDied","Data":"bc4363b6b2838fc8c75a54bdd92f428b47f1f3dc2a5fc935fbfd0bcc098edabf"} Oct 11 10:50:10.466931 master-1 kubenswrapper[4771]: I1011 10:50:10.463561 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4363b6b2838fc8c75a54bdd92f428b47f1f3dc2a5fc935fbfd0bcc098edabf" Oct 11 10:50:10.466931 master-1 kubenswrapper[4771]: I1011 10:50:10.463650 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 11 10:50:10.528257 master-2 kubenswrapper[4776]: I1011 10:50:10.528204 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:10.534444 master-2 kubenswrapper[4776]: W1011 10:50:10.534386 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3b7ba7_98af_44ea_b6da_9c37d9e1a6c7.slice/crio-b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5 WatchSource:0}: Error finding container b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5: Status 404 returned error can't find the container with id b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5 Oct 11 10:50:10.632385 master-2 kubenswrapper[4776]: I1011 10:50:10.632256 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerStarted","Data":"b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5"} Oct 11 10:50:10.634075 master-2 kubenswrapper[4776]: I1011 10:50:10.634029 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" 
event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerStarted","Data":"97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f"} Oct 11 10:50:10.634140 master-2 kubenswrapper[4776]: I1011 10:50:10.634096 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerStarted","Data":"6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af"} Oct 11 10:50:10.669245 master-2 kubenswrapper[4776]: I1011 10:50:10.669141 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-6-master-2" podStartSLOduration=2.669112873 podStartE2EDuration="2.669112873s" podCreationTimestamp="2025-10-11 10:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:50:10.662613797 +0000 UTC m=+1445.447040506" watchObservedRunningTime="2025-10-11 10:50:10.669112873 +0000 UTC m=+1445.453539622" Oct 11 10:50:11.645986 master-2 kubenswrapper[4776]: I1011 10:50:11.645852 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="4f921e436ab0f91cb17ee83e0c6c33d9542df9412303b9c24f03eca2d8428e93" exitCode=0 Oct 11 10:50:11.647053 master-2 kubenswrapper[4776]: I1011 10:50:11.646063 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"4f921e436ab0f91cb17ee83e0c6c33d9542df9412303b9c24f03eca2d8428e93"} Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.647953 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.648134 4776 generic.go:334] 
"Generic (PLEG): container finished" podID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerID="97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f" exitCode=0 Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.648193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerDied","Data":"97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f"} Oct 11 10:50:13.070728 master-2 kubenswrapper[4776]: I1011 10:50:13.070606 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:13.101859 master-2 kubenswrapper[4776]: I1011 10:50:13.101798 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"b0ae11ca-a8d5-4a55-9898-269dfe907446\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " Oct 11 10:50:13.101859 master-2 kubenswrapper[4776]: I1011 10:50:13.101860 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"b0ae11ca-a8d5-4a55-9898-269dfe907446\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " Oct 11 10:50:13.102545 master-2 kubenswrapper[4776]: I1011 10:50:13.102197 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0ae11ca-a8d5-4a55-9898-269dfe907446" (UID: "b0ae11ca-a8d5-4a55-9898-269dfe907446"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:50:13.113967 master-2 kubenswrapper[4776]: I1011 10:50:13.113901 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0ae11ca-a8d5-4a55-9898-269dfe907446" (UID: "b0ae11ca-a8d5-4a55-9898-269dfe907446"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:13.203603 master-2 kubenswrapper[4776]: I1011 10:50:13.203350 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:13.203603 master-2 kubenswrapper[4776]: I1011 10:50:13.203398 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:13.663876 master-2 kubenswrapper[4776]: I1011 10:50:13.663800 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="ed461758d15a346692f0fcd52aed862eab3e744e921a1f7a4e1926c605052e91" exitCode=0 Oct 11 10:50:13.664218 master-2 kubenswrapper[4776]: I1011 10:50:13.663898 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"ed461758d15a346692f0fcd52aed862eab3e744e921a1f7a4e1926c605052e91"} Oct 11 10:50:13.665775 master-2 kubenswrapper[4776]: I1011 10:50:13.665740 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" 
event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerDied","Data":"6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af"} Oct 11 10:50:13.665775 master-2 kubenswrapper[4776]: I1011 10:50:13.665780 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af" Oct 11 10:50:13.665887 master-2 kubenswrapper[4776]: I1011 10:50:13.665802 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:13.828630 master-1 kubenswrapper[4771]: I1011 10:50:13.828503 4771 scope.go:117] "RemoveContainer" containerID="d38cc7e81ae0071969a185999498646cddc10ee8b65bed60da29b4c1f46a55dc" Oct 11 10:50:14.676981 master-2 kubenswrapper[4776]: I1011 10:50:14.676880 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="2798ee030ae90bc44aeeeb787817932e921c9947d301d1bbd08e8d4c5d1dc632" exitCode=0 Oct 11 10:50:14.676981 master-2 kubenswrapper[4776]: I1011 10:50:14.676955 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"2798ee030ae90bc44aeeeb787817932e921c9947d301d1bbd08e8d4c5d1dc632"} Oct 11 10:50:16.002844 master-2 kubenswrapper[4776]: I1011 10:50:16.002775 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:16.146256 master-2 kubenswrapper[4776]: I1011 10:50:16.146147 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.146519 master-2 kubenswrapper[4776]: I1011 10:50:16.146307 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.146519 master-2 kubenswrapper[4776]: I1011 10:50:16.146364 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.148042 master-2 kubenswrapper[4776]: I1011 10:50:16.147996 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle" (OuterVolumeSpecName: "bundle") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:50:16.150344 master-2 kubenswrapper[4776]: I1011 10:50:16.150288 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t" (OuterVolumeSpecName: "kube-api-access-5v48t") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). 
InnerVolumeSpecName "kube-api-access-5v48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:16.161637 master-2 kubenswrapper[4776]: I1011 10:50:16.161576 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util" (OuterVolumeSpecName: "util") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249122 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249187 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249208 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.698774 master-2 kubenswrapper[4776]: I1011 10:50:16.698456 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5"} Oct 11 10:50:16.698774 master-2 kubenswrapper[4776]: I1011 10:50:16.698515 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5" Oct 11 10:50:16.698774 master-2 
kubenswrapper[4776]: I1011 10:50:16.698572 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:22.316602 master-0 kubenswrapper[4790]: I1011 10:50:22.316523 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:22.317508 master-0 kubenswrapper[4790]: E1011 10:50:22.316867 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a0545c-12d2-49a0-be5e-17f472bac134" containerName="pruner" Oct 11 10:50:22.317508 master-0 kubenswrapper[4790]: I1011 10:50:22.316887 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a0545c-12d2-49a0-be5e-17f472bac134" containerName="pruner" Oct 11 10:50:22.317508 master-0 kubenswrapper[4790]: I1011 10:50:22.317033 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a0545c-12d2-49a0-be5e-17f472bac134" containerName="pruner" Oct 11 10:50:22.318276 master-0 kubenswrapper[4790]: I1011 10:50:22.317931 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:22.357124 master-0 kubenswrapper[4790]: I1011 10:50:22.357044 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:22.402318 master-0 kubenswrapper[4790]: I1011 10:50:22.399921 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgkc\" (UniqueName: \"kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc\") pod \"openstack-operator-controller-operator-688d597459-j48hd\" (UID: \"90e79615-f456-4c3a-9e00-9683d29da694\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:22.501105 master-0 kubenswrapper[4790]: I1011 10:50:22.501028 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgkc\" (UniqueName: \"kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc\") pod \"openstack-operator-controller-operator-688d597459-j48hd\" (UID: \"90e79615-f456-4c3a-9e00-9683d29da694\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:22.525955 master-0 kubenswrapper[4790]: I1011 10:50:22.525906 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgkc\" (UniqueName: \"kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc\") pod \"openstack-operator-controller-operator-688d597459-j48hd\" (UID: \"90e79615-f456-4c3a-9e00-9683d29da694\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:22.636597 master-0 kubenswrapper[4790]: I1011 10:50:22.636415 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:23.086441 master-0 kubenswrapper[4790]: I1011 10:50:23.086376 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:24.027285 master-0 kubenswrapper[4790]: I1011 10:50:24.027222 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerStarted","Data":"866fb0186a0f425bf7f92274c83bb58565b7bd8634ccccde3ba085ee3d405915"} Oct 11 10:50:27.051263 master-0 kubenswrapper[4790]: I1011 10:50:27.051174 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerStarted","Data":"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c"} Oct 11 10:50:29.068922 master-0 kubenswrapper[4790]: I1011 10:50:29.068799 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerStarted","Data":"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61"} Oct 11 10:50:29.069966 master-0 kubenswrapper[4790]: I1011 10:50:29.069115 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:29.120182 master-0 kubenswrapper[4790]: I1011 10:50:29.120070 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" podStartSLOduration=2.012448059 podStartE2EDuration="7.120040251s" podCreationTimestamp="2025-10-11 10:50:22 +0000 UTC" 
firstStartedPulling="2025-10-11 10:50:23.097174909 +0000 UTC m=+699.651635201" lastFinishedPulling="2025-10-11 10:50:28.204767101 +0000 UTC m=+704.759227393" observedRunningTime="2025-10-11 10:50:29.11888933 +0000 UTC m=+705.673349652" watchObservedRunningTime="2025-10-11 10:50:29.120040251 +0000 UTC m=+705.674500553" Oct 11 10:50:32.643043 master-0 kubenswrapper[4790]: I1011 10:50:32.642064 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:35.007196 master-0 kubenswrapper[4790]: I1011 10:50:35.007129 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7"] Oct 11 10:50:35.008150 master-0 kubenswrapper[4790]: I1011 10:50:35.008126 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:35.047303 master-0 kubenswrapper[4790]: I1011 10:50:35.047240 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7"] Oct 11 10:50:35.126152 master-0 kubenswrapper[4790]: I1011 10:50:35.125960 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8qh\" (UniqueName: \"kubernetes.io/projected/0b3dee2f-71f4-480b-a67a-ac73ab42d1f9-kube-api-access-vx8qh\") pod \"openstack-operator-controller-operator-566868fd7b-vpll7\" (UID: \"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9\") " pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:35.227126 master-0 kubenswrapper[4790]: I1011 10:50:35.227060 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8qh\" (UniqueName: \"kubernetes.io/projected/0b3dee2f-71f4-480b-a67a-ac73ab42d1f9-kube-api-access-vx8qh\") pod 
\"openstack-operator-controller-operator-566868fd7b-vpll7\" (UID: \"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9\") " pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:35.247664 master-0 kubenswrapper[4790]: I1011 10:50:35.247606 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8qh\" (UniqueName: \"kubernetes.io/projected/0b3dee2f-71f4-480b-a67a-ac73ab42d1f9-kube-api-access-vx8qh\") pod \"openstack-operator-controller-operator-566868fd7b-vpll7\" (UID: \"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9\") " pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:35.340527 master-0 kubenswrapper[4790]: I1011 10:50:35.340445 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:35.788768 master-0 kubenswrapper[4790]: I1011 10:50:35.788555 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7"] Oct 11 10:50:35.800231 master-0 kubenswrapper[4790]: W1011 10:50:35.800150 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3dee2f_71f4_480b_a67a_ac73ab42d1f9.slice/crio-070e76e069764f246ee6726f4516e958738d349cd2b2e24c51954ba693d16100 WatchSource:0}: Error finding container 070e76e069764f246ee6726f4516e958738d349cd2b2e24c51954ba693d16100: Status 404 returned error can't find the container with id 070e76e069764f246ee6726f4516e958738d349cd2b2e24c51954ba693d16100 Oct 11 10:50:36.122879 master-0 kubenswrapper[4790]: I1011 10:50:36.122821 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" 
event={"ID":"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9","Type":"ContainerStarted","Data":"7e764480a1439c202cd12594886e48ed0490c259faab7593e59ab2f0350d996d"} Oct 11 10:50:36.122879 master-0 kubenswrapper[4790]: I1011 10:50:36.122887 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" event={"ID":"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9","Type":"ContainerStarted","Data":"c6dd853fb6f82b8a00a3ba262160819580a535d21a365d783b51dee534d0d855"} Oct 11 10:50:36.122879 master-0 kubenswrapper[4790]: I1011 10:50:36.122903 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" event={"ID":"0b3dee2f-71f4-480b-a67a-ac73ab42d1f9","Type":"ContainerStarted","Data":"070e76e069764f246ee6726f4516e958738d349cd2b2e24c51954ba693d16100"} Oct 11 10:50:36.123652 master-0 kubenswrapper[4790]: I1011 10:50:36.123018 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:36.179563 master-0 kubenswrapper[4790]: I1011 10:50:36.179423 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" podStartSLOduration=2.179391697 podStartE2EDuration="2.179391697s" podCreationTimestamp="2025-10-11 10:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:50:36.171119483 +0000 UTC m=+712.725579855" watchObservedRunningTime="2025-10-11 10:50:36.179391697 +0000 UTC m=+712.733852029" Oct 11 10:50:45.344049 master-0 kubenswrapper[4790]: I1011 10:50:45.343971 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7" Oct 11 10:50:45.458221 master-0 
kubenswrapper[4790]: I1011 10:50:45.458144 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:45.458513 master-0 kubenswrapper[4790]: I1011 10:50:45.458437 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="operator" containerID="cri-o://65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" gracePeriod=10 Oct 11 10:50:45.458663 master-0 kubenswrapper[4790]: I1011 10:50:45.458534 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="kube-rbac-proxy" containerID="cri-o://d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" gracePeriod=10 Oct 11 10:50:45.900846 master-0 kubenswrapper[4790]: I1011 10:50:45.900771 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:45.977184 master-0 kubenswrapper[4790]: I1011 10:50:45.977015 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkgkc\" (UniqueName: \"kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc\") pod \"90e79615-f456-4c3a-9e00-9683d29da694\" (UID: \"90e79615-f456-4c3a-9e00-9683d29da694\") " Oct 11 10:50:45.980513 master-0 kubenswrapper[4790]: I1011 10:50:45.980429 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc" (OuterVolumeSpecName: "kube-api-access-lkgkc") pod "90e79615-f456-4c3a-9e00-9683d29da694" (UID: "90e79615-f456-4c3a-9e00-9683d29da694"). 
InnerVolumeSpecName "kube-api-access-lkgkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:46.078609 master-0 kubenswrapper[4790]: I1011 10:50:46.078509 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkgkc\" (UniqueName: \"kubernetes.io/projected/90e79615-f456-4c3a-9e00-9683d29da694-kube-api-access-lkgkc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:50:46.186477 master-0 kubenswrapper[4790]: I1011 10:50:46.186342 4790 generic.go:334] "Generic (PLEG): container finished" podID="90e79615-f456-4c3a-9e00-9683d29da694" containerID="d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" exitCode=0 Oct 11 10:50:46.186477 master-0 kubenswrapper[4790]: I1011 10:50:46.186428 4790 generic.go:334] "Generic (PLEG): container finished" podID="90e79615-f456-4c3a-9e00-9683d29da694" containerID="65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" exitCode=0 Oct 11 10:50:46.186477 master-0 kubenswrapper[4790]: I1011 10:50:46.186430 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerDied","Data":"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61"} Oct 11 10:50:46.186915 master-0 kubenswrapper[4790]: I1011 10:50:46.186504 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerDied","Data":"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c"} Oct 11 10:50:46.186915 master-0 kubenswrapper[4790]: I1011 10:50:46.186398 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" Oct 11 10:50:46.186915 master-0 kubenswrapper[4790]: I1011 10:50:46.186544 4790 scope.go:117] "RemoveContainer" containerID="d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" Oct 11 10:50:46.186915 master-0 kubenswrapper[4790]: I1011 10:50:46.186520 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-j48hd" event={"ID":"90e79615-f456-4c3a-9e00-9683d29da694","Type":"ContainerDied","Data":"866fb0186a0f425bf7f92274c83bb58565b7bd8634ccccde3ba085ee3d405915"} Oct 11 10:50:46.208951 master-0 kubenswrapper[4790]: I1011 10:50:46.208809 4790 scope.go:117] "RemoveContainer" containerID="65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" Oct 11 10:50:46.235578 master-0 kubenswrapper[4790]: I1011 10:50:46.235533 4790 scope.go:117] "RemoveContainer" containerID="d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" Oct 11 10:50:46.236236 master-0 kubenswrapper[4790]: E1011 10:50:46.236195 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61\": container with ID starting with d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61 not found: ID does not exist" containerID="d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" Oct 11 10:50:46.236313 master-0 kubenswrapper[4790]: I1011 10:50:46.236241 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61"} err="failed to get container status \"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61\": rpc error: code = NotFound desc = could not find container \"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61\": 
container with ID starting with d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61 not found: ID does not exist" Oct 11 10:50:46.236313 master-0 kubenswrapper[4790]: I1011 10:50:46.236268 4790 scope.go:117] "RemoveContainer" containerID="65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" Oct 11 10:50:46.236764 master-0 kubenswrapper[4790]: E1011 10:50:46.236727 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c\": container with ID starting with 65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c not found: ID does not exist" containerID="65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" Oct 11 10:50:46.236827 master-0 kubenswrapper[4790]: I1011 10:50:46.236754 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c"} err="failed to get container status \"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c\": rpc error: code = NotFound desc = could not find container \"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c\": container with ID starting with 65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c not found: ID does not exist" Oct 11 10:50:46.236827 master-0 kubenswrapper[4790]: I1011 10:50:46.236777 4790 scope.go:117] "RemoveContainer" containerID="d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61" Oct 11 10:50:46.237426 master-0 kubenswrapper[4790]: I1011 10:50:46.237307 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61"} err="failed to get container status \"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61\": rpc error: code = NotFound desc = could not find container 
\"d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61\": container with ID starting with d21bfc050573bee2ce236ab06a1a4750d84b960c474c7f0184f4e2764c771e61 not found: ID does not exist" Oct 11 10:50:46.237512 master-0 kubenswrapper[4790]: I1011 10:50:46.237470 4790 scope.go:117] "RemoveContainer" containerID="65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c" Oct 11 10:50:46.239249 master-0 kubenswrapper[4790]: I1011 10:50:46.239207 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c"} err="failed to get container status \"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c\": rpc error: code = NotFound desc = could not find container \"65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c\": container with ID starting with 65b1d808f699a2b6c607b7d8b40f13329503df25dad386f96458868225389a4c not found: ID does not exist" Oct 11 10:50:46.241014 master-0 kubenswrapper[4790]: I1011 10:50:46.240956 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:46.256063 master-0 kubenswrapper[4790]: I1011 10:50:46.255983 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-j48hd"] Oct 11 10:50:46.300681 master-0 kubenswrapper[4790]: I1011 10:50:46.300619 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e79615-f456-4c3a-9e00-9683d29da694" path="/var/lib/kubelet/pods/90e79615-f456-4c3a-9e00-9683d29da694/volumes" Oct 11 10:51:56.741521 master-0 kubenswrapper[4790]: I1011 10:51:56.741373 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm"] Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: E1011 10:51:56.741865 4790 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="kube-rbac-proxy" Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: I1011 10:51:56.741959 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="kube-rbac-proxy" Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: E1011 10:51:56.741981 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="operator" Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: I1011 10:51:56.741990 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="operator" Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: I1011 10:51:56.742218 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="operator" Oct 11 10:51:56.742340 master-0 kubenswrapper[4790]: I1011 10:51:56.742239 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e79615-f456-4c3a-9e00-9683d29da694" containerName="kube-rbac-proxy" Oct 11 10:51:56.746355 master-0 kubenswrapper[4790]: I1011 10:51:56.744295 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:51:56.767822 master-2 kubenswrapper[4776]: I1011 10:51:56.767768 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768016 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768028 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768037 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="util" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768043 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="util" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768061 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768066 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768076 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="pull" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768081 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="pull" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768182 4776 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768193 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768888 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.772012 master-2 kubenswrapper[4776]: I1011 10:51:56.771983 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:51:56.772062 master-2 kubenswrapper[4776]: I1011 10:51:56.772014 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:51:56.776274 master-0 kubenswrapper[4790]: I1011 10:51:56.776175 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm"] Oct 11 10:51:56.806697 master-2 kubenswrapper[4776]: I1011 10:51:56.804739 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:56.822702 master-2 kubenswrapper[4776]: I1011 10:51:56.822057 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:56.824822 master-2 kubenswrapper[4776]: I1011 10:51:56.823531 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.850082 master-2 kubenswrapper[4776]: I1011 10:51:56.850049 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:56.869994 master-2 kubenswrapper[4776]: I1011 10:51:56.867597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.870842 master-2 kubenswrapper[4776]: I1011 10:51:56.870779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.872827 master-1 kubenswrapper[4771]: I1011 10:51:56.872730 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76"] Oct 11 10:51:56.877269 master-1 kubenswrapper[4771]: E1011 10:51:56.873083 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72a1cdf-d776-4854-b379-bde17097bab0" containerName="pruner" Oct 11 10:51:56.877269 master-1 kubenswrapper[4771]: I1011 10:51:56.873100 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72a1cdf-d776-4854-b379-bde17097bab0" containerName="pruner" Oct 11 10:51:56.877269 master-1 kubenswrapper[4771]: I1011 10:51:56.873209 4771 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c72a1cdf-d776-4854-b379-bde17097bab0" containerName="pruner" Oct 11 10:51:56.877269 master-1 kubenswrapper[4771]: I1011 10:51:56.874081 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:51:56.881710 master-1 kubenswrapper[4771]: I1011 10:51:56.881633 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:51:56.882816 master-1 kubenswrapper[4771]: I1011 10:51:56.881694 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:51:56.885602 master-2 kubenswrapper[4776]: I1011 10:51:56.885522 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"] Oct 11 10:51:56.888525 master-2 kubenswrapper[4776]: I1011 10:51:56.888465 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:56.908424 master-1 kubenswrapper[4771]: I1011 10:51:56.908341 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76"] Oct 11 10:51:56.908832 master-2 kubenswrapper[4776]: I1011 10:51:56.908753 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"] Oct 11 10:51:56.909242 master-0 kubenswrapper[4790]: I1011 10:51:56.909158 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsm2k\" (UniqueName: \"kubernetes.io/projected/f74275f9-5962-46ed-bbf9-9ca7dabea845-kube-api-access-vsm2k\") pod \"barbican-operator-controller-manager-658c7b459c-fzlrm\" (UID: \"f74275f9-5962-46ed-bbf9-9ca7dabea845\") " pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:51:56.918914 master-1 kubenswrapper[4771]: I1011 10:51:56.918854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmnt\" (UniqueName: \"kubernetes.io/projected/608a645c-104f-4fab-b1c6-cbcae70ca0f4-kube-api-access-pnmnt\") pod \"heat-operator-controller-manager-68fc865f87-dfx76\" (UID: \"608a645c-104f-4fab-b1c6-cbcae70ca0f4\") " pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:51:56.920348 master-0 kubenswrapper[4790]: I1011 10:51:56.920286 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2"] Oct 11 10:51:56.921323 master-0 kubenswrapper[4790]: I1011 10:51:56.921294 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:56.924154 master-0 kubenswrapper[4790]: I1011 10:51:56.924104 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 11 10:51:56.941040 master-0 kubenswrapper[4790]: I1011 10:51:56.940953 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2"] Oct 11 10:51:56.941907 master-0 kubenswrapper[4790]: I1011 10:51:56.941874 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:51:56.948078 master-0 kubenswrapper[4790]: I1011 10:51:56.948027 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2"] Oct 11 10:51:56.968959 master-0 kubenswrapper[4790]: I1011 10:51:56.968901 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2"] Oct 11 10:51:56.972859 master-2 kubenswrapper[4776]: I1011 10:51:56.972773 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.972859 master-2 kubenswrapper[4776]: I1011 10:51:56.972844 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: 
\"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:56.973258 master-2 kubenswrapper[4776]: I1011 10:51:56.972884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.989184 master-1 kubenswrapper[4771]: I1011 10:51:56.989102 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p"] Oct 11 10:51:56.992375 master-1 kubenswrapper[4771]: I1011 10:51:56.992322 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:51:56.995071 master-2 kubenswrapper[4776]: I1011 10:51:56.994637 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"] Oct 11 10:51:56.995771 master-2 kubenswrapper[4776]: I1011 10:51:56.995584 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.008717 master-1 kubenswrapper[4771]: I1011 10:51:57.008631 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p"] Oct 11 10:51:57.019173 master-0 kubenswrapper[4790]: I1011 10:51:57.013572 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.019173 master-0 kubenswrapper[4790]: I1011 10:51:57.013654 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9ns\" (UniqueName: \"kubernetes.io/projected/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-kube-api-access-nc9ns\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.019173 master-0 kubenswrapper[4790]: I1011 10:51:57.013774 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsm2k\" (UniqueName: \"kubernetes.io/projected/f74275f9-5962-46ed-bbf9-9ca7dabea845-kube-api-access-vsm2k\") pod \"barbican-operator-controller-manager-658c7b459c-fzlrm\" (UID: \"f74275f9-5962-46ed-bbf9-9ca7dabea845\") " pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:51:57.020234 master-1 kubenswrapper[4771]: I1011 10:51:57.020143 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpww\" (UniqueName: 
\"kubernetes.io/projected/9f700fad-87cd-467c-9b8a-99a82dd72d9b-kube-api-access-mqpww\") pod \"ironic-operator-controller-manager-6b498574d4-brh6p\" (UID: \"9f700fad-87cd-467c-9b8a-99a82dd72d9b\") " pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:51:57.020644 master-1 kubenswrapper[4771]: I1011 10:51:57.020277 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnmnt\" (UniqueName: \"kubernetes.io/projected/608a645c-104f-4fab-b1c6-cbcae70ca0f4-kube-api-access-pnmnt\") pod \"heat-operator-controller-manager-68fc865f87-dfx76\" (UID: \"608a645c-104f-4fab-b1c6-cbcae70ca0f4\") " pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:51:57.021266 master-2 kubenswrapper[4776]: I1011 10:51:57.021135 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:57.027115 master-2 kubenswrapper[4776]: I1011 10:51:57.025000 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"] Oct 11 10:51:57.031840 master-2 kubenswrapper[4776]: I1011 10:51:57.029883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:57.040110 master-2 kubenswrapper[4776]: I1011 10:51:57.039406 4776 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"] Oct 11 10:51:57.041184 master-2 kubenswrapper[4776]: I1011 10:51:57.041099 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.051331 master-0 kubenswrapper[4790]: I1011 10:51:57.051276 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsm2k\" (UniqueName: \"kubernetes.io/projected/f74275f9-5962-46ed-bbf9-9ca7dabea845-kube-api-access-vsm2k\") pod \"barbican-operator-controller-manager-658c7b459c-fzlrm\" (UID: \"f74275f9-5962-46ed-bbf9-9ca7dabea845\") " pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:51:57.054394 master-2 kubenswrapper[4776]: I1011 10:51:57.054343 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"] Oct 11 10:51:57.057708 master-1 kubenswrapper[4771]: I1011 10:51:57.057636 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk"] Oct 11 10:51:57.058840 master-1 kubenswrapper[4771]: I1011 10:51:57.058805 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:51:57.073769 master-2 kubenswrapper[4776]: I1011 10:51:57.073705 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.073981 master-2 kubenswrapper[4776]: I1011 10:51:57.073828 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.073981 master-2 kubenswrapper[4776]: I1011 10:51:57.073858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: \"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.075117 master-1 kubenswrapper[4771]: I1011 10:51:57.075065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnmnt\" (UniqueName: \"kubernetes.io/projected/608a645c-104f-4fab-b1c6-cbcae70ca0f4-kube-api-access-pnmnt\") pod \"heat-operator-controller-manager-68fc865f87-dfx76\" (UID: \"608a645c-104f-4fab-b1c6-cbcae70ca0f4\") " 
pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:51:57.079028 master-1 kubenswrapper[4771]: I1011 10:51:57.078968 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk"] Oct 11 10:51:57.081339 master-0 kubenswrapper[4790]: I1011 10:51:57.080653 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:51:57.091057 master-2 kubenswrapper[4776]: I1011 10:51:57.091025 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:57.113037 master-2 kubenswrapper[4776]: I1011 10:51:57.112984 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"] Oct 11 10:51:57.113841 master-2 kubenswrapper[4776]: I1011 10:51:57.113817 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: \"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.115058 master-2 kubenswrapper[4776]: I1011 10:51:57.114743 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.115572 master-0 kubenswrapper[4790]: I1011 10:51:57.115398 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.115572 master-0 kubenswrapper[4790]: I1011 10:51:57.115457 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6fhn\" (UniqueName: \"kubernetes.io/projected/7c307dd6-17af-4fc5-8b19-d6fd59f46d04-kube-api-access-t6fhn\") pod \"horizon-operator-controller-manager-54969ff695-mxpp2\" (UID: \"7c307dd6-17af-4fc5-8b19-d6fd59f46d04\") " pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:51:57.115572 master-0 kubenswrapper[4790]: I1011 10:51:57.115496 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9ns\" (UniqueName: \"kubernetes.io/projected/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-kube-api-access-nc9ns\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.117402 master-0 kubenswrapper[4790]: E1011 10:51:57.115651 4790 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 11 10:51:57.117402 master-0 kubenswrapper[4790]: E1011 10:51:57.115789 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert podName:19f1cdae-2bd6-42f2-aedc-7da343eeab3f nodeName:}" failed. 
No retries permitted until 2025-10-11 10:51:57.615760913 +0000 UTC m=+794.170221275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert") pod "infra-operator-controller-manager-d68fd5cdf-2dkw2" (UID: "19f1cdae-2bd6-42f2-aedc-7da343eeab3f") : secret "infra-operator-webhook-server-cert" not found Oct 11 10:51:57.118900 master-2 kubenswrapper[4776]: I1011 10:51:57.116988 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"] Oct 11 10:51:57.121082 master-0 kubenswrapper[4790]: I1011 10:51:57.121040 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d"] Oct 11 10:51:57.122474 master-1 kubenswrapper[4771]: I1011 10:51:57.122343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpww\" (UniqueName: \"kubernetes.io/projected/9f700fad-87cd-467c-9b8a-99a82dd72d9b-kube-api-access-mqpww\") pod \"ironic-operator-controller-manager-6b498574d4-brh6p\" (UID: \"9f700fad-87cd-467c-9b8a-99a82dd72d9b\") " pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:51:57.122904 master-1 kubenswrapper[4771]: I1011 10:51:57.122605 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tlj\" (UniqueName: \"kubernetes.io/projected/5012c546-4311-4415-bb9a-9074edfc09e2-kube-api-access-n8tlj\") pod \"mariadb-operator-controller-manager-7f4856d67b-9lktk\" (UID: \"5012c546-4311-4415-bb9a-9074edfc09e2\") " pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:51:57.129876 master-0 kubenswrapper[4790]: I1011 10:51:57.129843 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:51:57.144493 master-0 kubenswrapper[4790]: I1011 10:51:57.144403 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d"] Oct 11 10:51:57.159472 master-1 kubenswrapper[4771]: I1011 10:51:57.159384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpww\" (UniqueName: \"kubernetes.io/projected/9f700fad-87cd-467c-9b8a-99a82dd72d9b-kube-api-access-mqpww\") pod \"ironic-operator-controller-manager-6b498574d4-brh6p\" (UID: \"9f700fad-87cd-467c-9b8a-99a82dd72d9b\") " pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:51:57.166275 master-0 kubenswrapper[4790]: I1011 10:51:57.166044 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9ns\" (UniqueName: \"kubernetes.io/projected/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-kube-api-access-nc9ns\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.168569 master-1 kubenswrapper[4771]: I1011 10:51:57.168341 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7"] Oct 11 10:51:57.170805 master-1 kubenswrapper[4771]: I1011 10:51:57.170737 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:51:57.175884 master-2 kubenswrapper[4776]: I1011 10:51:57.175820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.176351 master-2 kubenswrapper[4776]: I1011 10:51:57.175995 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.176351 master-2 kubenswrapper[4776]: I1011 10:51:57.176039 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.190772 master-1 kubenswrapper[4771]: I1011 10:51:57.190465 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7"] Oct 11 10:51:57.202889 master-1 kubenswrapper[4771]: I1011 10:51:57.202814 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:51:57.203434 master-0 kubenswrapper[4790]: I1011 10:51:57.201612 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"] Oct 11 10:51:57.203434 master-0 kubenswrapper[4790]: I1011 10:51:57.202949 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.206240 master-0 kubenswrapper[4790]: I1011 10:51:57.205916 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 11 10:51:57.216202 master-2 kubenswrapper[4776]: I1011 10:51:57.216134 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.216516 master-0 kubenswrapper[4790]: I1011 10:51:57.216423 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6fhn\" (UniqueName: \"kubernetes.io/projected/7c307dd6-17af-4fc5-8b19-d6fd59f46d04-kube-api-access-t6fhn\") pod \"horizon-operator-controller-manager-54969ff695-mxpp2\" (UID: \"7c307dd6-17af-4fc5-8b19-d6fd59f46d04\") " pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:51:57.216856 master-0 kubenswrapper[4790]: I1011 10:51:57.216558 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnqz\" (UniqueName: 
\"kubernetes.io/projected/5409d60b-bd64-4ef3-9531-b264971d7d85-kube-api-access-ntnqz\") pod \"nova-operator-controller-manager-64487ccd4d-fzt8d\" (UID: \"5409d60b-bd64-4ef3-9531-b264971d7d85\") " pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:51:57.223393 master-1 kubenswrapper[4771]: I1011 10:51:57.223305 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbjk\" (UniqueName: \"kubernetes.io/projected/87e9a396-b599-4e77-ab0d-24602bde55eb-kube-api-access-qjbjk\") pod \"octavia-operator-controller-manager-f456fb6cd-wnhd7\" (UID: \"87e9a396-b599-4e77-ab0d-24602bde55eb\") " pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:51:57.223708 master-1 kubenswrapper[4771]: I1011 10:51:57.223417 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tlj\" (UniqueName: \"kubernetes.io/projected/5012c546-4311-4415-bb9a-9074edfc09e2-kube-api-access-n8tlj\") pod \"mariadb-operator-controller-manager-7f4856d67b-9lktk\" (UID: \"5012c546-4311-4415-bb9a-9074edfc09e2\") " pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:51:57.228978 master-2 kubenswrapper[4776]: I1011 10:51:57.228913 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:57.232870 master-2 kubenswrapper[4776]: I1011 10:51:57.232599 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.238477 master-0 kubenswrapper[4790]: I1011 10:51:57.236932 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"] Oct 11 10:51:57.256128 master-0 kubenswrapper[4790]: I1011 10:51:57.254721 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg"] Oct 11 10:51:57.256332 master-0 kubenswrapper[4790]: I1011 10:51:57.256310 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:51:57.264744 master-2 kubenswrapper[4776]: I1011 10:51:57.264653 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.277834 master-2 kubenswrapper[4776]: I1011 10:51:57.277607 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.281982 master-0 kubenswrapper[4790]: I1011 10:51:57.281850 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg"] Oct 11 10:51:57.288205 master-0 kubenswrapper[4790]: I1011 10:51:57.288164 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6fhn\" (UniqueName: \"kubernetes.io/projected/7c307dd6-17af-4fc5-8b19-d6fd59f46d04-kube-api-access-t6fhn\") pod \"horizon-operator-controller-manager-54969ff695-mxpp2\" (UID: \"7c307dd6-17af-4fc5-8b19-d6fd59f46d04\") " pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:51:57.301286 master-1 kubenswrapper[4771]: I1011 10:51:57.301224 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tlj\" (UniqueName: \"kubernetes.io/projected/5012c546-4311-4415-bb9a-9074edfc09e2-kube-api-access-n8tlj\") pod \"mariadb-operator-controller-manager-7f4856d67b-9lktk\" (UID: \"5012c546-4311-4415-bb9a-9074edfc09e2\") " pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:51:57.301758 master-0 kubenswrapper[4790]: I1011 10:51:57.301393 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc"] Oct 11 10:51:57.303215 master-0 kubenswrapper[4790]: I1011 10:51:57.303188 4790 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:51:57.312084 master-1 kubenswrapper[4771]: I1011 10:51:57.311622 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:51:57.318765 master-2 kubenswrapper[4776]: I1011 10:51:57.318184 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.323892 master-0 kubenswrapper[4790]: I1011 10:51:57.317874 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.323892 master-0 kubenswrapper[4790]: I1011 10:51:57.317948 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnqz\" (UniqueName: \"kubernetes.io/projected/5409d60b-bd64-4ef3-9531-b264971d7d85-kube-api-access-ntnqz\") pod \"nova-operator-controller-manager-64487ccd4d-fzt8d\" (UID: \"5409d60b-bd64-4ef3-9531-b264971d7d85\") " pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:51:57.323892 master-0 kubenswrapper[4790]: I1011 10:51:57.317978 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbgk\" (UniqueName: \"kubernetes.io/projected/21c0c53d-d3a0-45bf-84b3-930269d44522-kube-api-access-bmbgk\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.324462 master-1 kubenswrapper[4771]: I1011 10:51:57.324398 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbjk\" (UniqueName: \"kubernetes.io/projected/87e9a396-b599-4e77-ab0d-24602bde55eb-kube-api-access-qjbjk\") pod \"octavia-operator-controller-manager-f456fb6cd-wnhd7\" (UID: \"87e9a396-b599-4e77-ab0d-24602bde55eb\") " pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:51:57.325573 master-0 kubenswrapper[4790]: I1011 10:51:57.325518 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc"] Oct 11 10:51:57.326181 master-2 kubenswrapper[4776]: I1011 10:51:57.326131 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"] Oct 11 10:51:57.334308 master-2 kubenswrapper[4776]: I1011 10:51:57.334244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.335485 master-2 kubenswrapper[4776]: I1011 10:51:57.335441 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.380207 master-2 kubenswrapper[4776]: I1011 10:51:57.380059 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.381826 master-1 kubenswrapper[4771]: I1011 10:51:57.381655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbjk\" (UniqueName: \"kubernetes.io/projected/87e9a396-b599-4e77-ab0d-24602bde55eb-kube-api-access-qjbjk\") pod \"octavia-operator-controller-manager-f456fb6cd-wnhd7\" (UID: \"87e9a396-b599-4e77-ab0d-24602bde55eb\") " pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:51:57.384351 master-0 kubenswrapper[4790]: I1011 10:51:57.384291 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnqz\" (UniqueName: \"kubernetes.io/projected/5409d60b-bd64-4ef3-9531-b264971d7d85-kube-api-access-ntnqz\") pod \"nova-operator-controller-manager-64487ccd4d-fzt8d\" (UID: \"5409d60b-bd64-4ef3-9531-b264971d7d85\") " pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:51:57.384595 master-1 kubenswrapper[4771]: I1011 10:51:57.384123 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88"] Oct 11 10:51:57.386471 master-1 kubenswrapper[4771]: I1011 10:51:57.386429 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:51:57.392837 master-2 kubenswrapper[4776]: I1011 10:51:57.390006 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.401171 master-2 kubenswrapper[4776]: I1011 10:51:57.400170 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"] Oct 11 10:51:57.402469 master-1 kubenswrapper[4771]: I1011 10:51:57.402337 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:51:57.418048 master-1 kubenswrapper[4771]: I1011 10:51:57.408042 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88"] Oct 11 10:51:57.418776 master-0 kubenswrapper[4790]: I1011 10:51:57.418682 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.418776 master-0 kubenswrapper[4790]: I1011 10:51:57.418761 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrt2d\" (UniqueName: \"kubernetes.io/projected/b135d684-0aee-445a-8c2b-a5c5a656b626-kube-api-access-lrt2d\") pod \"ovn-operator-controller-manager-f9dd6d5b6-qt8lg\" (UID: \"b135d684-0aee-445a-8c2b-a5c5a656b626\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:51:57.419089 master-0 kubenswrapper[4790]: I1011 10:51:57.418817 
4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbgk\" (UniqueName: \"kubernetes.io/projected/21c0c53d-d3a0-45bf-84b3-930269d44522-kube-api-access-bmbgk\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.419089 master-0 kubenswrapper[4790]: E1011 10:51:57.418991 4790 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 11 10:51:57.419089 master-0 kubenswrapper[4790]: I1011 10:51:57.419034 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lkw\" (UniqueName: \"kubernetes.io/projected/6c42b87f-92a0-4250-bc3a-7b117dcf8df8-kube-api-access-x8lkw\") pod \"placement-operator-controller-manager-569c9576c5-wpgbc\" (UID: \"6c42b87f-92a0-4250-bc3a-7b117dcf8df8\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:51:57.419210 master-0 kubenswrapper[4790]: E1011 10:51:57.419098 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert podName:21c0c53d-d3a0-45bf-84b3-930269d44522 nodeName:}" failed. No retries permitted until 2025-10-11 10:51:57.919072548 +0000 UTC m=+794.473532930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert") pod "openstack-baremetal-operator-controller-manager-78696cb447sdltf" (UID: "21c0c53d-d3a0-45bf-84b3-930269d44522") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 11 10:51:57.425899 master-1 kubenswrapper[4771]: I1011 10:51:57.425847 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnwq\" (UniqueName: \"kubernetes.io/projected/a0b63855-5ade-4060-9016-a2009f5e5b45-kube-api-access-pnnwq\") pod \"telemetry-operator-controller-manager-7585684bd7-x8n88\" (UID: \"a0b63855-5ade-4060-9016-a2009f5e5b45\") " pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:51:57.426458 master-1 kubenswrapper[4771]: I1011 10:51:57.426387 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m"] Oct 11 10:51:57.428686 master-1 kubenswrapper[4771]: I1011 10:51:57.428557 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:51:57.446095 master-1 kubenswrapper[4771]: I1011 10:51:57.446037 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m"] Oct 11 10:51:57.453187 master-2 kubenswrapper[4776]: I1011 10:51:57.453135 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"] Oct 11 10:51:57.454664 master-2 kubenswrapper[4776]: I1011 10:51:57.454633 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.461406 master-2 kubenswrapper[4776]: I1011 10:51:57.461367 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.465349 master-0 kubenswrapper[4790]: I1011 10:51:57.465259 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbgk\" (UniqueName: \"kubernetes.io/projected/21c0c53d-d3a0-45bf-84b3-930269d44522-kube-api-access-bmbgk\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:51:57.470400 master-2 kubenswrapper[4776]: I1011 10:51:57.470350 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"] Oct 11 10:51:57.490242 master-2 kubenswrapper[4776]: I1011 10:51:57.490202 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.505219 master-1 kubenswrapper[4771]: I1011 10:51:57.505025 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:51:57.511455 master-0 kubenswrapper[4790]: I1011 10:51:57.511373 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:51:57.524297 master-0 kubenswrapper[4790]: I1011 10:51:57.524233 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lkw\" (UniqueName: \"kubernetes.io/projected/6c42b87f-92a0-4250-bc3a-7b117dcf8df8-kube-api-access-x8lkw\") pod \"placement-operator-controller-manager-569c9576c5-wpgbc\" (UID: \"6c42b87f-92a0-4250-bc3a-7b117dcf8df8\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:51:57.524805 master-0 kubenswrapper[4790]: I1011 10:51:57.524769 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrt2d\" (UniqueName: \"kubernetes.io/projected/b135d684-0aee-445a-8c2b-a5c5a656b626-kube-api-access-lrt2d\") pod \"ovn-operator-controller-manager-f9dd6d5b6-qt8lg\" (UID: \"b135d684-0aee-445a-8c2b-a5c5a656b626\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:51:57.526728 master-1 kubenswrapper[4771]: I1011 10:51:57.526648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzt2\" (UniqueName: \"kubernetes.io/projected/84f24be5-1586-471c-b099-c6c81ef56674-kube-api-access-7bzt2\") pod \"test-operator-controller-manager-565dfd7bb9-bbh7m\" (UID: \"84f24be5-1586-471c-b099-c6c81ef56674\") " pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:51:57.527171 master-1 kubenswrapper[4771]: I1011 10:51:57.526938 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnwq\" (UniqueName: \"kubernetes.io/projected/a0b63855-5ade-4060-9016-a2009f5e5b45-kube-api-access-pnnwq\") pod \"telemetry-operator-controller-manager-7585684bd7-x8n88\" (UID: \"a0b63855-5ade-4060-9016-a2009f5e5b45\") " 
pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:51:57.533064 master-0 kubenswrapper[4790]: I1011 10:51:57.532943 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm"] Oct 11 10:51:57.533082 master-2 kubenswrapper[4776]: I1011 10:51:57.532908 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.549752 master-1 kubenswrapper[4771]: I1011 10:51:57.549698 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnwq\" (UniqueName: \"kubernetes.io/projected/a0b63855-5ade-4060-9016-a2009f5e5b45-kube-api-access-pnnwq\") pod \"telemetry-operator-controller-manager-7585684bd7-x8n88\" (UID: \"a0b63855-5ade-4060-9016-a2009f5e5b45\") " pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:51:57.550946 master-0 kubenswrapper[4790]: I1011 10:51:57.550899 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrt2d\" (UniqueName: \"kubernetes.io/projected/b135d684-0aee-445a-8c2b-a5c5a656b626-kube-api-access-lrt2d\") pod \"ovn-operator-controller-manager-f9dd6d5b6-qt8lg\" (UID: \"b135d684-0aee-445a-8c2b-a5c5a656b626\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:51:57.556906 master-0 kubenswrapper[4790]: I1011 10:51:57.556864 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lkw\" (UniqueName: \"kubernetes.io/projected/6c42b87f-92a0-4250-bc3a-7b117dcf8df8-kube-api-access-x8lkw\") pod 
\"placement-operator-controller-manager-569c9576c5-wpgbc\" (UID: \"6c42b87f-92a0-4250-bc3a-7b117dcf8df8\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:51:57.571450 master-0 kubenswrapper[4790]: I1011 10:51:57.571385 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:51:57.587582 master-0 kubenswrapper[4790]: I1011 10:51:57.587156 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:51:57.593791 master-2 kubenswrapper[4776]: I1011 10:51:57.593734 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.598190 master-2 kubenswrapper[4776]: I1011 10:51:57.598150 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:57.627450 master-1 kubenswrapper[4771]: I1011 10:51:57.627343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzt2\" (UniqueName: \"kubernetes.io/projected/84f24be5-1586-471c-b099-c6c81ef56674-kube-api-access-7bzt2\") pod \"test-operator-controller-manager-565dfd7bb9-bbh7m\" (UID: \"84f24be5-1586-471c-b099-c6c81ef56674\") " pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:51:57.628257 master-0 kubenswrapper[4790]: I1011 10:51:57.626803 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.634585 master-0 kubenswrapper[4790]: I1011 10:51:57.632504 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f1cdae-2bd6-42f2-aedc-7da343eeab3f-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-2dkw2\" (UID: \"19f1cdae-2bd6-42f2-aedc-7da343eeab3f\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:51:57.642283 master-0 kubenswrapper[4790]: I1011 10:51:57.642237 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:51:57.652443 master-2 kubenswrapper[4776]: I1011 10:51:57.652383 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.654360 master-0 kubenswrapper[4790]: I1011 10:51:57.654297 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"] Oct 11 10:51:57.657878 master-0 kubenswrapper[4790]: I1011 10:51:57.657113 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" Oct 11 10:51:57.669361 master-0 kubenswrapper[4790]: I1011 10:51:57.669097 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 11 10:51:57.680740 master-0 kubenswrapper[4790]: I1011 10:51:57.680664 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"] Oct 11 10:51:57.688459 master-1 kubenswrapper[4771]: I1011 10:51:57.688183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzt2\" (UniqueName: \"kubernetes.io/projected/84f24be5-1586-471c-b099-c6c81ef56674-kube-api-access-7bzt2\") pod \"test-operator-controller-manager-565dfd7bb9-bbh7m\" (UID: \"84f24be5-1586-471c-b099-c6c81ef56674\") " pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:51:57.696650 master-2 kubenswrapper[4776]: I1011 10:51:57.696549 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.721597 master-1 kubenswrapper[4771]: I1011 10:51:57.721543 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:51:57.736896 master-1 kubenswrapper[4771]: I1011 10:51:57.736830 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76"] Oct 11 10:51:57.740257 master-2 kubenswrapper[4776]: I1011 10:51:57.740161 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:57.746983 master-1 kubenswrapper[4771]: I1011 10:51:57.746724 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:51:57.756503 master-1 kubenswrapper[4771]: I1011 10:51:57.756433 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:51:57.768720 master-2 kubenswrapper[4776]: I1011 10:51:57.768470 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.777305 master-2 kubenswrapper[4776]: I1011 10:51:57.777256 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"
Oct 11 10:51:57.779156 master-0 kubenswrapper[4790]: I1011 10:51:57.778132 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" event={"ID":"f74275f9-5962-46ed-bbf9-9ca7dabea845","Type":"ContainerStarted","Data":"0b4365c7dc2ee855855b8ac105a5b9668bebabbf177ea6828f276517bdfc93db"}
Oct 11 10:51:57.782019 master-1 kubenswrapper[4771]: I1011 10:51:57.781947 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"]
Oct 11 10:51:57.783162 master-1 kubenswrapper[4771]: I1011 10:51:57.783127 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"
Oct 11 10:51:57.809696 master-1 kubenswrapper[4771]: I1011 10:51:57.809416 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"]
Oct 11 10:51:57.817474 master-1 kubenswrapper[4771]: I1011 10:51:57.817427 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p"]
Oct 11 10:51:57.820683 master-1 kubenswrapper[4771]: W1011 10:51:57.820504 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f700fad_87cd_467c_9b8a_99a82dd72d9b.slice/crio-112980b751234ca8e7f2218ebf4a72c9699c90415731d0169a0762eabd82689e WatchSource:0}: Error finding container 112980b751234ca8e7f2218ebf4a72c9699c90415731d0169a0762eabd82689e: Status 404 returned error can't find the container with id 112980b751234ca8e7f2218ebf4a72c9699c90415731d0169a0762eabd82689e
Oct 11 10:51:57.834861 master-0 kubenswrapper[4790]: I1011 10:51:57.834799 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:57.834998 master-0 kubenswrapper[4790]: I1011 10:51:57.834972 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhbq\" (UniqueName: \"kubernetes.io/projected/eaac04d2-f217-437a-b0db-9cc23f0373d9-kube-api-access-xkhbq\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:57.848350 master-0 kubenswrapper[4790]: I1011 10:51:57.848302 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2"
Oct 11 10:51:57.869025 master-2 kubenswrapper[4776]: I1011 10:51:57.868983 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"]
Oct 11 10:51:57.874744 master-1 kubenswrapper[4771]: I1011 10:51:57.872547 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk"]
Oct 11 10:51:57.878926 master-1 kubenswrapper[4771]: W1011 10:51:57.878675 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5012c546_4311_4415_bb9a_9074edfc09e2.slice/crio-50641270ba024a3ae8b506aa736cce25288b44114f86c69b7293c2779adc9e77 WatchSource:0}: Error finding container 50641270ba024a3ae8b506aa736cce25288b44114f86c69b7293c2779adc9e77: Status 404 returned error can't find the container with id 50641270ba024a3ae8b506aa736cce25288b44114f86c69b7293c2779adc9e77
Oct 11 10:51:57.932655 master-1 kubenswrapper[4771]: I1011 10:51:57.932434 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npkcr\" (UniqueName: \"kubernetes.io/projected/dd610cd6-c61f-4cc3-9e63-5cede4c8393b-kube-api-access-npkcr\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp\" (UID: \"dd610cd6-c61f-4cc3-9e63-5cede4c8393b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"
Oct 11 10:51:57.937124 master-0 kubenswrapper[4790]: I1011 10:51:57.935985 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhbq\" (UniqueName: \"kubernetes.io/projected/eaac04d2-f217-437a-b0db-9cc23f0373d9-kube-api-access-xkhbq\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:57.937124 master-0 kubenswrapper[4790]: I1011 10:51:57.936050 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"
Oct 11 10:51:57.937124 master-0 kubenswrapper[4790]: I1011 10:51:57.936073 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:57.937124 master-0 kubenswrapper[4790]: E1011 10:51:57.936283 4790 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Oct 11 10:51:57.937124 master-0 kubenswrapper[4790]: E1011 10:51:57.936369 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert podName:eaac04d2-f217-437a-b0db-9cc23f0373d9 nodeName:}" failed. No retries permitted until 2025-10-11 10:51:58.436349116 +0000 UTC m=+794.990809408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert") pod "openstack-operator-controller-manager-6df4464d49-mxsms" (UID: "eaac04d2-f217-437a-b0db-9cc23f0373d9") : secret "webhook-server-cert" not found
Oct 11 10:51:57.940756 master-0 kubenswrapper[4790]: I1011 10:51:57.940459 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21c0c53d-d3a0-45bf-84b3-930269d44522-cert\") pod \"openstack-baremetal-operator-controller-manager-78696cb447sdltf\" (UID: \"21c0c53d-d3a0-45bf-84b3-930269d44522\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"
Oct 11 10:51:57.961652 master-0 kubenswrapper[4790]: I1011 10:51:57.961281 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhbq\" (UniqueName: \"kubernetes.io/projected/eaac04d2-f217-437a-b0db-9cc23f0373d9-kube-api-access-xkhbq\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:57.964571 master-2 kubenswrapper[4776]: I1011 10:51:57.964500 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"]
Oct 11 10:51:57.993335 master-1 kubenswrapper[4771]: I1011 10:51:57.993106 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7"]
Oct 11 10:51:57.993335 master-1 kubenswrapper[4771]: W1011 10:51:57.993284 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87e9a396_b599_4e77_ab0d_24602bde55eb.slice/crio-57f05263c3637aefe7d23580a4f13e660b7d95dbb0bfa6de7a5815238f573e40 WatchSource:0}: Error finding container 57f05263c3637aefe7d23580a4f13e660b7d95dbb0bfa6de7a5815238f573e40: Status 404 returned error can't find the container with id 57f05263c3637aefe7d23580a4f13e660b7d95dbb0bfa6de7a5815238f573e40
Oct 11 10:51:58.035421 master-0 kubenswrapper[4790]: I1011 10:51:58.035315 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d"]
Oct 11 10:51:58.035908 master-1 kubenswrapper[4771]: I1011 10:51:58.034773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npkcr\" (UniqueName: \"kubernetes.io/projected/dd610cd6-c61f-4cc3-9e63-5cede4c8393b-kube-api-access-npkcr\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp\" (UID: \"dd610cd6-c61f-4cc3-9e63-5cede4c8393b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"
Oct 11 10:51:58.041881 master-0 kubenswrapper[4790]: W1011 10:51:58.041803 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5409d60b_bd64_4ef3_9531_b264971d7d85.slice/crio-ff5b3345190b56922afd299bda24109abb663fde8b39b786a9be7d57ab57b654 WatchSource:0}: Error finding container ff5b3345190b56922afd299bda24109abb663fde8b39b786a9be7d57ab57b654: Status 404 returned error can't find the container with id ff5b3345190b56922afd299bda24109abb663fde8b39b786a9be7d57ab57b654
Oct 11 10:51:58.057570 master-1 kubenswrapper[4771]: I1011 10:51:58.057476 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npkcr\" (UniqueName: \"kubernetes.io/projected/dd610cd6-c61f-4cc3-9e63-5cede4c8393b-kube-api-access-npkcr\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp\" (UID: \"dd610cd6-c61f-4cc3-9e63-5cede4c8393b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"
Oct 11 10:51:58.101559 master-1 kubenswrapper[4771]: I1011 10:51:58.101478 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"
Oct 11 10:51:58.108462 master-2 kubenswrapper[4776]: I1011 10:51:58.108410 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"]
Oct 11 10:51:58.113019 master-2 kubenswrapper[4776]: I1011 10:51:58.112968 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"]
Oct 11 10:51:58.130617 master-0 kubenswrapper[4790]: I1011 10:51:58.130563 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"
Oct 11 10:51:58.141817 master-0 kubenswrapper[4790]: I1011 10:51:58.141762 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc"]
Oct 11 10:51:58.145668 master-0 kubenswrapper[4790]: I1011 10:51:58.145645 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg"]
Oct 11 10:51:58.145939 master-0 kubenswrapper[4790]: W1011 10:51:58.145914 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb135d684_0aee_445a_8c2b_a5c5a656b626.slice/crio-bf58d2dedb74858dd134dc115b577a38ad6f4937ef9c552587299a28ded00d4d WatchSource:0}: Error finding container bf58d2dedb74858dd134dc115b577a38ad6f4937ef9c552587299a28ded00d4d: Status 404 returned error can't find the container with id bf58d2dedb74858dd134dc115b577a38ad6f4937ef9c552587299a28ded00d4d
Oct 11 10:51:58.162587 master-0 kubenswrapper[4790]: I1011 10:51:58.162517 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2"]
Oct 11 10:51:58.174304 master-0 kubenswrapper[4790]: W1011 10:51:58.174227 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c307dd6_17af_4fc5_8b19_d6fd59f46d04.slice/crio-28f5797f04e0fe592135e1a4cf930f9905c1390e10d1402225e7ae5e4a602f70 WatchSource:0}: Error finding container 28f5797f04e0fe592135e1a4cf930f9905c1390e10d1402225e7ae5e4a602f70: Status 404 returned error can't find the container with id 28f5797f04e0fe592135e1a4cf930f9905c1390e10d1402225e7ae5e4a602f70
Oct 11 10:51:58.178302 master-1 kubenswrapper[4771]: I1011 10:51:58.178075 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88"]
Oct 11 10:51:58.240787 master-1 kubenswrapper[4771]: I1011 10:51:58.240716 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m"]
Oct 11 10:51:58.244620 master-1 kubenswrapper[4771]: W1011 10:51:58.244558 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84f24be5_1586_471c_b099_c6c81ef56674.slice/crio-54b68ccfd8bee58168870a664dfb4642da4b115c8a3eb996886c35d744721edd WatchSource:0}: Error finding container 54b68ccfd8bee58168870a664dfb4642da4b115c8a3eb996886c35d744721edd: Status 404 returned error can't find the container with id 54b68ccfd8bee58168870a664dfb4642da4b115c8a3eb996886c35d744721edd
Oct 11 10:51:58.314461 master-2 kubenswrapper[4776]: I1011 10:51:58.314424 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"]
Oct 11 10:51:58.315533 master-1 kubenswrapper[4771]: I1011 10:51:58.315443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" event={"ID":"9f700fad-87cd-467c-9b8a-99a82dd72d9b","Type":"ContainerStarted","Data":"112980b751234ca8e7f2218ebf4a72c9699c90415731d0169a0762eabd82689e"}
Oct 11 10:51:58.316877 master-1 kubenswrapper[4771]: I1011 10:51:58.316829 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" event={"ID":"87e9a396-b599-4e77-ab0d-24602bde55eb","Type":"ContainerStarted","Data":"57f05263c3637aefe7d23580a4f13e660b7d95dbb0bfa6de7a5815238f573e40"}
Oct 11 10:51:58.317972 master-1 kubenswrapper[4771]: I1011 10:51:58.317900 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" event={"ID":"a0b63855-5ade-4060-9016-a2009f5e5b45","Type":"ContainerStarted","Data":"1417e878c4ec8c33080524292f7e4fb3af87532607bf352cb321b5d6127bf304"}
Oct 11 10:51:58.319276 master-1 kubenswrapper[4771]: I1011 10:51:58.319225 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" event={"ID":"608a645c-104f-4fab-b1c6-cbcae70ca0f4","Type":"ContainerStarted","Data":"d4df31c04fb831f57ffe738508c7cd28bae0f4e0560b3d7fbc927e906485547c"}
Oct 11 10:51:58.319338 master-2 kubenswrapper[4776]: W1011 10:51:58.319217 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4fb1c59_06e0_48e3_a428_e104afe4c0f7.slice/crio-e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473 WatchSource:0}: Error finding container e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473: Status 404 returned error can't find the container with id e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473
Oct 11 10:51:58.321348 master-1 kubenswrapper[4771]: I1011 10:51:58.321300 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" event={"ID":"84f24be5-1586-471c-b099-c6c81ef56674","Type":"ContainerStarted","Data":"54b68ccfd8bee58168870a664dfb4642da4b115c8a3eb996886c35d744721edd"}
Oct 11 10:51:58.322492 master-1 kubenswrapper[4771]: I1011 10:51:58.322446 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" event={"ID":"5012c546-4311-4415-bb9a-9074edfc09e2","Type":"ContainerStarted","Data":"50641270ba024a3ae8b506aa736cce25288b44114f86c69b7293c2779adc9e77"}
Oct 11 10:51:58.341934 master-0 kubenswrapper[4790]: I1011 10:51:58.341819 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2"]
Oct 11 10:51:58.351095 master-0 kubenswrapper[4790]: W1011 10:51:58.351029 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f1cdae_2bd6_42f2_aedc_7da343eeab3f.slice/crio-4266f1178d515a46f7ab4c097c954e43465d0e1f07c76e2b362f1350921b0b49 WatchSource:0}: Error finding container 4266f1178d515a46f7ab4c097c954e43465d0e1f07c76e2b362f1350921b0b49: Status 404 returned error can't find the container with id 4266f1178d515a46f7ab4c097c954e43465d0e1f07c76e2b362f1350921b0b49
Oct 11 10:51:58.438955 master-2 kubenswrapper[4776]: I1011 10:51:58.438896 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"]
Oct 11 10:51:58.441323 master-2 kubenswrapper[4776]: W1011 10:51:58.441275 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a4deee_4f6c_4088_9a56_4d9141924af2.slice/crio-480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3 WatchSource:0}: Error finding container 480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3: Status 404 returned error can't find the container with id 480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3
Oct 11 10:51:58.443855 master-0 kubenswrapper[4790]: I1011 10:51:58.443768 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:58.444176 master-0 kubenswrapper[4790]: E1011 10:51:58.444026 4790 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Oct 11 10:51:58.444176 master-0 kubenswrapper[4790]: E1011 10:51:58.444144 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert podName:eaac04d2-f217-437a-b0db-9cc23f0373d9 nodeName:}" failed. No retries permitted until 2025-10-11 10:51:59.444121505 +0000 UTC m=+795.998581807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert") pod "openstack-operator-controller-manager-6df4464d49-mxsms" (UID: "eaac04d2-f217-437a-b0db-9cc23f0373d9") : secret "webhook-server-cert" not found
Oct 11 10:51:58.506730 master-2 kubenswrapper[4776]: I1011 10:51:58.506666 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473"}
Oct 11 10:51:58.508874 master-2 kubenswrapper[4776]: I1011 10:51:58.508713 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"445d7dca6c4f95b76f060cbfd085b83f0586f8444df7d20ef904a68c1d3fb21f"}
Oct 11 10:51:58.510061 master-2 kubenswrapper[4776]: I1011 10:51:58.510026 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3"}
Oct 11 10:51:58.511194 master-2 kubenswrapper[4776]: I1011 10:51:58.511153 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"c469615d7bd87268909321c5098b3faeb326076e290d36e967ecd0e9b0fb3191"}
Oct 11 10:51:58.512251 master-2 kubenswrapper[4776]: I1011 10:51:58.512226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"42535b5d5ed832d2129ea582ffaf02ba02b14faf51b4e26ed245811335e92256"}
Oct 11 10:51:58.513516 master-2 kubenswrapper[4776]: I1011 10:51:58.513302 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"1e79ce35fbd382137739b7787ca5ecbfc1355f62841bbb8f6ea27184b5230a4e"}
Oct 11 10:51:58.515216 master-2 kubenswrapper[4776]: I1011 10:51:58.515132 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"a99a07e811537522836ee684f7c91b124316af68e62ed9c5c2d02daa8288371e"}
Oct 11 10:51:58.516443 master-2 kubenswrapper[4776]: I1011 10:51:58.516420 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"2325dbc1a300f4db8e2127277016cc7fe6292252edd8b7bf8c08cf345ac13ea2"}
Oct 11 10:51:58.567132 master-1 kubenswrapper[4771]: I1011 10:51:58.567067 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp"]
Oct 11 10:51:58.598182 master-0 kubenswrapper[4790]: I1011 10:51:58.597142 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"]
Oct 11 10:51:58.605214 master-0 kubenswrapper[4790]: W1011 10:51:58.605155 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21c0c53d_d3a0_45bf_84b3_930269d44522.slice/crio-1a5deb5d4077f2ddea7f4af077eb7bf664c066c6489b47e900a513d1b1aa7168 WatchSource:0}: Error finding container 1a5deb5d4077f2ddea7f4af077eb7bf664c066c6489b47e900a513d1b1aa7168: Status 404 returned error can't find the container with id 1a5deb5d4077f2ddea7f4af077eb7bf664c066c6489b47e900a513d1b1aa7168
Oct 11 10:51:58.798097 master-0 kubenswrapper[4790]: I1011 10:51:58.798023 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" event={"ID":"19f1cdae-2bd6-42f2-aedc-7da343eeab3f","Type":"ContainerStarted","Data":"4266f1178d515a46f7ab4c097c954e43465d0e1f07c76e2b362f1350921b0b49"}
Oct 11 10:51:58.800003 master-0 kubenswrapper[4790]: I1011 10:51:58.799926 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" event={"ID":"7c307dd6-17af-4fc5-8b19-d6fd59f46d04","Type":"ContainerStarted","Data":"28f5797f04e0fe592135e1a4cf930f9905c1390e10d1402225e7ae5e4a602f70"}
Oct 11 10:51:58.801277 master-0 kubenswrapper[4790]: I1011 10:51:58.801239 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" event={"ID":"6c42b87f-92a0-4250-bc3a-7b117dcf8df8","Type":"ContainerStarted","Data":"e5a4e321cec755b4c9d6d97797b6ea153af34773a75fba9c871f395aa01b258f"}
Oct 11 10:51:58.803088 master-0 kubenswrapper[4790]: I1011 10:51:58.803052 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" event={"ID":"21c0c53d-d3a0-45bf-84b3-930269d44522","Type":"ContainerStarted","Data":"1a5deb5d4077f2ddea7f4af077eb7bf664c066c6489b47e900a513d1b1aa7168"}
Oct 11 10:51:58.805231 master-0 kubenswrapper[4790]: I1011 10:51:58.805192 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" event={"ID":"5409d60b-bd64-4ef3-9531-b264971d7d85","Type":"ContainerStarted","Data":"ff5b3345190b56922afd299bda24109abb663fde8b39b786a9be7d57ab57b654"}
Oct 11 10:51:58.806295 master-0 kubenswrapper[4790]: I1011 10:51:58.806254 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" event={"ID":"b135d684-0aee-445a-8c2b-a5c5a656b626","Type":"ContainerStarted","Data":"bf58d2dedb74858dd134dc115b577a38ad6f4937ef9c552587299a28ded00d4d"}
Oct 11 10:51:59.330790 master-1 kubenswrapper[4771]: I1011 10:51:59.330706 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp" event={"ID":"dd610cd6-c61f-4cc3-9e63-5cede4c8393b","Type":"ContainerStarted","Data":"0b4affbab944663a634f511679cdcb6056a1dd538220c52ad684e595b571f6f1"}
Oct 11 10:51:59.461787 master-0 kubenswrapper[4790]: I1011 10:51:59.461132 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:59.467854 master-0 kubenswrapper[4790]: I1011 10:51:59.467812 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaac04d2-f217-437a-b0db-9cc23f0373d9-cert\") pod \"openstack-operator-controller-manager-6df4464d49-mxsms\" (UID: \"eaac04d2-f217-437a-b0db-9cc23f0373d9\") " pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:59.508366 master-0 kubenswrapper[4790]: I1011 10:51:59.508316 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:51:59.829913 master-0 kubenswrapper[4790]: I1011 10:51:59.819020 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" event={"ID":"f74275f9-5962-46ed-bbf9-9ca7dabea845","Type":"ContainerStarted","Data":"9056e918687243915cd44a8c7aa8b91c8d48787e2c6f2f16ce172469ec0791d7"}
Oct 11 10:51:59.931826 master-0 kubenswrapper[4790]: I1011 10:51:59.931753 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"]
Oct 11 10:52:00.313214 master-0 kubenswrapper[4790]: W1011 10:52:00.313162 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaac04d2_f217_437a_b0db_9cc23f0373d9.slice/crio-2e8313d5669b88da027d018d458dddc0a47010415bd8b882cbecbed62e368acc WatchSource:0}: Error finding container 2e8313d5669b88da027d018d458dddc0a47010415bd8b882cbecbed62e368acc: Status 404 returned error can't find the container with id 2e8313d5669b88da027d018d458dddc0a47010415bd8b882cbecbed62e368acc
Oct 11 10:52:00.828045 master-0 kubenswrapper[4790]: I1011 10:52:00.827964 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" event={"ID":"f74275f9-5962-46ed-bbf9-9ca7dabea845","Type":"ContainerStarted","Data":"c469b1f9624dc83431ecfa1097a646cc5204df163b5e9600f542f9184e330157"}
Oct 11 10:52:00.828244 master-0 kubenswrapper[4790]: I1011 10:52:00.828076 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm"
Oct 11 10:52:00.829269 master-0 kubenswrapper[4790]: I1011 10:52:00.829215 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" event={"ID":"eaac04d2-f217-437a-b0db-9cc23f0373d9","Type":"ContainerStarted","Data":"2e8313d5669b88da027d018d458dddc0a47010415bd8b882cbecbed62e368acc"}
Oct 11 10:52:02.845238 master-0 kubenswrapper[4790]: I1011 10:52:02.845174 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" event={"ID":"b135d684-0aee-445a-8c2b-a5c5a656b626","Type":"ContainerStarted","Data":"f640e13f11de91b617029d2fdd39a8a326a773b2d9e02b6521eb9a33a08101e0"}
Oct 11 10:52:02.853318 master-0 kubenswrapper[4790]: I1011 10:52:02.847006 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" event={"ID":"5409d60b-bd64-4ef3-9531-b264971d7d85","Type":"ContainerStarted","Data":"72acd123369e5a79e158e1c77d131a1cd9069735e8e1d24a9b1885d47062cd04"}
Oct 11 10:52:02.853318 master-0 kubenswrapper[4790]: I1011 10:52:02.848693 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" event={"ID":"19f1cdae-2bd6-42f2-aedc-7da343eeab3f","Type":"ContainerStarted","Data":"c99a97fb3d1e339f0f59a39f0083a7b1d3f32414e607e3d9b2728bda5d2ba691"}
Oct 11 10:52:02.853318 master-0 kubenswrapper[4790]: I1011 10:52:02.851198 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" event={"ID":"7c307dd6-17af-4fc5-8b19-d6fd59f46d04","Type":"ContainerStarted","Data":"da4c04407ccb096e6ac194d0124c5080b7ca07594b2e2ddf939bdcf0e0944300"}
Oct 11 10:52:02.853318 master-0 kubenswrapper[4790]: I1011 10:52:02.852621 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" event={"ID":"6c42b87f-92a0-4250-bc3a-7b117dcf8df8","Type":"ContainerStarted","Data":"fb1028b4755566b85d715830c2ee0708fac025f310affaf0d13092f92979fc82"}
Oct 11 10:52:02.856981 master-0 kubenswrapper[4790]: I1011 10:52:02.855733 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" event={"ID":"21c0c53d-d3a0-45bf-84b3-930269d44522","Type":"ContainerStarted","Data":"1eca6c3bb22c908f052c0c1290370df1fbae05682e137ba7ac1007b2ab888b91"}
Oct 11 10:52:02.861721 master-0 kubenswrapper[4790]: I1011 10:52:02.861670 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" event={"ID":"eaac04d2-f217-437a-b0db-9cc23f0373d9","Type":"ContainerStarted","Data":"4098ac39db0ed59946f155cac604eb1993fdc5e9fb5898ed87c461be5152fe8b"}
Oct 11 10:52:03.871558 master-0 kubenswrapper[4790]: I1011 10:52:03.871453 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" event={"ID":"b135d684-0aee-445a-8c2b-a5c5a656b626","Type":"ContainerStarted","Data":"ea4d6846b635e4229cce4b63a087d15e792a98486ab324ff333bda09ee17ab5f"}
Oct 11 10:52:03.872325 master-0 kubenswrapper[4790]: I1011 10:52:03.871600 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg"
Oct 11 10:52:03.874185 master-0 kubenswrapper[4790]: I1011 10:52:03.874140 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" event={"ID":"5409d60b-bd64-4ef3-9531-b264971d7d85","Type":"ContainerStarted","Data":"b2cfb028b6aefa96d5a7c700e6836a3c4e642e9f52c7f1431cc462732eb89820"}
Oct 11 10:52:03.874469 master-0 kubenswrapper[4790]: I1011 10:52:03.874425 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d"
Oct 11 10:52:03.876955 master-0 kubenswrapper[4790]: I1011 10:52:03.876904 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" event={"ID":"19f1cdae-2bd6-42f2-aedc-7da343eeab3f","Type":"ContainerStarted","Data":"94abc5eed96ab9d66944fa23b51cfccbd88ca3cce68e2ac777cd669eaa9926dd"}
Oct 11 10:52:03.877278 master-0 kubenswrapper[4790]: I1011 10:52:03.877233 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2"
Oct 11 10:52:03.879179 master-0 kubenswrapper[4790]: I1011 10:52:03.879137 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" event={"ID":"7c307dd6-17af-4fc5-8b19-d6fd59f46d04","Type":"ContainerStarted","Data":"2a4ea12348174660f930fd9dc8f8c5cc45a22a2f4c64269e436c99f6ad787756"}
Oct 11 10:52:03.879219 master-0 kubenswrapper[4790]: I1011 10:52:03.879198 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2"
Oct 11 10:52:03.881199 master-0 kubenswrapper[4790]: I1011 10:52:03.881145 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" event={"ID":"6c42b87f-92a0-4250-bc3a-7b117dcf8df8","Type":"ContainerStarted","Data":"c69aa721dd66e9ebfa9acaa641972414223791d47232a85e5e781a80433ff900"}
Oct 11 10:52:03.881408 master-0 kubenswrapper[4790]: I1011 10:52:03.881364 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc"
Oct 11 10:52:03.882651 master-0 kubenswrapper[4790]: I1011 10:52:03.882619 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" event={"ID":"21c0c53d-d3a0-45bf-84b3-930269d44522","Type":"ContainerStarted","Data":"eba0f894775776451abe26f5f38d30403da2394c7afe40c350b6ecc7a0ad2a29"}
Oct 11 10:52:03.882754 master-0 kubenswrapper[4790]: I1011 10:52:03.882733 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf"
Oct 11 10:52:03.884395 master-0 kubenswrapper[4790]: I1011 10:52:03.884361 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" event={"ID":"eaac04d2-f217-437a-b0db-9cc23f0373d9","Type":"ContainerStarted","Data":"496616631f6edf2b7361233334dcbd10923471c94500daceacfb0a34e5f3f347"}
Oct 11 10:52:03.884604 master-0 kubenswrapper[4790]: I1011 10:52:03.884565 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms"
Oct 11 10:52:03.900159 master-0 kubenswrapper[4790]: I1011 10:52:03.900070 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" podStartSLOduration=6.036135212 podStartE2EDuration="7.900050647s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.541466688 +0000 UTC m=+794.095926980" lastFinishedPulling="2025-10-11 10:51:59.405382133 +0000 UTC m=+795.959842415" observedRunningTime="2025-10-11 10:52:00.850234023 +0000 UTC m=+797.404694325" watchObservedRunningTime="2025-10-11 10:52:03.900050647 +0000 UTC m=+800.454510969"
Oct 11 10:52:03.904312 master-0 kubenswrapper[4790]: I1011 10:52:03.904240 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" podStartSLOduration=2.585296461 podStartE2EDuration="6.9042285s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.169849698 +0000 UTC m=+794.724309990" lastFinishedPulling="2025-10-11 10:52:02.488781727 +0000 UTC m=+799.043242029" observedRunningTime="2025-10-11 10:52:03.897420946 +0000 UTC m=+800.451881268" watchObservedRunningTime="2025-10-11 10:52:03.9042285 +0000 UTC m=+800.458688822"
Oct 11 10:52:03.934733 master-0 kubenswrapper[4790]: I1011 10:52:03.931095 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" podStartSLOduration=3.642513883 podStartE2EDuration="7.931059818s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.197180649 +0000 UTC m=+794.751640941" lastFinishedPulling="2025-10-11 10:52:02.485726564 +0000 UTC m=+799.040186876" observedRunningTime="2025-10-11 10:52:03.922391083 +0000 UTC m=+800.476851395" watchObservedRunningTime="2025-10-11 10:52:03.931059818 +0000 UTC m=+800.485520120"
Oct 11 10:52:03.966741 master-0 kubenswrapper[4790]: I1011 10:52:03.966420 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" podStartSLOduration=6.966393647 podStartE2EDuration="6.966393647s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:52:03.962431709 +0000 UTC m=+800.516892011" watchObservedRunningTime="2025-10-11 10:52:03.966393647 +0000 UTC m=+800.520853939"
Oct 11 10:52:04.018588 master-0 kubenswrapper[4790]: I1011 10:52:04.018325 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" podStartSLOduration=2.576805312 podStartE2EDuration="7.018298004s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.044747876 +0000 UTC m=+794.599208168" lastFinishedPulling="2025-10-11 10:52:02.486240558 +0000 UTC m=+799.040700860" observedRunningTime="2025-10-11 10:52:03.987239052 +0000 UTC m=+800.541699374" watchObservedRunningTime="2025-10-11 10:52:04.018298004 +0000 UTC m=+800.572758296"
Oct 11 10:52:04.018849 master-0 kubenswrapper[4790]: I1011 10:52:04.018656 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" podStartSLOduration=3.884591137 podStartE2EDuration="8.018651383s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.354815164 +0000 UTC m=+794.909275456" lastFinishedPulling="2025-10-11 10:52:02.48887539 +0000 UTC m=+799.043335702" observedRunningTime="2025-10-11 10:52:04.016158196 +0000 UTC m=+800.570618518" watchObservedRunningTime="2025-10-11 10:52:04.018651383 +0000 UTC m=+800.573111675"
Oct 11 10:52:04.059576 master-0 kubenswrapper[4790]: I1011 10:52:04.059485 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" podStartSLOduration=3.181059477 podStartE2EDuration="7.05945875s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.608053981 +0000 UTC m=+795.162514313" lastFinishedPulling="2025-10-11 10:52:02.486453254 +0000 UTC m=+799.040913586" observedRunningTime="2025-10-11 10:52:04.054512856 +0000 UTC m=+800.608973148" watchObservedRunningTime="2025-10-11 10:52:04.05945875 +0000 UTC m=+800.613919032"
Oct 11 10:52:04.582068 master-2 kubenswrapper[4776]: I1011 10:52:04.582010 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"
event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"948fde857a9b381c7a814421c3a5d3187f91eff1b951b476ce627aeba5ba177a"} Oct 11 10:52:04.583922 master-2 kubenswrapper[4776]: I1011 10:52:04.583896 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"1d671761c34b9ecaf265bfd7d4b1f3aabe17afed30964db486fa8c1a554ed3ba"} Oct 11 10:52:04.585684 master-2 kubenswrapper[4776]: I1011 10:52:04.585624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"96744c11337202cf7be38e2fb10d292699ebb27db69b3217414380ce0b790a29"} Oct 11 10:52:04.587124 master-2 kubenswrapper[4776]: I1011 10:52:04.587090 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"eb90e6b27fb7a6d629a88c3113680bf9faa957d756a3690989b4bf180b9a30b3"} Oct 11 10:52:04.588379 master-2 kubenswrapper[4776]: I1011 10:52:04.588346 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"543f42e3e61d9c62a18c7dcbbe3c9db42b1cf7cd340a01f4b44ecf8b32e9b804"} Oct 11 10:52:04.590128 master-2 kubenswrapper[4776]: I1011 10:52:04.590094 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"ae4559afb3f3421f36a8afff29e46fdbdf54e58d5c26a19c2af812f7f8b82878"} Oct 11 10:52:04.591819 master-2 
kubenswrapper[4776]: I1011 10:52:04.591781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"d471fe633dbfb8b86b2c7f1efd6611b02ce4213b4a2e19634d2465a91e10a94a"} Oct 11 10:52:04.593701 master-2 kubenswrapper[4776]: I1011 10:52:04.593652 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"d6b8c3b8a3bbb1a280c571e349c76452d0cda627f0295661e0a951c3cc71596c"} Oct 11 10:52:05.367640 master-1 kubenswrapper[4771]: I1011 10:52:05.367579 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" event={"ID":"a0b63855-5ade-4060-9016-a2009f5e5b45","Type":"ContainerStarted","Data":"61b1d86db60d1ef08fae745bff4e2990d6e0710f297a7cbcad6ec61d4e26a9f8"} Oct 11 10:52:05.369153 master-1 kubenswrapper[4771]: I1011 10:52:05.369120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" event={"ID":"84f24be5-1586-471c-b099-c6c81ef56674","Type":"ContainerStarted","Data":"4bbd4bd58f0a4e2f72e37fe8d911d861215c2e73d1ac55e10c3ef1408f3aef45"} Oct 11 10:52:05.370493 master-1 kubenswrapper[4771]: I1011 10:52:05.370449 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" event={"ID":"5012c546-4311-4415-bb9a-9074edfc09e2","Type":"ContainerStarted","Data":"ee5b794b3c7eb6a1579647ac5e38c47fb4a4c7cdbae1bb87c486b4cc4f5b9223"} Oct 11 10:52:05.372507 master-1 kubenswrapper[4771]: I1011 10:52:05.372469 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp" 
event={"ID":"dd610cd6-c61f-4cc3-9e63-5cede4c8393b","Type":"ContainerStarted","Data":"33f5861033987a587a008dbb0dfec7e74f5f027174580864f8bffd1335bba07d"} Oct 11 10:52:05.374960 master-1 kubenswrapper[4771]: I1011 10:52:05.374929 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" event={"ID":"9f700fad-87cd-467c-9b8a-99a82dd72d9b","Type":"ContainerStarted","Data":"8574216a044b5ecae7dc3816f4b0af0ba4a401027097aa3dcccd6fd0a4b60348"} Oct 11 10:52:05.377909 master-1 kubenswrapper[4771]: I1011 10:52:05.377868 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" event={"ID":"87e9a396-b599-4e77-ab0d-24602bde55eb","Type":"ContainerStarted","Data":"5897aaab4a679e641ec47cdeb063815b7c319c78872e68173797b0678fcc29c2"} Oct 11 10:52:05.379983 master-1 kubenswrapper[4771]: I1011 10:52:05.379912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" event={"ID":"608a645c-104f-4fab-b1c6-cbcae70ca0f4","Type":"ContainerStarted","Data":"202127bd84ada1b6018b4e1980a9295c708b4e02d5408de2dd43b3f8367a1267"} Oct 11 10:52:07.084376 master-0 kubenswrapper[4790]: I1011 10:52:07.084275 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm" Oct 11 10:52:07.114821 master-0 kubenswrapper[4790]: I1011 10:52:07.114482 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" podStartSLOduration=5.797994332 podStartE2EDuration="10.114444574s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.169795016 +0000 UTC m=+794.724255308" lastFinishedPulling="2025-10-11 10:52:02.486245218 +0000 UTC m=+799.040705550" observedRunningTime="2025-10-11 
10:52:04.081502718 +0000 UTC m=+800.635963010" watchObservedRunningTime="2025-10-11 10:52:07.114444574 +0000 UTC m=+803.668904906" Oct 11 10:52:07.515773 master-0 kubenswrapper[4790]: I1011 10:52:07.515559 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d" Oct 11 10:52:07.577199 master-0 kubenswrapper[4790]: I1011 10:52:07.576257 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2" Oct 11 10:52:07.590866 master-0 kubenswrapper[4790]: I1011 10:52:07.590519 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg" Oct 11 10:52:07.646558 master-2 kubenswrapper[4776]: I1011 10:52:07.646014 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"9ce7f1114634f4d6c97c8fbe69f683adaae4793216aa18c8161422d64ed02b50"} Oct 11 10:52:07.648038 master-0 kubenswrapper[4790]: I1011 10:52:07.647941 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc" Oct 11 10:52:07.650459 master-2 kubenswrapper[4776]: I1011 10:52:07.650397 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:52:07.651720 master-2 kubenswrapper[4776]: I1011 10:52:07.651625 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"bcc08fdcbef1a487f0fed1e55c87510747ed53ed96624ef310d4afdd03aea0b3"} Oct 11 
10:52:07.652343 master-2 kubenswrapper[4776]: I1011 10:52:07.652301 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:52:07.659889 master-2 kubenswrapper[4776]: I1011 10:52:07.659825 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"bde459dfdf435c9a56a96c3f0be185ef523f2170e67a79271541516a08168379"} Oct 11 10:52:07.660358 master-2 kubenswrapper[4776]: I1011 10:52:07.660051 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:52:07.673695 master-2 kubenswrapper[4776]: I1011 10:52:07.672130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"89b5e8fb20c4760a8d4090642dbe825dd95de845610bbbb0ad21679c9293f405"} Oct 11 10:52:07.673695 master-2 kubenswrapper[4776]: I1011 10:52:07.672850 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:52:07.677700 master-2 kubenswrapper[4776]: I1011 10:52:07.677007 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" podStartSLOduration=1.403423979 podStartE2EDuration="10.676992456s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.115036845 +0000 UTC m=+1552.899463554" lastFinishedPulling="2025-10-11 10:52:07.388605312 +0000 UTC m=+1562.173032031" observedRunningTime="2025-10-11 10:52:07.67308711 +0000 UTC m=+1562.457513819" watchObservedRunningTime="2025-10-11 
10:52:07.676992456 +0000 UTC m=+1562.461419165" Oct 11 10:52:07.679775 master-2 kubenswrapper[4776]: I1011 10:52:07.678013 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"478410317ac13077a7c605a6091139aec4ff51e943162d71f7406ce54d4b9109"} Oct 11 10:52:07.679775 master-2 kubenswrapper[4776]: I1011 10:52:07.678222 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:52:07.684101 master-2 kubenswrapper[4776]: I1011 10:52:07.682746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"86ea8366d381d19c1cb4cd566560ff22017f428230def887949ac97e54c275ee"} Oct 11 10:52:07.684101 master-2 kubenswrapper[4776]: I1011 10:52:07.683958 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:52:07.700709 master-2 kubenswrapper[4776]: I1011 10:52:07.700385 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"908dc1309168d6400346bf478385da8d5671667f824ce8bf4f01027f08088509"} Oct 11 10:52:07.701702 master-2 kubenswrapper[4776]: I1011 10:52:07.701196 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:52:07.741519 master-2 kubenswrapper[4776]: I1011 10:52:07.741457 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" podStartSLOduration=2.9663799539999998 podStartE2EDuration="11.741440003s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.878575649 +0000 UTC m=+1552.663002358" lastFinishedPulling="2025-10-11 10:52:06.653635698 +0000 UTC m=+1561.438062407" observedRunningTime="2025-10-11 10:52:07.719478137 +0000 UTC m=+1562.503904856" watchObservedRunningTime="2025-10-11 10:52:07.741440003 +0000 UTC m=+1562.525866712" Oct 11 10:52:07.743311 master-2 kubenswrapper[4776]: I1011 10:52:07.743279 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" podStartSLOduration=2.40693587 podStartE2EDuration="10.743273522s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.322032704 +0000 UTC m=+1553.106459413" lastFinishedPulling="2025-10-11 10:52:06.658370356 +0000 UTC m=+1561.442797065" observedRunningTime="2025-10-11 10:52:07.741077562 +0000 UTC m=+1562.525504271" watchObservedRunningTime="2025-10-11 10:52:07.743273522 +0000 UTC m=+1562.527700221" Oct 11 10:52:07.770133 master-2 kubenswrapper[4776]: I1011 10:52:07.770071 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" podStartSLOduration=1.8538699699999999 podStartE2EDuration="11.770056518s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.575385194 +0000 UTC m=+1552.359811903" lastFinishedPulling="2025-10-11 10:52:07.491571742 +0000 UTC m=+1562.275998451" observedRunningTime="2025-10-11 10:52:07.768943657 +0000 UTC m=+1562.553370366" watchObservedRunningTime="2025-10-11 10:52:07.770056518 +0000 UTC m=+1562.554483227" Oct 11 10:52:07.793636 master-2 kubenswrapper[4776]: I1011 10:52:07.793551 4776 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" podStartSLOduration=2.5345846979999997 podStartE2EDuration="10.793532603s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.444275857 +0000 UTC m=+1553.228702566" lastFinishedPulling="2025-10-11 10:52:06.703223752 +0000 UTC m=+1561.487650471" observedRunningTime="2025-10-11 10:52:07.790045519 +0000 UTC m=+1562.574472228" watchObservedRunningTime="2025-10-11 10:52:07.793532603 +0000 UTC m=+1562.577959322" Oct 11 10:52:07.856689 master-0 kubenswrapper[4790]: I1011 10:52:07.856626 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2" Oct 11 10:52:07.996067 master-2 kubenswrapper[4776]: I1011 10:52:07.995983 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" podStartSLOduration=2.517443329 podStartE2EDuration="11.995968198s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.973245274 +0000 UTC m=+1552.757671983" lastFinishedPulling="2025-10-11 10:52:07.451770143 +0000 UTC m=+1562.236196852" observedRunningTime="2025-10-11 10:52:07.990975054 +0000 UTC m=+1562.775401763" watchObservedRunningTime="2025-10-11 10:52:07.995968198 +0000 UTC m=+1562.780394907" Oct 11 10:52:07.997957 master-2 kubenswrapper[4776]: I1011 10:52:07.997915 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" podStartSLOduration=2.4103041960000002 podStartE2EDuration="11.997906851s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.768651121 +0000 UTC m=+1552.553077830" lastFinishedPulling="2025-10-11 10:52:07.356253776 +0000 UTC m=+1562.140680485" 
observedRunningTime="2025-10-11 10:52:07.817288978 +0000 UTC m=+1562.601715687" watchObservedRunningTime="2025-10-11 10:52:07.997906851 +0000 UTC m=+1562.782333560" Oct 11 10:52:08.139667 master-0 kubenswrapper[4790]: I1011 10:52:08.139443 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf" Oct 11 10:52:08.430836 master-1 kubenswrapper[4771]: I1011 10:52:08.430557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" event={"ID":"a0b63855-5ade-4060-9016-a2009f5e5b45","Type":"ContainerStarted","Data":"39ff2940e88ad20d8e65ad517cceba3c07a107366731bc736d90f05ef049f0a2"} Oct 11 10:52:08.430836 master-1 kubenswrapper[4771]: I1011 10:52:08.430760 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:52:08.435103 master-1 kubenswrapper[4771]: I1011 10:52:08.435038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" event={"ID":"608a645c-104f-4fab-b1c6-cbcae70ca0f4","Type":"ContainerStarted","Data":"54fb253aef1e7a77788b473b56432d3f6e38becd9b818ea0065dbf1c3ec23750"} Oct 11 10:52:08.435291 master-1 kubenswrapper[4771]: I1011 10:52:08.435145 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 10:52:08.447666 master-1 kubenswrapper[4771]: I1011 10:52:08.447633 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:52:08.447846 master-1 kubenswrapper[4771]: I1011 10:52:08.447830 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:52:08.447959 master-1 kubenswrapper[4771]: I1011 10:52:08.447945 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:52:08.448069 master-1 kubenswrapper[4771]: I1011 10:52:08.448055 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:52:08.448173 master-1 kubenswrapper[4771]: I1011 10:52:08.448150 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" event={"ID":"84f24be5-1586-471c-b099-c6c81ef56674","Type":"ContainerStarted","Data":"679d673a381d791739ef1d791993125259f18a7be676cee2655b9a9298478edc"} Oct 11 10:52:08.448260 master-1 kubenswrapper[4771]: I1011 10:52:08.448242 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" event={"ID":"5012c546-4311-4415-bb9a-9074edfc09e2","Type":"ContainerStarted","Data":"2582341f8c26803c35a24f8876b1925d84cf18e54e5a19da5835825ff331e8d8"} Oct 11 10:52:08.448378 master-1 kubenswrapper[4771]: I1011 10:52:08.448338 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" event={"ID":"9f700fad-87cd-467c-9b8a-99a82dd72d9b","Type":"ContainerStarted","Data":"14fb7255e1575650c0260cc1228253177bc2593ddaa3b312e2c4a4712d460003"} Oct 11 10:52:08.448499 master-1 kubenswrapper[4771]: I1011 10:52:08.448478 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" event={"ID":"87e9a396-b599-4e77-ab0d-24602bde55eb","Type":"ContainerStarted","Data":"0bb362d02625f8307ac99882c21bf4c6cd85b865a50f7365d799ea9a8117753f"} Oct 11 10:52:08.711800 master-2 
kubenswrapper[4776]: I1011 10:52:08.711369 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"48d8fd0f9b192d7e8cb770d76ad9e527ef68340c54c99515e2b41dea8b8c9ea6"} Oct 11 10:52:08.714421 master-2 kubenswrapper[4776]: I1011 10:52:08.713859 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:52:08.715372 master-2 kubenswrapper[4776]: I1011 10:52:08.715316 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:52:08.715846 master-2 kubenswrapper[4776]: I1011 10:52:08.715721 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:52:08.716480 master-2 kubenswrapper[4776]: I1011 10:52:08.716099 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:52:08.717167 master-2 kubenswrapper[4776]: I1011 10:52:08.717080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:52:08.717486 master-2 kubenswrapper[4776]: I1011 10:52:08.717449 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:52:08.719010 master-2 kubenswrapper[4776]: I1011 10:52:08.718915 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:52:08.959959 master-1 kubenswrapper[4771]: I1011 10:52:08.959858 
4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp" podStartSLOduration=6.045027368 podStartE2EDuration="11.959829506s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.573883421 +0000 UTC m=+1550.548109902" lastFinishedPulling="2025-10-11 10:52:04.488685599 +0000 UTC m=+1556.462912040" observedRunningTime="2025-10-11 10:52:05.398984267 +0000 UTC m=+1557.373210728" watchObservedRunningTime="2025-10-11 10:52:08.959829506 +0000 UTC m=+1560.934055987" Oct 11 10:52:08.960955 master-1 kubenswrapper[4771]: I1011 10:52:08.960885 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" podStartSLOduration=2.906456855 podStartE2EDuration="11.960872077s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.192769278 +0000 UTC m=+1550.166995719" lastFinishedPulling="2025-10-11 10:52:07.2471845 +0000 UTC m=+1559.221410941" observedRunningTime="2025-10-11 10:52:08.804564828 +0000 UTC m=+1560.778791339" watchObservedRunningTime="2025-10-11 10:52:08.960872077 +0000 UTC m=+1560.935098548" Oct 11 10:52:09.070115 master-1 kubenswrapper[4771]: I1011 10:52:09.069304 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" podStartSLOduration=3.729008553 podStartE2EDuration="13.069262177s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.825598639 +0000 UTC m=+1549.799825080" lastFinishedPulling="2025-10-11 10:52:07.165852273 +0000 UTC m=+1559.140078704" observedRunningTime="2025-10-11 10:52:09.055750116 +0000 UTC m=+1561.029976637" watchObservedRunningTime="2025-10-11 10:52:09.069262177 +0000 UTC m=+1561.043488678" Oct 11 10:52:09.073244 master-2 kubenswrapper[4776]: I1011 
10:52:09.073076 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" podStartSLOduration=2.641837474 podStartE2EDuration="12.073048782s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.140385072 +0000 UTC m=+1552.924811791" lastFinishedPulling="2025-10-11 10:52:07.57159639 +0000 UTC m=+1562.356023099" observedRunningTime="2025-10-11 10:52:09.033419088 +0000 UTC m=+1563.817845837" watchObservedRunningTime="2025-10-11 10:52:09.073048782 +0000 UTC m=+1563.857475531" Oct 11 10:52:09.084229 master-1 kubenswrapper[4771]: I1011 10:52:09.084128 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" podStartSLOduration=3.372610996 podStartE2EDuration="13.084104228s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.746463956 +0000 UTC m=+1549.720690397" lastFinishedPulling="2025-10-11 10:52:07.457957188 +0000 UTC m=+1559.432183629" observedRunningTime="2025-10-11 10:52:09.082576473 +0000 UTC m=+1561.056802934" watchObservedRunningTime="2025-10-11 10:52:09.084104228 +0000 UTC m=+1561.058330679" Oct 11 10:52:09.150171 master-1 kubenswrapper[4771]: I1011 10:52:09.150084 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" podStartSLOduration=3.051049794 podStartE2EDuration="12.150063659s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.246645659 +0000 UTC m=+1550.220872120" lastFinishedPulling="2025-10-11 10:52:07.345659544 +0000 UTC m=+1559.319885985" observedRunningTime="2025-10-11 10:52:09.147671889 +0000 UTC m=+1561.121898330" watchObservedRunningTime="2025-10-11 10:52:09.150063659 +0000 UTC m=+1561.124290110" Oct 11 10:52:09.190474 master-1 
kubenswrapper[4771]: I1011 10:52:09.188517 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" podStartSLOduration=3.00947081 podStartE2EDuration="12.188482032s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.995629536 +0000 UTC m=+1549.969855977" lastFinishedPulling="2025-10-11 10:52:07.174640748 +0000 UTC m=+1559.148867199" observedRunningTime="2025-10-11 10:52:09.181775508 +0000 UTC m=+1561.156002029" watchObservedRunningTime="2025-10-11 10:52:09.188482032 +0000 UTC m=+1561.162708523" Oct 11 10:52:09.214385 master-1 kubenswrapper[4771]: I1011 10:52:09.214129 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" podStartSLOduration=2.803604225 podStartE2EDuration="12.214109485s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.881167079 +0000 UTC m=+1549.855393520" lastFinishedPulling="2025-10-11 10:52:07.291672339 +0000 UTC m=+1559.265898780" observedRunningTime="2025-10-11 10:52:09.212978652 +0000 UTC m=+1561.187205203" watchObservedRunningTime="2025-10-11 10:52:09.214109485 +0000 UTC m=+1561.188335926" Oct 11 10:52:09.458925 master-1 kubenswrapper[4771]: I1011 10:52:09.458816 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p" Oct 11 10:52:09.459747 master-1 kubenswrapper[4771]: I1011 10:52:09.459277 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7" Oct 11 10:52:09.459747 master-1 kubenswrapper[4771]: I1011 10:52:09.459388 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76" Oct 11 
10:52:09.460728 master-1 kubenswrapper[4771]: I1011 10:52:09.460693 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk" Oct 11 10:52:09.462839 master-1 kubenswrapper[4771]: I1011 10:52:09.462777 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m" Oct 11 10:52:09.462911 master-1 kubenswrapper[4771]: I1011 10:52:09.462883 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88" Oct 11 10:52:09.518065 master-0 kubenswrapper[4790]: I1011 10:52:09.517883 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms" Oct 11 10:52:09.754633 master-2 kubenswrapper[4776]: I1011 10:52:09.750395 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:52:09.772710 master-2 kubenswrapper[4776]: I1011 10:52:09.763025 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:52:48.097977 master-1 kubenswrapper[4771]: I1011 10:52:48.097869 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:52:48.099413 master-1 kubenswrapper[4771]: I1011 10:52:48.099378 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.102704 master-1 kubenswrapper[4771]: I1011 10:52:48.102658 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:52:48.102911 master-1 kubenswrapper[4771]: I1011 10:52:48.102659 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 11 10:52:48.102911 master-1 kubenswrapper[4771]: I1011 10:52:48.102874 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 11 10:52:48.134662 master-1 kubenswrapper[4771]: I1011 10:52:48.134603 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:52:48.149703 master-2 kubenswrapper[4776]: I1011 10:52:48.149226 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.157700 master-2 kubenswrapper[4776]: I1011 10:52:48.150526 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.161695 master-2 kubenswrapper[4776]: I1011 10:52:48.159720 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170169 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170399 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170422 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 11 10:52:48.215523 master-1 kubenswrapper[4771]: I1011 10:52:48.215477 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.215760 master-1 kubenswrapper[4771]: I1011 10:52:48.215538 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgxh\" (UniqueName: \"kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.221108 master-0 kubenswrapper[4790]: I1011 10:52:48.221028 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"] Oct 11 10:52:48.224604 master-0 kubenswrapper[4790]: I1011 10:52:48.224555 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.228527 master-0 kubenswrapper[4790]: I1011 10:52:48.228012 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:52:48.228527 master-0 kubenswrapper[4790]: I1011 10:52:48.228278 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:52:48.228527 master-0 kubenswrapper[4790]: I1011 10:52:48.228430 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 11 10:52:48.228887 master-0 kubenswrapper[4790]: I1011 10:52:48.228819 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 11 10:52:48.245136 master-0 kubenswrapper[4790]: I1011 10:52:48.245061 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"] Oct 11 10:52:48.252707 master-2 kubenswrapper[4776]: I1011 10:52:48.252507 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.252707 master-2 kubenswrapper[4776]: I1011 10:52:48.252594 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.317408 master-1 kubenswrapper[4771]: I1011 10:52:48.317304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.317408 master-1 kubenswrapper[4771]: I1011 10:52:48.317379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgxh\" (UniqueName: \"kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.318424 master-1 kubenswrapper[4771]: I1011 10:52:48.318389 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.337906 master-1 kubenswrapper[4771]: I1011 10:52:48.337820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgxh\" (UniqueName: \"kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh\") pod \"dnsmasq-dns-5fd846fcd9-m5nng\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.364692 master-2 kubenswrapper[4776]: I1011 10:52:48.364597 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.364921 master-2 kubenswrapper[4776]: I1011 10:52:48.364823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.365882 master-2 kubenswrapper[4776]: I1011 10:52:48.365846 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.369819 master-0 kubenswrapper[4790]: I1011 10:52:48.367931 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.370153 master-0 kubenswrapper[4790]: I1011 10:52:48.370050 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.370153 master-0 kubenswrapper[4790]: I1011 10:52:48.370104 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wx2k\" (UniqueName: \"kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.383091 master-2 kubenswrapper[4776]: I1011 10:52:48.383038 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpgt\" 
(UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.436643 master-1 kubenswrapper[4771]: I1011 10:52:48.436482 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:52:48.471935 master-0 kubenswrapper[4790]: I1011 10:52:48.471852 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.471935 master-0 kubenswrapper[4790]: I1011 10:52:48.471922 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.471935 master-0 kubenswrapper[4790]: I1011 10:52:48.471959 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wx2k\" (UniqueName: \"kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.472965 master-0 kubenswrapper[4790]: I1011 10:52:48.472908 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 
11 10:52:48.473470 master-0 kubenswrapper[4790]: I1011 10:52:48.473421 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.498448 master-0 kubenswrapper[4790]: I1011 10:52:48.498280 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wx2k\" (UniqueName: \"kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k\") pod \"dnsmasq-dns-6944757b7f-plhvq\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.538831 master-0 kubenswrapper[4790]: I1011 10:52:48.538337 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:52:48.578765 master-2 kubenswrapper[4776]: I1011 10:52:48.578656 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.915447 master-1 kubenswrapper[4771]: I1011 10:52:48.915175 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:52:48.918136 master-1 kubenswrapper[4771]: W1011 10:52:48.918047 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be8543e_04fe_4e9d_91c9_8219b843b991.slice/crio-05a1b34b2d5123ecb163a3c60cc5758f73fbbac730006935705b7b57187290ed WatchSource:0}: Error finding container 05a1b34b2d5123ecb163a3c60cc5758f73fbbac730006935705b7b57187290ed: Status 404 returned error can't find the container with id 05a1b34b2d5123ecb163a3c60cc5758f73fbbac730006935705b7b57187290ed Oct 11 10:52:48.980301 master-2 kubenswrapper[4776]: I1011 10:52:48.980260 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.984261 master-2 kubenswrapper[4776]: W1011 10:52:48.984217 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977f2c8c_eb07_4fb7_ae7e_6d0688c6081f.slice/crio-bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e WatchSource:0}: Error finding container bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e: Status 404 returned error can't find the container with id bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e Oct 11 10:52:49.007346 master-0 kubenswrapper[4790]: I1011 10:52:49.007165 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"] Oct 11 10:52:49.019975 master-0 kubenswrapper[4790]: I1011 10:52:49.019896 4790 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:52:49.082074 master-2 kubenswrapper[4776]: I1011 10:52:49.081981 4776 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerStarted","Data":"bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e"} Oct 11 10:52:49.273596 master-0 kubenswrapper[4790]: I1011 10:52:49.273372 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" event={"ID":"f97fbf89-0d03-4ed8-a0d2-4f796e705e20","Type":"ContainerStarted","Data":"a4a9276558748dc6921cea20229a43f4acf57679858b58aec74901d23d4a131c"} Oct 11 10:52:49.778018 master-1 kubenswrapper[4771]: I1011 10:52:49.777930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" event={"ID":"7be8543e-04fe-4e9d-91c9-8219b843b991","Type":"ContainerStarted","Data":"05a1b34b2d5123ecb163a3c60cc5758f73fbbac730006935705b7b57187290ed"} Oct 11 10:52:50.530557 master-1 kubenswrapper[4771]: I1011 10:52:50.530507 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:52:50.569171 master-1 kubenswrapper[4771]: I1011 10:52:50.569085 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"] Oct 11 10:52:50.571448 master-1 kubenswrapper[4771]: I1011 10:52:50.571114 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.575620 master-1 kubenswrapper[4771]: I1011 10:52:50.575572 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:52:50.582820 master-1 kubenswrapper[4771]: I1011 10:52:50.582321 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"] Oct 11 10:52:50.763938 master-1 kubenswrapper[4771]: I1011 10:52:50.763670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r45c\" (UniqueName: \"kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.763938 master-1 kubenswrapper[4771]: I1011 10:52:50.763814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.763938 master-1 kubenswrapper[4771]: I1011 10:52:50.763858 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.865171 master-1 kubenswrapper[4771]: I1011 10:52:50.865083 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " 
pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.866777 master-1 kubenswrapper[4771]: I1011 10:52:50.865190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r45c\" (UniqueName: \"kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.866777 master-1 kubenswrapper[4771]: I1011 10:52:50.865242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.866777 master-1 kubenswrapper[4771]: I1011 10:52:50.866151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.868524 master-1 kubenswrapper[4771]: I1011 10:52:50.868472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:50.898524 master-1 kubenswrapper[4771]: I1011 10:52:50.898418 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r45c\" (UniqueName: \"kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c\") pod \"dnsmasq-dns-5b4bcc4d85-hc9b5\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") " 
pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:51.075149 master-2 kubenswrapper[4776]: I1011 10:52:51.075039 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:51.135896 master-2 kubenswrapper[4776]: I1011 10:52:51.135590 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:52:51.136824 master-2 kubenswrapper[4776]: I1011 10:52:51.136794 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.139578 master-2 kubenswrapper[4776]: I1011 10:52:51.139550 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:52:51.141796 master-2 kubenswrapper[4776]: I1011 10:52:51.141228 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:52:51.196991 master-1 kubenswrapper[4771]: I1011 10:52:51.196854 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:52:51.203777 master-2 kubenswrapper[4776]: I1011 10:52:51.203729 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.203985 master-2 kubenswrapper[4776]: I1011 10:52:51.203944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.204155 master-2 kubenswrapper[4776]: I1011 10:52:51.204133 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306757 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306863 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod 
\"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306939 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.310426 master-2 kubenswrapper[4776]: I1011 10:52:51.308947 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.310426 master-2 kubenswrapper[4776]: I1011 10:52:51.309361 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.344756 master-2 kubenswrapper[4776]: I1011 10:52:51.338056 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.457255 master-2 kubenswrapper[4776]: I1011 10:52:51.457123 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:54.858009 master-1 kubenswrapper[4771]: I1011 10:52:54.855781 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 10:52:54.858009 master-1 kubenswrapper[4771]: I1011 10:52:54.857092 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 10:52:54.860809 master-1 kubenswrapper[4771]: I1011 10:52:54.860759 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 11 10:52:54.861112 master-1 kubenswrapper[4771]: I1011 10:52:54.861085 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 11 10:52:54.861456 master-1 kubenswrapper[4771]: I1011 10:52:54.861427 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 11 10:52:54.863005 master-1 kubenswrapper[4771]: I1011 10:52:54.862870 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 11 10:52:54.863177 master-1 kubenswrapper[4771]: I1011 10:52:54.863100 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 11 10:52:54.863414 master-1 kubenswrapper[4771]: I1011 10:52:54.862647 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 11 10:52:54.879616 master-1 kubenswrapper[4771]: I1011 10:52:54.879526 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 10:52:55.026731 master-1 kubenswrapper[4771]: I1011 10:52:55.026675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.026958 master-1 kubenswrapper[4771]: I1011 10:52:55.026724 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027006 master-1 kubenswrapper[4771]: I1011 10:52:55.026975 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027047 master-1 kubenswrapper[4771]: I1011 10:52:55.027011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fe99fba-e358-4203-a516-04b9ae19d789-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027047 master-1 kubenswrapper[4771]: I1011 10:52:55.027034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027107 master-1 kubenswrapper[4771]: I1011 10:52:55.027062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfr5\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-kube-api-access-hvfr5\") pod 
\"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027107 master-1 kubenswrapper[4771]: I1011 10:52:55.027085 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027166 master-1 kubenswrapper[4771]: I1011 10:52:55.027116 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-config-data\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027166 master-1 kubenswrapper[4771]: I1011 10:52:55.027135 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027166 master-1 kubenswrapper[4771]: I1011 10:52:55.027156 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fe99fba-e358-4203-a516-04b9ae19d789-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.027263 master-1 kubenswrapper[4771]: I1011 10:52:55.027175 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f889608-8928-49b4-887e-c3f52b41fe53\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c0542fa6-eb99-4081-a1c8-ffbcb1c5f846\") pod 
\"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129507 master-1 kubenswrapper[4771]: I1011 10:52:55.129273 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129507 master-1 kubenswrapper[4771]: I1011 10:52:55.129474 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fe99fba-e358-4203-a516-04b9ae19d789-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129556 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfr5\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-kube-api-access-hvfr5\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") 
" pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129636 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-config-data\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129663 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129695 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fe99fba-e358-4203-a516-04b9ae19d789-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129723 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f889608-8928-49b4-887e-c3f52b41fe53\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c0542fa6-eb99-4081-a1c8-ffbcb1c5f846\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 10:52:55.129774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.129873 master-1 kubenswrapper[4771]: I1011 
10:52:55.129804 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.131922 master-1 kubenswrapper[4771]: I1011 10:52:55.130957 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.132513 master-1 kubenswrapper[4771]: I1011 10:52:55.132019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-config-data\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.132513 master-1 kubenswrapper[4771]: I1011 10:52:55.132254 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.134240 master-1 kubenswrapper[4771]: I1011 10:52:55.133225 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.134240 master-1 kubenswrapper[4771]: I1011 10:52:55.134193 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/6fe99fba-e358-4203-a516-04b9ae19d789-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.136438 master-1 kubenswrapper[4771]: I1011 10:52:55.136398 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.136614 master-1 kubenswrapper[4771]: I1011 10:52:55.136582 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:52:55.136669 master-1 kubenswrapper[4771]: I1011 10:52:55.136627 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f889608-8928-49b4-887e-c3f52b41fe53\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c0542fa6-eb99-4081-a1c8-ffbcb1c5f846\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e4a8b82c9fe18ef491f81f118a31075ad74b50c23080fed1324dd231fdb36208/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.139520 master-1 kubenswrapper[4771]: I1011 10:52:55.139478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.139616 master-1 kubenswrapper[4771]: I1011 10:52:55.139490 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fe99fba-e358-4203-a516-04b9ae19d789-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.146460 master-1 kubenswrapper[4771]: I1011 10:52:55.145381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fe99fba-e358-4203-a516-04b9ae19d789-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.155321 master-1 kubenswrapper[4771]: I1011 10:52:55.155232 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfr5\" (UniqueName: \"kubernetes.io/projected/6fe99fba-e358-4203-a516-04b9ae19d789-kube-api-access-hvfr5\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:55.163642 master-1 kubenswrapper[4771]: I1011 10:52:55.163567 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 11 10:52:55.165432 master-1 kubenswrapper[4771]: I1011 10:52:55.165348 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 11 10:52:55.169922 master-1 kubenswrapper[4771]: I1011 10:52:55.169671 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 11 10:52:55.171428 master-1 kubenswrapper[4771]: I1011 10:52:55.171067 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 11 10:52:55.181598 master-1 kubenswrapper[4771]: I1011 10:52:55.181535 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 11 10:52:55.195218 master-1 kubenswrapper[4771]: I1011 10:52:55.195067 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 11 10:52:55.231247 master-1 kubenswrapper[4771]: I1011 10:52:55.231178 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kolla-config\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.235427 master-1 kubenswrapper[4771]: I1011 10:52:55.235388 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.235647 master-1 kubenswrapper[4771]: I1011 10:52:55.235621 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.235840 master-1 kubenswrapper[4771]: I1011 10:52:55.235820 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhgr\" (UniqueName: \"kubernetes.io/projected/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kube-api-access-xdhgr\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.235989 master-1 kubenswrapper[4771]: I1011 10:52:55.235969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-config-data\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.337897 master-1 kubenswrapper[4771]: I1011 10:52:55.337811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kolla-config\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.338218 master-1 kubenswrapper[4771]: I1011 10:52:55.337964 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.338218 master-1 kubenswrapper[4771]: I1011 10:52:55.338004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.338218 master-1 kubenswrapper[4771]: I1011 10:52:55.338055 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhgr\" 
(UniqueName: \"kubernetes.io/projected/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kube-api-access-xdhgr\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.338218 master-1 kubenswrapper[4771]: I1011 10:52:55.338104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-config-data\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.339389 master-1 kubenswrapper[4771]: I1011 10:52:55.339333 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kolla-config\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.339511 master-1 kubenswrapper[4771]: I1011 10:52:55.339474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3843977f-b9b9-4f98-9205-5dbe3113fa5e-config-data\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.341203 master-1 kubenswrapper[4771]: I1011 10:52:55.341156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.342850 master-1 kubenswrapper[4771]: I1011 10:52:55.342812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3843977f-b9b9-4f98-9205-5dbe3113fa5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" 
Oct 11 10:52:55.363290 master-1 kubenswrapper[4771]: I1011 10:52:55.363237 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhgr\" (UniqueName: \"kubernetes.io/projected/3843977f-b9b9-4f98-9205-5dbe3113fa5e-kube-api-access-xdhgr\") pod \"memcached-0\" (UID: \"3843977f-b9b9-4f98-9205-5dbe3113fa5e\") " pod="openstack/memcached-0" Oct 11 10:52:55.526001 master-1 kubenswrapper[4771]: I1011 10:52:55.525795 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 11 10:52:56.746420 master-1 kubenswrapper[4771]: I1011 10:52:56.746324 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f889608-8928-49b4-887e-c3f52b41fe53\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c0542fa6-eb99-4081-a1c8-ffbcb1c5f846\") pod \"rabbitmq-server-0\" (UID: \"6fe99fba-e358-4203-a516-04b9ae19d789\") " pod="openstack/rabbitmq-server-0" Oct 11 10:52:57.008117 master-1 kubenswrapper[4771]: I1011 10:52:57.007889 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 10:52:57.099702 master-2 kubenswrapper[4776]: I1011 10:52:57.099588 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:52:57.101255 master-2 kubenswrapper[4776]: I1011 10:52:57.100774 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.110738 master-2 kubenswrapper[4776]: I1011 10:52:57.110688 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:52:57.206862 master-2 kubenswrapper[4776]: I1011 10:52:57.206810 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.310284 master-2 kubenswrapper[4776]: I1011 10:52:57.310116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.340932 master-2 kubenswrapper[4776]: I1011 10:52:57.340707 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.460560 master-2 kubenswrapper[4776]: I1011 10:52:57.460495 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.862754 master-0 kubenswrapper[4790]: I1011 10:52:57.862652 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Oct 11 10:52:57.864661 master-0 kubenswrapper[4790]: I1011 10:52:57.864627 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.880385 master-0 kubenswrapper[4790]: I1011 10:52:57.880329 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 11 10:52:57.880732 master-0 kubenswrapper[4790]: I1011 10:52:57.880400 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 11 10:52:57.880732 master-0 kubenswrapper[4790]: I1011 10:52:57.880557 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 11 10:52:57.880732 master-0 kubenswrapper[4790]: I1011 10:52:57.880657 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 11 10:52:57.882496 master-0 kubenswrapper[4790]: I1011 10:52:57.882179 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 11 10:52:57.896546 master-0 kubenswrapper[4790]: I1011 10:52:57.883564 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 11 10:52:57.900707 master-2 kubenswrapper[4776]: I1011 10:52:57.900571 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:52:57.900909 master-0 kubenswrapper[4790]: I1011 10:52:57.900841 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Oct 11 10:52:57.905034 master-2 kubenswrapper[4776]: I1011 10:52:57.904920 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:57.909020 master-2 kubenswrapper[4776]: I1011 10:52:57.908984 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Oct 11 10:52:57.909205 master-2 kubenswrapper[4776]: I1011 10:52:57.909182 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Oct 11 10:52:57.909386 master-2 kubenswrapper[4776]: I1011 10:52:57.909364 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Oct 11 10:52:57.919977 master-0 kubenswrapper[4790]: I1011 10:52:57.919820 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-1"] Oct 11 10:52:57.920564 master-2 kubenswrapper[4776]: I1011 10:52:57.920504 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:52:57.922534 master-0 kubenswrapper[4790]: I1011 10:52:57.922502 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.927122 master-0 kubenswrapper[4790]: I1011 10:52:57.926999 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Oct 11 10:52:57.927441 master-0 kubenswrapper[4790]: I1011 10:52:57.927152 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Oct 11 10:52:57.928386 master-0 kubenswrapper[4790]: I1011 10:52:57.927694 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.941207 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-1"] Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945115 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945163 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945195 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktw9d\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-kube-api-access-ktw9d\") pod \"alertmanager-metric-storage-1\" (UID: 
\"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945226 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-volume\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945248 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945268 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvmh\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-kube-api-access-2kvmh\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945285 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945301 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-out\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945323 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-tls-assets\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945343 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945359 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945380 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945405 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-web-config\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945426 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-config-data\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945446 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91b83a1d-eeb3-4510-8d45-14992a484dba\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c2c7f44e-079f-46cd-a04e-fcf37ad4dbd2\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946085 master-0 kubenswrapper[4790]: I1011 10:52:57.945464 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c13cb0d1-c50f-44fa-824a-46ece423a7cc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:57.946699 master-0 kubenswrapper[4790]: I1011 10:52:57.945483 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c13cb0d1-c50f-44fa-824a-46ece423a7cc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025071 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025140 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025216 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025379 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 
10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025470 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.050686 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.050818 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.050852 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvmh\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-kube-api-access-2kvmh\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051024 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-out\") pod \"alertmanager-metric-storage-1\" (UID: 
\"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051069 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-tls-assets\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051148 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051170 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051203 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051249 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-web-config\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " 
pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051525 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051278 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-config-data\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.051737 master-0 kubenswrapper[4790]: I1011 10:52:58.051756 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91b83a1d-eeb3-4510-8d45-14992a484dba\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c2c7f44e-079f-46cd-a04e-fcf37ad4dbd2\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.051795 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c13cb0d1-c50f-44fa-824a-46ece423a7cc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.051837 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c13cb0d1-c50f-44fa-824a-46ece423a7cc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" 
Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.051873 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.051927 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.051982 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktw9d\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-kube-api-access-ktw9d\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.052457 master-0 kubenswrapper[4790]: I1011 10:52:58.052062 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-volume\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.068739 master-0 kubenswrapper[4790]: I1011 10:52:58.064048 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.086734 master-0 kubenswrapper[4790]: I1011 
10:52:58.079988 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-tls-assets\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.086734 master-0 kubenswrapper[4790]: I1011 10:52:58.083163 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:52:58.086734 master-0 kubenswrapper[4790]: I1011 10:52:58.083224 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91b83a1d-eeb3-4510-8d45-14992a484dba\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c2c7f44e-079f-46cd-a04e-fcf37ad4dbd2\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1114453ea83085dab1ca278aa52d732a8f54ac9ebf0fc42d5b564ce2eb10c0e8/globalmount\"" pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.086734 master-0 kubenswrapper[4790]: I1011 10:52:58.084244 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.086734 master-0 kubenswrapper[4790]: I1011 10:52:58.085107 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-config-data\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.099735 master-0 kubenswrapper[4790]: I1011 10:52:58.094241 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-out\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.099735 master-0 kubenswrapper[4790]: I1011 10:52:58.095692 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c13cb0d1-c50f-44fa-824a-46ece423a7cc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.099735 master-0 kubenswrapper[4790]: I1011 10:52:58.096544 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvmh\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-kube-api-access-2kvmh\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.099735 master-0 kubenswrapper[4790]: I1011 10:52:58.096997 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.099735 master-0 kubenswrapper[4790]: I1011 10:52:58.097734 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.125261 master-0 kubenswrapper[4790]: I1011 10:52:58.101441 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c13cb0d1-c50f-44fa-824a-46ece423a7cc-rabbitmq-tls\") pod 
\"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.126622 master-2 kubenswrapper[4776]: I1011 10:52:58.126568 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126657 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126695 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126716 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126780 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126813 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127608 master-2 kubenswrapper[4776]: I1011 10:52:58.127575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.130287 master-0 kubenswrapper[4790]: I1011 10:52:58.129190 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c13cb0d1-c50f-44fa-824a-46ece423a7cc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.130287 master-0 kubenswrapper[4790]: I1011 10:52:58.129790 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-web-config\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.130451 master-2 kubenswrapper[4776]: I1011 10:52:58.130418 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod 
\"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.131074 master-2 kubenswrapper[4776]: I1011 10:52:58.131034 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.131997 master-2 kubenswrapper[4776]: I1011 10:52:58.131930 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.134214 master-2 kubenswrapper[4776]: I1011 10:52:58.134153 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.151743 master-0 kubenswrapper[4790]: I1011 10:52:58.147845 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c13cb0d1-c50f-44fa-824a-46ece423a7cc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:58.151743 master-0 kubenswrapper[4790]: I1011 10:52:58.149854 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktw9d\" (UniqueName: \"kubernetes.io/projected/1ebbbaaf-f668-4c40-b437-2e730aef3912-kube-api-access-ktw9d\") pod \"alertmanager-metric-storage-1\" (UID: 
\"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.165803 master-2 kubenswrapper[4776]: I1011 10:52:58.165755 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.184819 master-0 kubenswrapper[4790]: I1011 10:52:58.179369 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ebbbaaf-f668-4c40-b437-2e730aef3912-config-volume\") pod \"alertmanager-metric-storage-1\" (UID: \"1ebbbaaf-f668-4c40-b437-2e730aef3912\") " pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:58.234140 master-2 kubenswrapper[4776]: I1011 10:52:58.234092 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.252257 master-0 kubenswrapper[4790]: I1011 10:52:58.248599 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-1" Oct 11 10:52:59.677868 master-0 kubenswrapper[4790]: I1011 10:52:59.677786 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91b83a1d-eeb3-4510-8d45-14992a484dba\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c2c7f44e-079f-46cd-a04e-fcf37ad4dbd2\") pod \"rabbitmq-server-1\" (UID: \"c13cb0d1-c50f-44fa-824a-46ece423a7cc\") " pod="openstack/rabbitmq-server-1" Oct 11 10:52:59.704997 master-0 kubenswrapper[4790]: I1011 10:52:59.704935 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Oct 11 10:53:00.876004 master-2 kubenswrapper[4776]: I1011 10:53:00.873983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:00.876004 master-2 kubenswrapper[4776]: I1011 10:53:00.875107 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.878073 master-2 kubenswrapper[4776]: I1011 10:53:00.878034 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.878323 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.878796 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.879885 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.880047 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.880474 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 11 10:53:00.883690 master-2 kubenswrapper[4776]: I1011 10:53:00.883646 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:00.971928 master-2 kubenswrapper[4776]: I1011 10:53:00.971861 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: 
\"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.971928 master-2 kubenswrapper[4776]: I1011 10:53:00.971923 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.971960 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972040 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972063 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" 
(UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972116 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972129 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972150 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972191 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972211 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: 
\"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.073868 master-2 kubenswrapper[4776]: I1011 10:53:01.073803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073876 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073903 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073934 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.074049 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074266 
master-2 kubenswrapper[4776]: I1011 10:53:01.074206 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074266 master-2 kubenswrapper[4776]: I1011 10:53:01.074245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074719 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074780 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074858 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074858 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074872 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.075804 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.076716 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.077229 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.078514 master-2 kubenswrapper[4776]: I1011 10:53:01.078489 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080339 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080430 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/5078d70166a213ba8ee51375445357cfc2acb4996c86c167f6517a7246ad420b/globalmount\"" pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080638 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.089195 master-2 kubenswrapper[4776]: I1011 10:53:01.089091 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: 
\"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2"
Oct 11 10:53:01.092282 master-0 kubenswrapper[4790]: I1011 10:53:01.092185 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-52t2l"]
Oct 11 10:53:01.095081 master-2 kubenswrapper[4776]: I1011 10:53:01.095020 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2"
Oct 11 10:53:01.101132 master-0 kubenswrapper[4790]: I1011 10:53:01.101068 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.117446 master-1 kubenswrapper[4771]: I1011 10:53:01.116496 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mtzk7"]
Oct 11 10:53:01.120969 master-1 kubenswrapper[4771]: I1011 10:53:01.120884 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.126243 master-1 kubenswrapper[4771]: I1011 10:53:01.126187 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 11 10:53:01.126776 master-1 kubenswrapper[4771]: I1011 10:53:01.126713 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 11 10:53:01.127050 master-1 kubenswrapper[4771]: I1011 10:53:01.126964 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mvzxp"]
Oct 11 10:53:01.131341 master-1 kubenswrapper[4771]: I1011 10:53:01.131277 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.137816 master-2 kubenswrapper[4776]: I1011 10:53:01.128119 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2"
Oct 11 10:53:01.137816 master-2 kubenswrapper[4776]: I1011 10:53:01.135936 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm"]
Oct 11 10:53:01.140161 master-1 kubenswrapper[4771]: I1011 10:53:01.140020 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7"]
Oct 11 10:53:01.147518 master-1 kubenswrapper[4771]: I1011 10:53:01.147460 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mvzxp"]
Oct 11 10:53:01.149699 master-2 kubenswrapper[4776]: I1011 10:53:01.147897 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4m6km"]
Oct 11 10:53:01.149699 master-2 kubenswrapper[4776]: I1011 10:53:01.148384 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.153741 master-2 kubenswrapper[4776]: I1011 10:53:01.150411 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.155704 master-2 kubenswrapper[4776]: I1011 10:53:01.155482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 11 10:53:01.155805 master-2 kubenswrapper[4776]: I1011 10:53:01.155750 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 11 10:53:01.174738 master-2 kubenswrapper[4776]: I1011 10:53:01.165371 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm"]
Oct 11 10:53:01.174738 master-2 kubenswrapper[4776]: I1011 10:53:01.165903 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Oct 11 10:53:01.175872 master-0 kubenswrapper[4790]: I1011 10:53:01.175808 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-52t2l"]
Oct 11 10:53:01.176542 master-0 kubenswrapper[4790]: I1011 10:53:01.176395 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 11 10:53:01.176542 master-0 kubenswrapper[4790]: I1011 10:53:01.176445 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 11 10:53:01.178906 master-0 kubenswrapper[4790]: I1011 10:53:01.177669 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.176636 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.181914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182032 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182076 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182243 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182305 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182329 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182479 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4m6km"]
Oct 11 10:53:01.185059 master-0 kubenswrapper[4790]: I1011 10:53:01.179276 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dw8wx"]
Oct 11 10:53:01.185059 master-0 kubenswrapper[4790]: I1011 10:53:01.180538 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.190661 master-0 kubenswrapper[4790]: I1011 10:53:01.190109 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dw8wx"]
Oct 11 10:53:01.195053 master-2 kubenswrapper[4776]: I1011 10:53:01.195007 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195143 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195218 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195281 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232686 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfjnz\" (UniqueName: \"kubernetes.io/projected/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-kube-api-access-hfjnz\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232804 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232847 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-log-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232882 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-combined-ca-bundle\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232930 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-ovn-controller-tls-certs\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232962 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.234010 master-0 kubenswrapper[4790]: I1011 10:53:01.232989 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-scripts\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.238797 master-1 kubenswrapper[4771]: I1011 10:53:01.232741 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252554 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ldr\" (UniqueName: \"kubernetes.io/projected/71b1c323-2ebf-4a37-9327-840d3f04eda1-kube-api-access-v5ldr\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-log\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252638 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252737 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252804 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jzl\" (UniqueName: \"kubernetes.io/projected/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-kube-api-access-p4jzl\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-combined-ca-bundle\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-log-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.252978 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-scripts\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.253006 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-etc-ovs\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.253175 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-ovn-controller-tls-certs\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.253250 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-lib\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.253407 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-run\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.254466 master-1 kubenswrapper[4771]: I1011 10:53:01.253458 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b1c323-2ebf-4a37-9327-840d3f04eda1-scripts\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.282061 master-1 kubenswrapper[4771]: I1011 10:53:01.281997 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"]
Oct 11 10:53:01.286270 master-1 kubenswrapper[4771]: W1011 10:53:01.286233 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37015f12_0983_4016_9f76_6d0e3f641f28.slice/crio-9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3 WatchSource:0}: Error finding container 9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3: Status 404 returned error can't find the container with id 9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3
Oct 11 10:53:01.298062 master-2 kubenswrapper[4776]: I1011 10:53:01.297987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.298062 master-2 kubenswrapper[4776]: I1011 10:53:01.298050 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298121 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298268 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298298 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298361 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298383 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.300150 master-2 kubenswrapper[4776]: I1011 10:53:01.299626 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.300150 master-2 kubenswrapper[4776]: I1011 10:53:01.299989 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.300391 master-2 kubenswrapper[4776]: I1011 10:53:01.300188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.300477 master-2 kubenswrapper[4776]: I1011 10:53:01.300433 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.300963 master-2 kubenswrapper[4776]: I1011 10:53:01.300891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.301164 master-2 kubenswrapper[4776]: I1011 10:53:01.301093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.301270 master-2 kubenswrapper[4776]: I1011 10:53:01.301255 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.304626 master-2 kubenswrapper[4776]: I1011 10:53:01.303979 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.305335 master-2 kubenswrapper[4776]: I1011 10:53:01.305241 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.306142 master-2 kubenswrapper[4776]: I1011 10:53:01.306049 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.306697 master-2 kubenswrapper[4776]: I1011 10:53:01.306639 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.319514 master-2 kubenswrapper[4776]: I1011 10:53:01.319465 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:01.323114 master-2 kubenswrapper[4776]: I1011 10:53:01.323073 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334282 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-combined-ca-bundle\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334349 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-scripts\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334403 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-ovn-controller-tls-certs\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334426 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334452 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-scripts\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334474 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-lib\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334504 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-etc-ovs\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334527 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfjnz\" (UniqueName: \"kubernetes.io/projected/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-kube-api-access-hfjnz\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334567 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334591 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-run\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334609 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7v6\" (UniqueName: \"kubernetes.io/projected/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-kube-api-access-zl7v6\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334631 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-log-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.335429 master-0 kubenswrapper[4790]: I1011 10:53:01.334648 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-log\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:01.336447 master-0 kubenswrapper[4790]: I1011 10:53:01.335686 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-log-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.336447 master-0 kubenswrapper[4790]: I1011 10:53:01.336004 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run-ovn\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.336447 master-0 kubenswrapper[4790]: I1011 10:53:01.336051 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-var-run\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.338353 master-0 kubenswrapper[4790]: I1011 10:53:01.338295 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-ovn-controller-tls-certs\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.344826 master-0 kubenswrapper[4790]: I1011 10:53:01.340441 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-scripts\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.344826 master-0 kubenswrapper[4790]: I1011 10:53:01.342043 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-combined-ca-bundle\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.356548 master-0 kubenswrapper[4790]: I1011 10:53:01.355652 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfjnz\" (UniqueName: \"kubernetes.io/projected/8c164a4b-a2d5-4570-aed3-86dbb1f3d47c-kube-api-access-hfjnz\") pod \"ovn-controller-52t2l\" (UID: \"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c\") " pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:01.357119 master-1 kubenswrapper[4771]: I1011 10:53:01.357065 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ldr\" (UniqueName: \"kubernetes.io/projected/71b1c323-2ebf-4a37-9327-840d3f04eda1-kube-api-access-v5ldr\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.357119 master-1 kubenswrapper[4771]: I1011 10:53:01.357126 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-log\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357149 4771 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jzl\" (UniqueName: \"kubernetes.io/projected/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-kube-api-access-p4jzl\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357216 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-combined-ca-bundle\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-log-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357267 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-scripts\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-etc-ovs\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.357332 master-1 kubenswrapper[4771]: I1011 10:53:01.357316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-ovn-controller-tls-certs\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.357617 master-1 kubenswrapper[4771]: I1011 10:53:01.357451 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-lib\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.357617 master-1 kubenswrapper[4771]: I1011 10:53:01.357489 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-run\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.357617 master-1 kubenswrapper[4771]: I1011 10:53:01.357512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b1c323-2ebf-4a37-9327-840d3f04eda1-scripts\") pod 
\"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.359095 master-1 kubenswrapper[4771]: I1011 10:53:01.359068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-log\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.359268 master-1 kubenswrapper[4771]: I1011 10:53:01.359215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.359627 master-1 kubenswrapper[4771]: I1011 10:53:01.359584 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-etc-ovs\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.359676 master-1 kubenswrapper[4771]: I1011 10:53:01.359640 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-run\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.359729 master-1 kubenswrapper[4771]: I1011 10:53:01.359641 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/71b1c323-2ebf-4a37-9327-840d3f04eda1-var-lib\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.359729 
master-1 kubenswrapper[4771]: I1011 10:53:01.359635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-run\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.359888 master-1 kubenswrapper[4771]: I1011 10:53:01.359865 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-var-log-ovn\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.361294 master-1 kubenswrapper[4771]: I1011 10:53:01.361248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b1c323-2ebf-4a37-9327-840d3f04eda1-scripts\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.361440 master-1 kubenswrapper[4771]: I1011 10:53:01.361413 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-scripts\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.363133 master-1 kubenswrapper[4771]: I1011 10:53:01.363074 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-ovn-controller-tls-certs\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.364543 master-1 kubenswrapper[4771]: I1011 10:53:01.364513 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-combined-ca-bundle\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.382077 master-1 kubenswrapper[4771]: I1011 10:53:01.381041 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jzl\" (UniqueName: \"kubernetes.io/projected/4ab25521-7fba-40c9-b3db-377b1d0ec7a1-kube-api-access-p4jzl\") pod \"ovn-controller-mtzk7\" (UID: \"4ab25521-7fba-40c9-b3db-377b1d0ec7a1\") " pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.382968 master-1 kubenswrapper[4771]: I1011 10:53:01.382920 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ldr\" (UniqueName: \"kubernetes.io/projected/71b1c323-2ebf-4a37-9327-840d3f04eda1-kube-api-access-v5ldr\") pod \"ovn-controller-ovs-mvzxp\" (UID: \"71b1c323-2ebf-4a37-9327-840d3f04eda1\") " pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.388827 master-1 kubenswrapper[4771]: I1011 10:53:01.388788 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 10:53:01.397147 master-1 kubenswrapper[4771]: W1011 10:53:01.397083 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe99fba_e358_4203_a516_04b9ae19d789.slice/crio-338665f1b49aeaee26ab96a1458b85f97acd4442412f04354b46796f50305999 WatchSource:0}: Error finding container 338665f1b49aeaee26ab96a1458b85f97acd4442412f04354b46796f50305999: Status 404 returned error can't find the container with id 338665f1b49aeaee26ab96a1458b85f97acd4442412f04354b46796f50305999 Oct 11 10:53:01.436744 master-0 kubenswrapper[4790]: I1011 10:53:01.436659 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-lib\") pod 
\"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.436997 master-0 kubenswrapper[4790]: I1011 10:53:01.436782 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-etc-ovs\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437236 master-0 kubenswrapper[4790]: I1011 10:53:01.437199 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-run\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437236 master-0 kubenswrapper[4790]: I1011 10:53:01.437232 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7v6\" (UniqueName: \"kubernetes.io/projected/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-kube-api-access-zl7v6\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437336 master-0 kubenswrapper[4790]: I1011 10:53:01.437262 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-log\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437507 master-0 kubenswrapper[4790]: I1011 10:53:01.437450 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-run\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " 
pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437704 master-0 kubenswrapper[4790]: I1011 10:53:01.437636 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-etc-ovs\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437806 master-0 kubenswrapper[4790]: I1011 10:53:01.437649 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-lib\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437806 master-0 kubenswrapper[4790]: I1011 10:53:01.437797 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-var-log\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.437905 master-0 kubenswrapper[4790]: I1011 10:53:01.437681 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-scripts\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.439950 master-0 kubenswrapper[4790]: I1011 10:53:01.439918 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-scripts\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.453497 master-1 kubenswrapper[4771]: I1011 10:53:01.453425 4771 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:01.457795 master-0 kubenswrapper[4790]: I1011 10:53:01.457679 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l" Oct 11 10:53:01.469489 master-0 kubenswrapper[4790]: I1011 10:53:01.469421 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7v6\" (UniqueName: \"kubernetes.io/projected/c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0-kube-api-access-zl7v6\") pod \"ovn-controller-ovs-dw8wx\" (UID: \"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0\") " pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.478276 master-1 kubenswrapper[4771]: I1011 10:53:01.478222 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:01.498237 master-2 kubenswrapper[4776]: I1011 10:53:01.498174 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.511570 master-2 kubenswrapper[4776]: I1011 10:53:01.511519 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.514513 master-0 kubenswrapper[4790]: I1011 10:53:01.514452 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:01.882392 master-1 kubenswrapper[4771]: I1011 10:53:01.882318 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6fe99fba-e358-4203-a516-04b9ae19d789","Type":"ContainerStarted","Data":"338665f1b49aeaee26ab96a1458b85f97acd4442412f04354b46796f50305999"} Oct 11 10:53:01.883667 master-1 kubenswrapper[4771]: I1011 10:53:01.883625 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3843977f-b9b9-4f98-9205-5dbe3113fa5e","Type":"ContainerStarted","Data":"2b6e1835ec110d7e4a6c5ea6f76cf1b56d9f129352c900bbd1f9ec66b48b8068"} Oct 11 10:53:01.885662 master-1 kubenswrapper[4771]: I1011 10:53:01.885588 4771 generic.go:334] "Generic (PLEG): container finished" podID="37015f12-0983-4016-9f76-6d0e3f641f28" containerID="6500f337e1810213b3c48514bdf7915497fff03dfebcbb66402d535cebd46613" exitCode=0 Oct 11 10:53:01.885662 master-1 kubenswrapper[4771]: I1011 10:53:01.885649 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" event={"ID":"37015f12-0983-4016-9f76-6d0e3f641f28","Type":"ContainerDied","Data":"6500f337e1810213b3c48514bdf7915497fff03dfebcbb66402d535cebd46613"} Oct 11 10:53:01.885821 master-1 kubenswrapper[4771]: I1011 10:53:01.885669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" event={"ID":"37015f12-0983-4016-9f76-6d0e3f641f28","Type":"ContainerStarted","Data":"9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3"} Oct 11 10:53:01.888479 master-1 kubenswrapper[4771]: I1011 10:53:01.888429 4771 generic.go:334] "Generic (PLEG): container finished" podID="7be8543e-04fe-4e9d-91c9-8219b843b991" containerID="d370c75dd26b77e221c3a247825030d6ec6b2e51f63eff5e38438108d29ede13" exitCode=0 Oct 11 10:53:01.888585 master-1 kubenswrapper[4771]: I1011 10:53:01.888488 4771 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" event={"ID":"7be8543e-04fe-4e9d-91c9-8219b843b991","Type":"ContainerDied","Data":"d370c75dd26b77e221c3a247825030d6ec6b2e51f63eff5e38438108d29ede13"} Oct 11 10:53:02.054644 master-1 kubenswrapper[4771]: I1011 10:53:02.054557 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7"] Oct 11 10:53:02.070103 master-1 kubenswrapper[4771]: W1011 10:53:02.068226 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab25521_7fba_40c9_b3db_377b1d0ec7a1.slice/crio-0c36d3f1d42190760f1c7787f28e6a160afd93a10732a347576180547afc6468 WatchSource:0}: Error finding container 0c36d3f1d42190760f1c7787f28e6a160afd93a10732a347576180547afc6468: Status 404 returned error can't find the container with id 0c36d3f1d42190760f1c7787f28e6a160afd93a10732a347576180547afc6468 Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: E1011 10:53:02.143529 4771 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/37015f12-0983-4016-9f76-6d0e3f641f28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: > podSandboxID="9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3" Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: E1011 10:53:02.143767 4771 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n86h5f9h86h67dh684h587h58hcbh68fh5d9h9hb7h659hc5h5c9h589h676h67ch579h668hc5h57h695h678h5f8hd8h664h698h5ffh664h54fhd8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r45c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000790000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5b4bcc4d85-hc9b5_openstack(37015f12-0983-4016-9f76-6d0e3f641f28): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/37015f12-0983-4016-9f76-6d0e3f641f28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: > logger="UnhandledError" Oct 11 10:53:02.152376 master-1 kubenswrapper[4771]: E1011 10:53:02.150639 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/37015f12-0983-4016-9f76-6d0e3f641f28/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" Oct 11 10:53:02.243109 master-2 kubenswrapper[4776]: I1011 10:53:02.243046 4776 generic.go:334] "Generic (PLEG): container finished" podID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerID="d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f" exitCode=0 Oct 11 10:53:02.243109 master-2 
kubenswrapper[4776]: I1011 10:53:02.243107 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerDied","Data":"d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f"} Oct 11 10:53:02.284535 master-1 kubenswrapper[4771]: I1011 10:53:02.284470 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:53:02.307170 master-1 kubenswrapper[4771]: I1011 10:53:02.307089 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 10:53:02.307540 master-1 kubenswrapper[4771]: E1011 10:53:02.307374 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be8543e-04fe-4e9d-91c9-8219b843b991" containerName="init" Oct 11 10:53:02.307540 master-1 kubenswrapper[4771]: I1011 10:53:02.307389 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be8543e-04fe-4e9d-91c9-8219b843b991" containerName="init" Oct 11 10:53:02.307540 master-1 kubenswrapper[4771]: I1011 10:53:02.307533 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be8543e-04fe-4e9d-91c9-8219b843b991" containerName="init" Oct 11 10:53:02.308321 master-1 kubenswrapper[4771]: I1011 10:53:02.308285 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.311684 master-1 kubenswrapper[4771]: I1011 10:53:02.310986 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 11 10:53:02.311684 master-1 kubenswrapper[4771]: I1011 10:53:02.311433 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 11 10:53:02.312030 master-1 kubenswrapper[4771]: I1011 10:53:02.312005 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 11 10:53:02.312189 master-1 kubenswrapper[4771]: I1011 10:53:02.312157 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 11 10:53:02.312325 master-1 kubenswrapper[4771]: I1011 10:53:02.312296 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 11 10:53:02.312564 master-1 kubenswrapper[4771]: I1011 10:53:02.312539 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 11 10:53:02.332637 master-1 kubenswrapper[4771]: I1011 10:53:02.332566 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 10:53:02.377503 master-1 kubenswrapper[4771]: I1011 10:53:02.377431 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config\") pod \"7be8543e-04fe-4e9d-91c9-8219b843b991\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " Oct 11 10:53:02.378075 master-1 kubenswrapper[4771]: I1011 10:53:02.377614 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpgxh\" (UniqueName: \"kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh\") pod 
\"7be8543e-04fe-4e9d-91c9-8219b843b991\" (UID: \"7be8543e-04fe-4e9d-91c9-8219b843b991\") " Oct 11 10:53:02.383894 master-1 kubenswrapper[4771]: I1011 10:53:02.383847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh" (OuterVolumeSpecName: "kube-api-access-cpgxh") pod "7be8543e-04fe-4e9d-91c9-8219b843b991" (UID: "7be8543e-04fe-4e9d-91c9-8219b843b991"). InnerVolumeSpecName "kube-api-access-cpgxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:02.416464 master-1 kubenswrapper[4771]: I1011 10:53:02.416401 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config" (OuterVolumeSpecName: "config") pod "7be8543e-04fe-4e9d-91c9-8219b843b991" (UID: "7be8543e-04fe-4e9d-91c9-8219b843b991"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:02.457898 master-2 kubenswrapper[4776]: I1011 10:53:02.445159 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:53:02.474701 master-2 kubenswrapper[4776]: I1011 10:53:02.470108 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485799 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtjs\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-kube-api-access-tmtjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485851 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485875 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485946 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac5ccdc8-e999-4333-8daa-1020a63a77e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a793077-de91-441f-80c8-5b4445c0ddaf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/831321b9-20ce-409b-8bdb-ec231aef5f35-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.485997 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486014 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486040 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/831321b9-20ce-409b-8bdb-ec231aef5f35-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486280 4771 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpgxh\" (UniqueName: \"kubernetes.io/projected/7be8543e-04fe-4e9d-91c9-8219b843b991-kube-api-access-cpgxh\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:02.487459 master-1 kubenswrapper[4771]: I1011 10:53:02.486295 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be8543e-04fe-4e9d-91c9-8219b843b991-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtjs\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-kube-api-access-tmtjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588271 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac5ccdc8-e999-4333-8daa-1020a63a77e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a793077-de91-441f-80c8-5b4445c0ddaf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588406 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/831321b9-20ce-409b-8bdb-ec231aef5f35-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588442 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588469 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588517 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588539 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/831321b9-20ce-409b-8bdb-ec231aef5f35-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588606 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.588632 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.589592 master-1 kubenswrapper[4771]: I1011 10:53:02.589155 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.590693 master-1 kubenswrapper[4771]: I1011 10:53:02.589989 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.590693 master-1 kubenswrapper[4771]: I1011 10:53:02.590274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.591540 master-1 kubenswrapper[4771]: I1011 10:53:02.591493 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.591934 master-1 kubenswrapper[4771]: I1011 10:53:02.591902 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:02.591993 master-1 kubenswrapper[4771]: I1011 10:53:02.591935 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac5ccdc8-e999-4333-8daa-1020a63a77e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a793077-de91-441f-80c8-5b4445c0ddaf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/db0f3e0ea24975b05cacd28424c426b113bc2abc30785b5ccd7afb590600ef18/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.592833 master-1 kubenswrapper[4771]: I1011 10:53:02.592790 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.593274 master-1 kubenswrapper[4771]: I1011 10:53:02.593239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/831321b9-20ce-409b-8bdb-ec231aef5f35-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.593274 master-1 kubenswrapper[4771]: I1011 
10:53:02.593258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/831321b9-20ce-409b-8bdb-ec231aef5f35-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.595242 master-1 kubenswrapper[4771]: I1011 10:53:02.595203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/831321b9-20ce-409b-8bdb-ec231aef5f35-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.595972 master-1 kubenswrapper[4771]: I1011 10:53:02.595936 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.596039 master-1 kubenswrapper[4771]: I1011 10:53:02.596018 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mvzxp"] Oct 11 10:53:02.613301 master-1 kubenswrapper[4771]: I1011 10:53:02.613248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtjs\" (UniqueName: \"kubernetes.io/projected/831321b9-20ce-409b-8bdb-ec231aef5f35-kube-api-access-tmtjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:53:02.614432 master-1 kubenswrapper[4771]: W1011 10:53:02.614392 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b1c323_2ebf_4a37_9327_840d3f04eda1.slice/crio-b886f32cad28d7dd63a7e66891a5ff4e56a52d75e0ab3b67c6a7dd18be269991 WatchSource:0}: Error 
finding container b886f32cad28d7dd63a7e66891a5ff4e56a52d75e0ab3b67c6a7dd18be269991: Status 404 returned error can't find the container with id b886f32cad28d7dd63a7e66891a5ff4e56a52d75e0ab3b67c6a7dd18be269991 Oct 11 10:53:02.640842 master-2 kubenswrapper[4776]: I1011 10:53:02.640783 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:02.680348 master-0 kubenswrapper[4790]: I1011 10:53:02.678147 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Oct 11 10:53:02.704712 master-2 kubenswrapper[4776]: I1011 10:53:02.704656 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm"] Oct 11 10:53:02.706659 master-2 kubenswrapper[4776]: W1011 10:53:02.706549 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065373ca_8c0f_489c_a72e_4d1aee1263ba.slice/crio-e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe WatchSource:0}: Error finding container e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe: Status 404 returned error can't find the container with id e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe Oct 11 10:53:02.712931 master-2 kubenswrapper[4776]: I1011 10:53:02.712885 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:53:02.731746 master-2 kubenswrapper[4776]: I1011 10:53:02.731595 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Oct 11 10:53:02.792417 master-0 kubenswrapper[4790]: I1011 10:53:02.790094 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-52t2l"] Oct 11 10:53:02.805010 master-0 kubenswrapper[4790]: I1011 10:53:02.804342 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-1"] Oct 11 10:53:02.866462 master-0 kubenswrapper[4790]: I1011 10:53:02.865727 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-82rlt"] Oct 11 10:53:02.867112 master-0 kubenswrapper[4790]: I1011 10:53:02.867086 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.872291 master-0 kubenswrapper[4790]: I1011 10:53:02.872234 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 11 10:53:02.875103 master-0 kubenswrapper[4790]: I1011 10:53:02.873880 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 11 10:53:02.890834 master-2 kubenswrapper[4776]: I1011 10:53:02.890601 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"] Oct 11 10:53:02.891879 master-2 kubenswrapper[4776]: I1011 10:53:02.891842 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.892875 master-1 kubenswrapper[4771]: I1011 10:53:02.892794 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xvjvs"] Oct 11 10:53:02.893905 master-1 kubenswrapper[4771]: I1011 10:53:02.893879 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.897175 master-1 kubenswrapper[4771]: I1011 10:53:02.897121 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 11 10:53:02.897421 master-1 kubenswrapper[4771]: I1011 10:53:02.897221 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 11 10:53:02.899371 master-2 kubenswrapper[4776]: I1011 10:53:02.897568 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 11 10:53:02.899371 master-2 kubenswrapper[4776]: I1011 10:53:02.897771 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 11 10:53:02.905059 master-1 kubenswrapper[4771]: I1011 10:53:02.904997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mvzxp" event={"ID":"71b1c323-2ebf-4a37-9327-840d3f04eda1","Type":"ContainerStarted","Data":"b886f32cad28d7dd63a7e66891a5ff4e56a52d75e0ab3b67c6a7dd18be269991"} Oct 11 10:53:02.909178 master-1 kubenswrapper[4771]: I1011 10:53:02.909075 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7" event={"ID":"4ab25521-7fba-40c9-b3db-377b1d0ec7a1","Type":"ContainerStarted","Data":"0c36d3f1d42190760f1c7787f28e6a160afd93a10732a347576180547afc6468"} Oct 11 10:53:02.911994 master-1 kubenswrapper[4771]: I1011 10:53:02.911961 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xvjvs"] Oct 11 10:53:02.912677 master-1 kubenswrapper[4771]: I1011 10:53:02.912652 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" Oct 11 10:53:02.912765 master-1 kubenswrapper[4771]: I1011 10:53:02.912689 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-m5nng" event={"ID":"7be8543e-04fe-4e9d-91c9-8219b843b991","Type":"ContainerDied","Data":"05a1b34b2d5123ecb163a3c60cc5758f73fbbac730006935705b7b57187290ed"} Oct 11 10:53:02.912836 master-1 kubenswrapper[4771]: I1011 10:53:02.912764 4771 scope.go:117] "RemoveContainer" containerID="d370c75dd26b77e221c3a247825030d6ec6b2e51f63eff5e38438108d29ede13" Oct 11 10:53:02.914534 master-0 kubenswrapper[4790]: I1011 10:53:02.914449 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-82rlt"] Oct 11 10:53:02.916324 master-2 kubenswrapper[4776]: I1011 10:53:02.912514 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"] Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958271 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958327 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958399 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958424 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.966892 master-0 kubenswrapper[4790]: I1011 10:53:02.966844 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c572ba-fc98-4468-939a-bbe0eadb7b63-config\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.967004 master-0 kubenswrapper[4790]: I1011 10:53:02.966900 4790 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovn-rundir\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.967004 master-0 kubenswrapper[4790]: I1011 10:53:02.966940 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.967004 master-0 kubenswrapper[4790]: I1011 10:53:02.966968 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvjm\" (UniqueName: \"kubernetes.io/projected/f4c572ba-fc98-4468-939a-bbe0eadb7b63-kube-api-access-pbvjm\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.967105 master-0 kubenswrapper[4790]: I1011 10:53:02.967016 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovs-rundir\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.967105 master-0 kubenswrapper[4790]: I1011 10:53:02.967033 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-combined-ca-bundle\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " 
pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:02.982682 master-1 kubenswrapper[4771]: I1011 10:53:02.982557 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:53:02.993172 master-1 kubenswrapper[4771]: I1011 10:53:02.992999 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-m5nng"] Oct 11 10:53:02.994916 master-1 kubenswrapper[4771]: I1011 10:53:02.994890 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.995137 master-1 kubenswrapper[4771]: I1011 10:53:02.995117 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-config\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.995291 master-1 kubenswrapper[4771]: I1011 10:53:02.995271 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovs-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.995476 master-1 kubenswrapper[4771]: I1011 10:53:02.995461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovn-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: 
\"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.995593 master-1 kubenswrapper[4771]: I1011 10:53:02.995573 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-combined-ca-bundle\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:02.995740 master-1 kubenswrapper[4771]: I1011 10:53:02.995725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnsn\" (UniqueName: \"kubernetes.io/projected/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-kube-api-access-8rnsn\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.068400 master-0 kubenswrapper[4790]: I1011 10:53:03.068330 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovs-rundir\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.068400 master-0 kubenswrapper[4790]: I1011 10:53:03.068384 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-combined-ca-bundle\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.068400 master-0 kubenswrapper[4790]: I1011 10:53:03.068416 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c572ba-fc98-4468-939a-bbe0eadb7b63-config\") 
pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.069057 master-0 kubenswrapper[4790]: I1011 10:53:03.068436 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovn-rundir\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.069057 master-0 kubenswrapper[4790]: I1011 10:53:03.068468 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.069057 master-0 kubenswrapper[4790]: I1011 10:53:03.068492 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvjm\" (UniqueName: \"kubernetes.io/projected/f4c572ba-fc98-4468-939a-bbe0eadb7b63-kube-api-access-pbvjm\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.069057 master-0 kubenswrapper[4790]: I1011 10:53:03.068939 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovs-rundir\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.072005 master-0 kubenswrapper[4790]: I1011 10:53:03.069407 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4c572ba-fc98-4468-939a-bbe0eadb7b63-ovn-rundir\") pod 
\"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.072005 master-0 kubenswrapper[4790]: I1011 10:53:03.070283 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c572ba-fc98-4468-939a-bbe0eadb7b63-config\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071910 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod 
\"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071970 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071989 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.072218 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.073954 master-2 kubenswrapper[4776]: I1011 10:53:03.073861 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.076542 master-2 kubenswrapper[4776]: I1011 10:53:03.076452 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.076929 master-0 kubenswrapper[4790]: I1011 10:53:03.076573 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.077115 master-0 kubenswrapper[4790]: I1011 10:53:03.076846 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c572ba-fc98-4468-939a-bbe0eadb7b63-combined-ca-bundle\") pod \"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.077318 master-2 kubenswrapper[4776]: I1011 10:53:03.077277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.078101 master-2 kubenswrapper[4776]: I1011 10:53:03.078005 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.094790 master-0 kubenswrapper[4790]: I1011 10:53:03.094673 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbvjm\" (UniqueName: \"kubernetes.io/projected/f4c572ba-fc98-4468-939a-bbe0eadb7b63-kube-api-access-pbvjm\") pod 
\"ovn-controller-metrics-82rlt\" (UID: \"f4c572ba-fc98-4468-939a-bbe0eadb7b63\") " pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.102296 master-2 kubenswrapper[4776]: I1011 10:53:03.102248 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:53:03.102927 master-1 kubenswrapper[4771]: I1011 10:53:03.097024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-combined-ca-bundle\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.103645 master-1 kubenswrapper[4771]: I1011 10:53:03.100575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-combined-ca-bundle\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.103837 master-1 kubenswrapper[4771]: I1011 10:53:03.103798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnsn\" (UniqueName: \"kubernetes.io/projected/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-kube-api-access-8rnsn\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.104020 master-1 kubenswrapper[4771]: I1011 10:53:03.104001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.104344 master-1 
kubenswrapper[4771]: I1011 10:53:03.104325 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-config\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.104520 master-1 kubenswrapper[4771]: I1011 10:53:03.104501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovs-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.104623 master-1 kubenswrapper[4771]: I1011 10:53:03.104594 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovs-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.104658 master-2 kubenswrapper[4776]: I1011 10:53:03.104606 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.104904 master-1 kubenswrapper[4771]: I1011 10:53:03.104881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovn-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.105073 master-1 kubenswrapper[4771]: I1011 
10:53:03.104948 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-ovn-rundir\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.105557 master-1 kubenswrapper[4771]: I1011 10:53:03.105492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-config\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.108770 master-1 kubenswrapper[4771]: I1011 10:53:03.108312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.124485 master-1 kubenswrapper[4771]: I1011 10:53:03.124428 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnsn\" (UniqueName: \"kubernetes.io/projected/a0617ee1-7793-4630-9daa-5d4d02f1c5fe-kube-api-access-8rnsn\") pod \"ovn-controller-metrics-xvjvs\" (UID: \"a0617ee1-7793-4630-9daa-5d4d02f1c5fe\") " pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.175496 master-2 kubenswrapper[4776]: I1011 10:53:03.173395 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " Oct 11 10:53:03.175496 master-2 kubenswrapper[4776]: I1011 10:53:03.173650 4776 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " Oct 11 10:53:03.208809 master-2 kubenswrapper[4776]: I1011 10:53:03.194012 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config" (OuterVolumeSpecName: "config") pod "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" (UID: "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:03.208809 master-2 kubenswrapper[4776]: I1011 10:53:03.196830 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt" (OuterVolumeSpecName: "kube-api-access-htpgt") pod "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" (UID: "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f"). InnerVolumeSpecName "kube-api-access-htpgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:03.218058 master-0 kubenswrapper[4790]: I1011 10:53:03.217890 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-82rlt" Oct 11 10:53:03.218443 master-1 kubenswrapper[4771]: I1011 10:53:03.217178 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xvjvs" Oct 11 10:53:03.218739 master-2 kubenswrapper[4776]: I1011 10:53:03.218601 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.262319 master-2 kubenswrapper[4776]: I1011 10:53:03.262247 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"c5722ea7768634ccef469501bce467b2c6a8e3ecf85e25ba38414154e4e1054f"} Oct 11 10:53:03.264568 master-2 kubenswrapper[4776]: I1011 10:53:03.264404 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerDied","Data":"bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e"} Oct 11 10:53:03.264652 master-2 kubenswrapper[4776]: I1011 10:53:03.264574 4776 scope.go:117] "RemoveContainer" containerID="d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f" Oct 11 10:53:03.264736 master-2 kubenswrapper[4776]: I1011 10:53:03.264719 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273555 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerID="c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb" exitCode=0 Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273662 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb"} Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerStarted","Data":"f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a"} Oct 11 10:53:03.280713 master-2 kubenswrapper[4776]: I1011 10:53:03.280683 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:03.280793 master-2 kubenswrapper[4776]: I1011 10:53:03.280716 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:03.283034 master-2 kubenswrapper[4776]: I1011 10:53:03.282727 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm" event={"ID":"065373ca-8c0f-489c-a72e-4d1aee1263ba","Type":"ContainerStarted","Data":"e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe"} Oct 11 10:53:03.284357 master-2 kubenswrapper[4776]: I1011 10:53:03.284290 4776 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerStarted","Data":"b03215961b9c5150bde0b4e7e2b48a359893ef2938e93f7fad3388b4aeef63a0"} Oct 11 10:53:03.332124 master-2 kubenswrapper[4776]: I1011 10:53:03.332089 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:03.344082 master-2 kubenswrapper[4776]: W1011 10:53:03.343498 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914ac6d0_5a85_4b2d_b4d4_202def09b0d8.slice/crio-366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8 WatchSource:0}: Error finding container 366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8: Status 404 returned error can't find the container with id 366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8 Oct 11 10:53:03.346765 master-0 kubenswrapper[4790]: I1011 10:53:03.346204 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-1"] Oct 11 10:53:03.347534 master-0 kubenswrapper[4790]: I1011 10:53:03.347499 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.353339 master-0 kubenswrapper[4790]: I1011 10:53:03.353266 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 11 10:53:03.353566 master-0 kubenswrapper[4790]: I1011 10:53:03.353502 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 11 10:53:03.354686 master-0 kubenswrapper[4790]: I1011 10:53:03.354651 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 11 10:53:03.354931 master-0 kubenswrapper[4790]: I1011 10:53:03.354900 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 11 10:53:03.356628 master-0 kubenswrapper[4790]: I1011 10:53:03.355115 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 11 10:53:03.356628 master-0 kubenswrapper[4790]: I1011 10:53:03.356265 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 11 10:53:03.382502 master-0 kubenswrapper[4790]: I1011 10:53:03.381827 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-1"] Oct 11 10:53:03.391906 master-2 kubenswrapper[4776]: W1011 10:53:03.391853 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894f72d0_cdc8_4904_b8a4_0e808ce0b855.slice/crio-37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d WatchSource:0}: Error finding container 37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d: Status 404 returned error can't find the container with id 37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d Oct 11 10:53:03.454724 master-2 kubenswrapper[4776]: I1011 10:53:03.454634 4776 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:53:03.460384 master-2 kubenswrapper[4776]: I1011 10:53:03.460251 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:53:03.475187 master-0 kubenswrapper[4790]: I1011 10:53:03.475055 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d71dd31-2a9e-4c3d-bff4-ee1b201f04c4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^62c55997-0912-4363-a44a-b273574b90ee\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475187 master-0 kubenswrapper[4790]: I1011 10:53:03.475132 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-plugins-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475187 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be929908-6474-451d-8b87-e4effd7c6de4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475234 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-config-data\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475263 4790 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475288 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be929908-6474-451d-8b87-e4effd7c6de4-pod-info\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475310 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475325 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475344 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 
master-0 kubenswrapper[4790]: I1011 10:53:03.475358 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sl2\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-kube-api-access-q7sl2\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.475421 master-0 kubenswrapper[4790]: I1011 10:53:03.475382 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-server-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.481049 master-2 kubenswrapper[4776]: I1011 10:53:03.466662 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4m6km"] Oct 11 10:53:03.484296 master-0 kubenswrapper[4790]: I1011 10:53:03.478963 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-1" event={"ID":"1ebbbaaf-f668-4c40-b437-2e730aef3912","Type":"ContainerStarted","Data":"7f9ca70ea9150158ea01389d4f5d47ae8eb1c96eba28945e19777e7f6cd26a21"} Oct 11 10:53:03.484296 master-0 kubenswrapper[4790]: I1011 10:53:03.480373 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-52t2l" event={"ID":"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c","Type":"ContainerStarted","Data":"ffd8516e1a802f15260c59750ec313118428488f3fac69d6ce8787ab8d39ef71"} Oct 11 10:53:03.484296 master-0 kubenswrapper[4790]: I1011 10:53:03.482339 4790 generic.go:334] "Generic (PLEG): container finished" podID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerID="00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa" exitCode=0 Oct 11 10:53:03.484296 master-0 kubenswrapper[4790]: I1011 10:53:03.482416 4790 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" event={"ID":"f97fbf89-0d03-4ed8-a0d2-4f796e705e20","Type":"ContainerDied","Data":"00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa"} Oct 11 10:53:03.484889 master-0 kubenswrapper[4790]: I1011 10:53:03.484603 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c13cb0d1-c50f-44fa-824a-46ece423a7cc","Type":"ContainerStarted","Data":"55aa020178354779f63bbec7883cd55060aea9bddab75564cd660ae8d36eac99"} Oct 11 10:53:03.577608 master-0 kubenswrapper[4790]: I1011 10:53:03.577532 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d71dd31-2a9e-4c3d-bff4-ee1b201f04c4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^62c55997-0912-4363-a44a-b273574b90ee\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577634 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-plugins-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577682 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be929908-6474-451d-8b87-e4effd7c6de4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577770 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-config-data\") pod 
\"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577801 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577835 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be929908-6474-451d-8b87-e4effd7c6de4-pod-info\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577866 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577889 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577923 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-1\" (UID: 
\"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577950 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7sl2\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-kube-api-access-q7sl2\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.578009 master-0 kubenswrapper[4790]: I1011 10:53:03.577978 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-server-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.580316 master-0 kubenswrapper[4790]: I1011 10:53:03.580245 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.580569 master-0 kubenswrapper[4790]: I1011 10:53:03.580513 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.580834 master-0 kubenswrapper[4790]: I1011 10:53:03.580808 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-server-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " 
pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.582827 master-0 kubenswrapper[4790]: I1011 10:53:03.582686 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-config-data\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.586675 master-0 kubenswrapper[4790]: I1011 10:53:03.584964 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be929908-6474-451d-8b87-e4effd7c6de4-pod-info\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.586675 master-0 kubenswrapper[4790]: I1011 10:53:03.585561 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:03.586675 master-0 kubenswrapper[4790]: I1011 10:53:03.585592 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d71dd31-2a9e-4c3d-bff4-ee1b201f04c4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^62c55997-0912-4363-a44a-b273574b90ee\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0325335efdc6373b1da0b492b8c5ad80b94f8a3a314c03c4818d28b6fb013145/globalmount\"" pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.586675 master-0 kubenswrapper[4790]: I1011 10:53:03.586587 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be929908-6474-451d-8b87-e4effd7c6de4-plugins-conf\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:53:03.589672 master-0 kubenswrapper[4790]: I1011 
10:53:03.589612 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:03.591258 master-0 kubenswrapper[4790]: I1011 10:53:03.589955 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be929908-6474-451d-8b87-e4effd7c6de4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:03.591398 master-0 kubenswrapper[4790]: I1011 10:53:03.591293 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:03.603781 master-0 kubenswrapper[4790]: I1011 10:53:03.603695 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sl2\" (UniqueName: \"kubernetes.io/projected/be929908-6474-451d-8b87-e4effd7c6de4-kube-api-access-q7sl2\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:03.693462 master-0 kubenswrapper[4790]: E1011 10:53:03.693282 4790 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Oct 11 10:53:03.693462 master-0 kubenswrapper[4790]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f97fbf89-0d03-4ed8-a0d2-4f796e705e20/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 11 10:53:03.693462 master-0 kubenswrapper[4790]: > podSandboxID="a4a9276558748dc6921cea20229a43f4acf57679858b58aec74901d23d4a131c"
Oct 11 10:53:03.695366 master-0 kubenswrapper[4790]: E1011 10:53:03.693539 4790 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Oct 11 10:53:03.695366 master-0 kubenswrapper[4790]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58fh579h64dh56ch657h674h656h9fh547h5hf7hc6h557hfdh566h66fh69h5cdhfh59fh58ch678h587h68ch675h6ch559h5f4h549h5f7h56fh586q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wx2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000790000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6944757b7f-plhvq_openstack(f97fbf89-0d03-4ed8-a0d2-4f796e705e20): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f97fbf89-0d03-4ed8-a0d2-4f796e705e20/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 11 10:53:03.695366 master-0 kubenswrapper[4790]: > logger="UnhandledError"
Oct 11 10:53:03.695366 master-0 kubenswrapper[4790]: E1011 10:53:03.694945 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f97fbf89-0d03-4ed8-a0d2-4f796e705e20/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20"
Oct 11 10:53:03.722323 master-0 kubenswrapper[4790]: I1011 10:53:03.720684 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-82rlt"]
Oct 11 10:53:03.793595 master-2 kubenswrapper[4776]: I1011 10:53:03.793517 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"]
Oct 11 10:53:03.837874 master-2 kubenswrapper[4776]: W1011 10:53:03.837804 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebedfef5_9861_41cd_a97e_c59ff798091b.slice/crio-300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8 WatchSource:0}: Error finding container 300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8: Status 404 returned error can't find the container with id 300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8
Oct 11 10:53:04.072857 master-2 kubenswrapper[4776]: I1011 10:53:04.072642 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" path="/var/lib/kubelet/pods/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f/volumes"
Oct 11 10:53:04.100413 master-0 kubenswrapper[4790]: I1011 10:53:04.100376 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dw8wx"]
Oct 11 10:53:04.226311 master-1 kubenswrapper[4771]: I1011 10:53:04.226258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac5ccdc8-e999-4333-8daa-1020a63a77e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a793077-de91-441f-80c8-5b4445c0ddaf\") pod \"rabbitmq-cell1-server-0\" (UID: \"831321b9-20ce-409b-8bdb-ec231aef5f35\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 10:53:04.317576 master-2 kubenswrapper[4776]: I1011 10:53:04.316799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerStarted","Data":"21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68"}
Oct 11 10:53:04.317576 master-2 kubenswrapper[4776]: I1011 10:53:04.316879 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v"
Oct 11 10:53:04.319642 master-2 kubenswrapper[4776]: I1011 10:53:04.319599 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8"}
Oct 11 10:53:04.322584 master-2 kubenswrapper[4776]: I1011 10:53:04.322485 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d"}
Oct 11 10:53:04.325218 master-2 kubenswrapper[4776]: I1011 10:53:04.324891 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kcjm9" event={"ID":"ebedfef5-9861-41cd-a97e-c59ff798091b","Type":"ContainerStarted","Data":"300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8"}
Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.346503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-2"]
Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: E1011 10:53:04.346869 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init"
Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.346883 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init"
Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.347021 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init"
Oct 11 10:53:04.355322 master-2 kubenswrapper[4776]: I1011 10:53:04.355243 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" podStartSLOduration=13.355222144 podStartE2EDuration="13.355222144s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:04.352022908 +0000 UTC m=+1619.136449627" watchObservedRunningTime="2025-10-11 10:53:04.355222144 +0000 UTC m=+1619.139648853"
Oct 11 10:53:04.361016 master-2 kubenswrapper[4776]: I1011 10:53:04.360972 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.363919 master-2 kubenswrapper[4776]: I1011 10:53:04.363874 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 11 10:53:04.364127 master-2 kubenswrapper[4776]: I1011 10:53:04.364097 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 11 10:53:04.364200 master-2 kubenswrapper[4776]: I1011 10:53:04.364161 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 11 10:53:04.364295 master-2 kubenswrapper[4776]: I1011 10:53:04.364257 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 11 10:53:04.364367 master-2 kubenswrapper[4776]: I1011 10:53:04.364339 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 11 10:53:04.364544 master-2 kubenswrapper[4776]: I1011 10:53:04.364507 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 11 10:53:04.372766 master-2 kubenswrapper[4776]: I1011 10:53:04.369392 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"]
Oct 11 10:53:04.440500 master-1 kubenswrapper[4771]: I1011 10:53:04.439929 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 11 10:53:04.449691 master-1 kubenswrapper[4771]: I1011 10:53:04.449629 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be8543e-04fe-4e9d-91c9-8219b843b991" path="/var/lib/kubelet/pods/7be8543e-04fe-4e9d-91c9-8219b843b991/volumes"
Oct 11 10:53:04.493228 master-0 kubenswrapper[4790]: I1011 10:53:04.493177 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dw8wx" event={"ID":"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0","Type":"ContainerStarted","Data":"62cb6b7233fe70010bd1bf3163750a8ee8f35a71e5521a83b5ec97f307536726"}
Oct 11 10:53:04.495944 master-0 kubenswrapper[4790]: I1011 10:53:04.495837 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-82rlt" event={"ID":"f4c572ba-fc98-4468-939a-bbe0eadb7b63","Type":"ContainerStarted","Data":"f6bc5207e4efea93fc1bba72f36dea738790b37fd7ecba635a7f8b806c0ce82e"}
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509337 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509431 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509449 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509486 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509506 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509537 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509727 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509849 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509865 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509901 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612575 master-2 kubenswrapper[4776]: I1011 10:53:04.612529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612579 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612601 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612621 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612652 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612667 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612707 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612723 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612745 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612779 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612800 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.613899 master-2 kubenswrapper[4776]: I1011 10:53:04.613873 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.614786 master-2 kubenswrapper[4776]: I1011 10:53:04.614763 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.615326 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.615770 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.616401 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.624444 master-2 kubenswrapper[4776]: I1011 10:53:04.624387 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.624753 master-2 kubenswrapper[4776]: I1011 10:53:04.624710 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.624868 master-2 kubenswrapper[4776]: I1011 10:53:04.624841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.624906 master-2 kubenswrapper[4776]: I1011 10:53:04.624841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.625922 master-2 kubenswrapper[4776]: I1011 10:53:04.625860 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:04.626042 master-2 kubenswrapper[4776]: I1011 10:53:04.625950 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a8b6e9479ea03f681dc5123c419d7b32975608d13dab5d7e5a7d7a4095c8fa00/globalmount\"" pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:04.641448 master-2 kubenswrapper[4776]: I1011 10:53:04.641409 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:05.150634 master-0 kubenswrapper[4790]: I1011 10:53:05.150584 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d71dd31-2a9e-4c3d-bff4-ee1b201f04c4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^62c55997-0912-4363-a44a-b273574b90ee\") pod \"rabbitmq-cell1-server-1\" (UID: \"be929908-6474-451d-8b87-e4effd7c6de4\") " pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:05.361108 master-1 kubenswrapper[4771]: I1011 10:53:05.361008 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Oct 11 10:53:05.362503 master-1 kubenswrapper[4771]: I1011 10:53:05.362476 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 11 10:53:05.365873 master-1 kubenswrapper[4771]: I1011 10:53:05.365817 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Oct 11 10:53:05.366284 master-1 kubenswrapper[4771]: I1011 10:53:05.366249 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Oct 11 10:53:05.366641 master-1 kubenswrapper[4771]: I1011 10:53:05.366617 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Oct 11 10:53:05.374729 master-1 kubenswrapper[4771]: I1011 10:53:05.374688 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Oct 11 10:53:05.490382 master-1 kubenswrapper[4771]: I1011 10:53:05.484063 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 11 10:53:05.511365 master-0 kubenswrapper[4790]: I1011 10:53:05.510131 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:05.551092 master-1 kubenswrapper[4771]: I1011 10:53:05.551015 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551400 master-1 kubenswrapper[4771]: I1011 10:53:05.551225 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-secrets\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551400 master-1 kubenswrapper[4771]: I1011 10:53:05.551372 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551477 master-1 kubenswrapper[4771]: I1011 10:53:05.551409 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551546 master-1 kubenswrapper[4771]: I1011 10:53:05.551522 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551664 master-1 kubenswrapper[4771]: I1011 10:53:05.551641 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnm9t\" (UniqueName: \"kubernetes.io/projected/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kube-api-access-fnm9t\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551740 master-1 kubenswrapper[4771]: I1011 10:53:05.551727 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ff4ce29-e30c-42fb-a4c8-2f59d5ba9d6a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^050fd1fe-ba08-48fe-9f1b-0beca688083e\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551826 master-1 kubenswrapper[4771]: I1011 10:53:05.551809 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.551929 master-1 kubenswrapper[4771]: I1011 10:53:05.551843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654209 master-1 kubenswrapper[4771]: I1011 10:53:05.653966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654209 master-1 kubenswrapper[4771]: I1011 10:53:05.654107 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654762 master-1 kubenswrapper[4771]: I1011 10:53:05.654180 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654762 master-1 kubenswrapper[4771]: I1011 10:53:05.654591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnm9t\" (UniqueName: \"kubernetes.io/projected/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kube-api-access-fnm9t\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654762 master-1 kubenswrapper[4771]: I1011 10:53:05.654692 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ff4ce29-e30c-42fb-a4c8-2f59d5ba9d6a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^050fd1fe-ba08-48fe-9f1b-0beca688083e\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654893 master-1 kubenswrapper[4771]: I1011 10:53:05.654774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654893 master-1 kubenswrapper[4771]: I1011 10:53:05.654801 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654893 master-1 kubenswrapper[4771]: I1011 10:53:05.654876 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.654990 master-1 kubenswrapper[4771]: I1011 10:53:05.654944 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-secrets\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.655955 master-1 kubenswrapper[4771]: I1011 10:53:05.655717 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.655955 master-1 kubenswrapper[4771]: I1011 10:53:05.655724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.656119 master-1 kubenswrapper[4771]: I1011 10:53:05.656049 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.656898 master-1 kubenswrapper[4771]: I1011 10:53:05.656844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.657642 master-1 kubenswrapper[4771]: I1011 10:53:05.657604 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:05.657708 master-1 kubenswrapper[4771]: I1011 10:53:05.657654 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ff4ce29-e30c-42fb-a4c8-2f59d5ba9d6a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^050fd1fe-ba08-48fe-9f1b-0beca688083e\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dba4f94046e1f12f87d8083348b61fdbaae5a90360f23733cc71d53b91882c14/globalmount\"" pod="openstack/openstack-galera-0"
Oct 11 10:53:05.658709 master-1 kubenswrapper[4771]: I1011 10:53:05.658659 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.670947 master-1 kubenswrapper[4771]: I1011 10:53:05.670890 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-secrets\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.673307 master-1 kubenswrapper[4771]: I1011 10:53:05.673150 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.741275 master-1 kubenswrapper[4771]: I1011 10:53:05.741214 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnm9t\" (UniqueName: \"kubernetes.io/projected/3c0fb436-6e71-4a5a-844e-e8c8e83eacdd-kube-api-access-fnm9t\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0"
Oct 11 10:53:05.992673 master-2 kubenswrapper[4776]: I1011 10:53:05.992584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:06.521703 master-2 kubenswrapper[4776]: I1011 10:53:06.521607 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:06.730051 master-1 kubenswrapper[4771]: I1011 10:53:06.729995 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xvjvs"] Oct 11 10:53:06.759883 master-1 kubenswrapper[4771]: W1011 10:53:06.759746 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0617ee1_7793_4630_9daa_5d4d02f1c5fe.slice/crio-05cb2f5713429f2a19083b5387159f271af13f2214f8228eb364a1855c606d0e WatchSource:0}: Error finding container 05cb2f5713429f2a19083b5387159f271af13f2214f8228eb364a1855c606d0e: Status 404 returned error can't find the container with id 05cb2f5713429f2a19083b5387159f271af13f2214f8228eb364a1855c606d0e Oct 11 10:53:06.873624 master-1 kubenswrapper[4771]: I1011 10:53:06.873557 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ff4ce29-e30c-42fb-a4c8-2f59d5ba9d6a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^050fd1fe-ba08-48fe-9f1b-0beca688083e\") pod \"openstack-galera-0\" (UID: \"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd\") " pod="openstack/openstack-galera-0" Oct 11 10:53:06.940757 master-1 kubenswrapper[4771]: I1011 10:53:06.940686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mvzxp" event={"ID":"71b1c323-2ebf-4a37-9327-840d3f04eda1","Type":"ContainerStarted","Data":"b6fb75d7f46e4440adb60340f2b232acddf760ea45aaaef384a768bccc89f0ea"} Oct 11 10:53:06.942061 master-1 kubenswrapper[4771]: I1011 10:53:06.941998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xvjvs" event={"ID":"a0617ee1-7793-4630-9daa-5d4d02f1c5fe","Type":"ContainerStarted","Data":"05cb2f5713429f2a19083b5387159f271af13f2214f8228eb364a1855c606d0e"} Oct 11 10:53:06.944034 master-1 kubenswrapper[4771]: I1011 10:53:06.943985 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" event={"ID":"37015f12-0983-4016-9f76-6d0e3f641f28","Type":"ContainerStarted","Data":"b3e3b46fc901080e41385771d41f43fbd31766c94a2a5ea0b8a9cb3a8c03ad18"} Oct 11 10:53:06.944285 master-1 kubenswrapper[4771]: I1011 10:53:06.944245 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:53:06.987970 master-1 kubenswrapper[4771]: I1011 10:53:06.987907 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 10:53:07.192282 master-1 kubenswrapper[4771]: I1011 10:53:07.192201 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 11 10:53:07.244599 master-1 kubenswrapper[4771]: W1011 10:53:07.244534 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod831321b9_20ce_409b_8bdb_ec231aef5f35.slice/crio-a626dcc125802c613cc9d03770554cd54066251711845b038bb38fd2f77a5968 WatchSource:0}: Error finding container a626dcc125802c613cc9d03770554cd54066251711845b038bb38fd2f77a5968: Status 404 returned error can't find the container with id a626dcc125802c613cc9d03770554cd54066251711845b038bb38fd2f77a5968 Oct 11 10:53:07.873052 master-1 kubenswrapper[4771]: I1011 10:53:07.872938 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" podStartSLOduration=17.872900991 podStartE2EDuration="17.872900991s" podCreationTimestamp="2025-10-11 10:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:07.609632453 +0000 UTC m=+1619.583858894" watchObservedRunningTime="2025-10-11 10:53:07.872900991 +0000 UTC m=+1619.847127432" Oct 11 10:53:07.873590 master-1 kubenswrapper[4771]: I1011 10:53:07.873214 4771 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/openstack-galera-0"] Oct 11 10:53:07.907680 master-1 kubenswrapper[4771]: W1011 10:53:07.907621 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0fb436_6e71_4a5a_844e_e8c8e83eacdd.slice/crio-ca1dcb6fb520857d20bf0b01a4ba1243fb560719ce5a2f79cf56d4478eb48e6f WatchSource:0}: Error finding container ca1dcb6fb520857d20bf0b01a4ba1243fb560719ce5a2f79cf56d4478eb48e6f: Status 404 returned error can't find the container with id ca1dcb6fb520857d20bf0b01a4ba1243fb560719ce5a2f79cf56d4478eb48e6f Oct 11 10:53:07.952336 master-1 kubenswrapper[4771]: I1011 10:53:07.952292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7" event={"ID":"4ab25521-7fba-40c9-b3db-377b1d0ec7a1","Type":"ContainerStarted","Data":"ab1b69d2441c8c2ff63860b07a0aa63ef1fb0d5772284a7feadcbd7ce53fcb08"} Oct 11 10:53:07.952607 master-1 kubenswrapper[4771]: I1011 10:53:07.952594 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:07.954080 master-1 kubenswrapper[4771]: I1011 10:53:07.954002 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6fe99fba-e358-4203-a516-04b9ae19d789","Type":"ContainerStarted","Data":"7f8fc71d7ad02d8da77907079a53d04db1a0fb1212260a6e3e48d8f38e321946"} Oct 11 10:53:07.955678 master-1 kubenswrapper[4771]: I1011 10:53:07.955626 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3843977f-b9b9-4f98-9205-5dbe3113fa5e","Type":"ContainerStarted","Data":"3e0020e65364a58f1137aff3bd83a82293e06f3c004bc46674cb85c4c72b8928"} Oct 11 10:53:07.955749 master-1 kubenswrapper[4771]: I1011 10:53:07.955722 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 11 10:53:07.956928 master-1 kubenswrapper[4771]: I1011 10:53:07.956877 4771 generic.go:334] 
"Generic (PLEG): container finished" podID="71b1c323-2ebf-4a37-9327-840d3f04eda1" containerID="b6fb75d7f46e4440adb60340f2b232acddf760ea45aaaef384a768bccc89f0ea" exitCode=0 Oct 11 10:53:07.956995 master-1 kubenswrapper[4771]: I1011 10:53:07.956923 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mvzxp" event={"ID":"71b1c323-2ebf-4a37-9327-840d3f04eda1","Type":"ContainerDied","Data":"b6fb75d7f46e4440adb60340f2b232acddf760ea45aaaef384a768bccc89f0ea"} Oct 11 10:53:07.957896 master-1 kubenswrapper[4771]: I1011 10:53:07.957864 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd","Type":"ContainerStarted","Data":"ca1dcb6fb520857d20bf0b01a4ba1243fb560719ce5a2f79cf56d4478eb48e6f"} Oct 11 10:53:07.959090 master-1 kubenswrapper[4771]: I1011 10:53:07.959065 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"831321b9-20ce-409b-8bdb-ec231aef5f35","Type":"ContainerStarted","Data":"a626dcc125802c613cc9d03770554cd54066251711845b038bb38fd2f77a5968"} Oct 11 10:53:08.190613 master-1 kubenswrapper[4771]: I1011 10:53:08.190519 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mtzk7" podStartSLOduration=2.905738598 podStartE2EDuration="7.190496864s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.071044426 +0000 UTC m=+1614.045270867" lastFinishedPulling="2025-10-11 10:53:06.355802652 +0000 UTC m=+1618.330029133" observedRunningTime="2025-10-11 10:53:08.190210816 +0000 UTC m=+1620.164437267" watchObservedRunningTime="2025-10-11 10:53:08.190496864 +0000 UTC m=+1620.164723305" Oct 11 10:53:08.706094 master-1 kubenswrapper[4771]: I1011 10:53:08.705918 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.626226719 
podStartE2EDuration="13.705897399s" podCreationTimestamp="2025-10-11 10:52:55 +0000 UTC" firstStartedPulling="2025-10-11 10:53:01.245504935 +0000 UTC m=+1613.219731376" lastFinishedPulling="2025-10-11 10:53:06.325175615 +0000 UTC m=+1618.299402056" observedRunningTime="2025-10-11 10:53:08.702575902 +0000 UTC m=+1620.676802383" watchObservedRunningTime="2025-10-11 10:53:08.705897399 +0000 UTC m=+1620.680123840" Oct 11 10:53:08.983816 master-1 kubenswrapper[4771]: I1011 10:53:08.983685 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mvzxp" event={"ID":"71b1c323-2ebf-4a37-9327-840d3f04eda1","Type":"ContainerStarted","Data":"8c9042b1be70f7fdb6a80fca7cde394effe42af7205cc1fcf3a6254bf3330806"} Oct 11 10:53:08.991288 master-1 kubenswrapper[4771]: I1011 10:53:08.991059 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"831321b9-20ce-409b-8bdb-ec231aef5f35","Type":"ContainerStarted","Data":"46e81e63ab3ceec54c8e0da9448541aeaf71c73eb9783cb511b8ceaa6d4dbd06"} Oct 11 10:53:09.408637 master-2 kubenswrapper[4776]: I1011 10:53:09.408563 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-2"] Oct 11 10:53:09.409730 master-2 kubenswrapper[4776]: I1011 10:53:09.409681 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-2" Oct 11 10:53:09.412541 master-2 kubenswrapper[4776]: I1011 10:53:09.412467 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 11 10:53:09.414051 master-2 kubenswrapper[4776]: I1011 10:53:09.414025 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 11 10:53:09.414384 master-2 kubenswrapper[4776]: I1011 10:53:09.414358 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 11 10:53:09.414629 master-2 kubenswrapper[4776]: I1011 10:53:09.414605 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 11 10:53:09.455771 master-2 kubenswrapper[4776]: I1011 10:53:09.455416 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"] Oct 11 10:53:09.692793 master-2 kubenswrapper[4776]: I1011 10:53:09.692735 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.692793 master-2 kubenswrapper[4776]: I1011 10:53:09.692799 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692827 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692850 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692878 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692915 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692954 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692986 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.693041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794472 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794548 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794604 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794847 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod 
\"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.795479 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796232 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796248 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796550 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.797231 master-2 kubenswrapper[4776]: I1011 10:53:09.796987 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:09.797231 master-2 kubenswrapper[4776]: I1011 10:53:09.797008 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8dbd96f1521e66451c285a8c0c796fd9fb0ab49b3c5a56ee0fe73e1b546de5b3/globalmount\"" pod="openstack/openstack-galera-2" Oct 11 10:53:09.797940 master-2 kubenswrapper[4776]: I1011 10:53:09.797917 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.798235 master-2 kubenswrapper[4776]: I1011 10:53:09.798211 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.799441 master-2 kubenswrapper[4776]: I1011 10:53:09.799397 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.854002 master-2 kubenswrapper[4776]: I1011 10:53:09.851896 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: 
\"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:10.092926 master-2 kubenswrapper[4776]: I1011 10:53:10.092807 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"] Oct 11 10:53:10.366548 master-0 kubenswrapper[4790]: I1011 10:53:10.366380 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-1"] Oct 11 10:53:10.367941 master-0 kubenswrapper[4790]: I1011 10:53:10.367913 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-1" Oct 11 10:53:10.371280 master-0 kubenswrapper[4790]: I1011 10:53:10.371247 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 11 10:53:10.371578 master-0 kubenswrapper[4790]: I1011 10:53:10.371530 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 11 10:53:10.371820 master-0 kubenswrapper[4790]: I1011 10:53:10.371794 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 11 10:53:10.372098 master-0 kubenswrapper[4790]: I1011 10:53:10.372067 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 11 10:53:10.417991 master-0 kubenswrapper[4790]: I1011 10:53:10.417922 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-1"] Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.505816 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-galera-tls-certs\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.505882 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-generated\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.505906 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bdz\" (UniqueName: \"kubernetes.io/projected/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kube-api-access-k7bdz\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.505933 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-secrets\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.506006 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c3a1a9c-ad77-4ede-a234-173033baae18\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3ccd44-89a2-4e58-bc75-402f4b9b5935\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.506028 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-default\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1" Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 
10:53:10.506491 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kolla-config\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.506588 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-operator-scripts\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.506756 master-0 kubenswrapper[4790]: I1011 10:53:10.506668 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-combined-ca-bundle\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608065 master-0 kubenswrapper[4790]: I1011 10:53:10.608006 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kolla-config\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608065 master-0 kubenswrapper[4790]: I1011 10:53:10.608078 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-operator-scripts\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608123 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-combined-ca-bundle\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608155 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-galera-tls-certs\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608182 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bdz\" (UniqueName: \"kubernetes.io/projected/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kube-api-access-k7bdz\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608210 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-generated\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608236 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-secrets\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608297 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c3a1a9c-ad77-4ede-a234-173033baae18\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3ccd44-89a2-4e58-bc75-402f4b9b5935\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.608341 master-0 kubenswrapper[4790]: I1011 10:53:10.608317 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-default\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.609098 master-0 kubenswrapper[4790]: I1011 10:53:10.609063 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-generated\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.609377 master-0 kubenswrapper[4790]: I1011 10:53:10.609281 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kolla-config\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.609693 master-0 kubenswrapper[4790]: I1011 10:53:10.609639 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-config-data-default\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.610323 master-0 kubenswrapper[4790]: I1011 10:53:10.610246 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-operator-scripts\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.612120 master-0 kubenswrapper[4790]: I1011 10:53:10.612073 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:10.612201 master-0 kubenswrapper[4790]: I1011 10:53:10.612129 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c3a1a9c-ad77-4ede-a234-173033baae18\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3ccd44-89a2-4e58-bc75-402f4b9b5935\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/773ba8a3ebdf6ed1b03af49c5e8575a4250fd6ceb2d7c89ab924e3ac620fe81d/globalmount\"" pod="openstack/openstack-galera-1"
Oct 11 10:53:10.612685 master-0 kubenswrapper[4790]: I1011 10:53:10.612635 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-secrets\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.612928 master-0 kubenswrapper[4790]: I1011 10:53:10.612880 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-combined-ca-bundle\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.614130 master-0 kubenswrapper[4790]: I1011 10:53:10.614076 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-galera-tls-certs\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:10.680439 master-0 kubenswrapper[4790]: I1011 10:53:10.680362 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bdz\" (UniqueName: \"kubernetes.io/projected/ce689fd9-58ba-45f5-bec1-ff7b79e377ac-kube-api-access-k7bdz\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:11.082261 master-0 kubenswrapper[4790]: I1011 10:53:11.081577 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-1"]
Oct 11 10:53:11.134224 master-0 kubenswrapper[4790]: W1011 10:53:11.134078 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe929908_6474_451d_8b87_e4effd7c6de4.slice/crio-90758d7643e6263d46a1f152d835bc723beeacfae63fd6b9b412589cc960a694 WatchSource:0}: Error finding container 90758d7643e6263d46a1f152d835bc723beeacfae63fd6b9b412589cc960a694: Status 404 returned error can't find the container with id 90758d7643e6263d46a1f152d835bc723beeacfae63fd6b9b412589cc960a694
Oct 11 10:53:11.175255 master-2 kubenswrapper[4776]: I1011 10:53:11.175202 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2"
Oct 11 10:53:11.198808 master-1 kubenswrapper[4771]: I1011 10:53:11.198724 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"
Oct 11 10:53:11.212607 master-2 kubenswrapper[4776]: W1011 10:53:11.212541 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a8ba065_7ef6_4bab_b20a_3bb274c93fa0.slice/crio-b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0 WatchSource:0}: Error finding container b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0: Status 404 returned error can't find the container with id b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0
Oct 11 10:53:11.260760 master-2 kubenswrapper[4776]: I1011 10:53:11.260655 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-2"
Oct 11 10:53:11.376787 master-2 kubenswrapper[4776]: I1011 10:53:11.376709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0"}
Oct 11 10:53:11.472950 master-2 kubenswrapper[4776]: I1011 10:53:11.472873 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v"
Oct 11 10:53:11.557954 master-0 kubenswrapper[4790]: I1011 10:53:11.557874 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-52t2l" event={"ID":"8c164a4b-a2d5-4570-aed3-86dbb1f3d47c","Type":"ContainerStarted","Data":"b5a30090d52e04bd1585718bea7397f834cc2eb435df5a374c553c8fcde615e5"}
Oct 11 10:53:11.558760 master-0 kubenswrapper[4790]: I1011 10:53:11.558006 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-52t2l"
Oct 11 10:53:11.561260 master-0 kubenswrapper[4790]: I1011 10:53:11.561016 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dw8wx" event={"ID":"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0","Type":"ContainerStarted","Data":"ba831bf6dddeb3f64993c649a1f54543cd4a62c4464b0ea7cd2cf9d325aa66f4"}
Oct 11 10:53:11.563207 master-0 kubenswrapper[4790]: I1011 10:53:11.563142 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-1" event={"ID":"be929908-6474-451d-8b87-e4effd7c6de4","Type":"ContainerStarted","Data":"90758d7643e6263d46a1f152d835bc723beeacfae63fd6b9b412589cc960a694"}
Oct 11 10:53:11.566465 master-0 kubenswrapper[4790]: I1011 10:53:11.566428 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" event={"ID":"f97fbf89-0d03-4ed8-a0d2-4f796e705e20","Type":"ContainerStarted","Data":"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2"}
Oct 11 10:53:11.567652 master-0 kubenswrapper[4790]: I1011 10:53:11.567631 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6944757b7f-plhvq"
Oct 11 10:53:11.570897 master-0 kubenswrapper[4790]: I1011 10:53:11.570830 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-82rlt" event={"ID":"f4c572ba-fc98-4468-939a-bbe0eadb7b63","Type":"ContainerStarted","Data":"89db518426b8fdbecdfe04227dd0757e12bf32a3706a5be2c7e5de68c3e46acd"}
Oct 11 10:53:11.709073 master-0 kubenswrapper[4790]: I1011 10:53:11.708974 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-52t2l" podStartSLOduration=2.739323252 podStartE2EDuration="10.70894888s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.823870941 +0000 UTC m=+859.378331233" lastFinishedPulling="2025-10-11 10:53:10.793496569 +0000 UTC m=+867.347956861" observedRunningTime="2025-10-11 10:53:11.706860873 +0000 UTC m=+868.261321245" watchObservedRunningTime="2025-10-11 10:53:11.70894888 +0000 UTC m=+868.263409172"
Oct 11 10:53:11.888975 master-0 kubenswrapper[4790]: I1011 10:53:11.888877 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c3a1a9c-ad77-4ede-a234-173033baae18\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3ccd44-89a2-4e58-bc75-402f4b9b5935\") pod \"openstack-galera-1\" (UID: \"ce689fd9-58ba-45f5-bec1-ff7b79e377ac\") " pod="openstack/openstack-galera-1"
Oct 11 10:53:12.011645 master-2 kubenswrapper[4776]: I1011 10:53:12.011516 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 11 10:53:12.012983 master-2 kubenswrapper[4776]: I1011 10:53:12.012951 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.016303 master-2 kubenswrapper[4776]: I1011 10:53:12.016256 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 11 10:53:12.016487 master-2 kubenswrapper[4776]: I1011 10:53:12.016452 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 11 10:53:12.016564 master-2 kubenswrapper[4776]: I1011 10:53:12.016542 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 11 10:53:12.071996 master-2 kubenswrapper[4776]: I1011 10:53:12.071940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 11 10:53:12.080860 master-0 kubenswrapper[4790]: I1011 10:53:12.080777 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"]
Oct 11 10:53:12.197132 master-0 kubenswrapper[4790]: I1011 10:53:12.196985 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-1"
Oct 11 10:53:12.205213 master-2 kubenswrapper[4776]: I1011 10:53:12.204921 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"]
Oct 11 10:53:12.210152 master-1 kubenswrapper[4771]: I1011 10:53:12.210073 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"]
Oct 11 10:53:12.211308 master-1 kubenswrapper[4771]: I1011 10:53:12.211259 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.265309 master-0 kubenswrapper[4790]: I1011 10:53:12.265197 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-82rlt" podStartSLOduration=3.014561065 podStartE2EDuration="10.265173926s" podCreationTimestamp="2025-10-11 10:53:02 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.79662037 +0000 UTC m=+860.351080662" lastFinishedPulling="2025-10-11 10:53:11.047233231 +0000 UTC m=+867.601693523" observedRunningTime="2025-10-11 10:53:12.265083494 +0000 UTC m=+868.819543796" watchObservedRunningTime="2025-10-11 10:53:12.265173926 +0000 UTC m=+868.819634218"
Oct 11 10:53:12.273718 master-1 kubenswrapper[4771]: I1011 10:53:12.273626 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"]
Oct 11 10:53:12.318524 master-1 kubenswrapper[4771]: I1011 10:53:12.318450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.318788 master-1 kubenswrapper[4771]: I1011 10:53:12.318639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.318788 master-1 kubenswrapper[4771]: I1011 10:53:12.318691 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4pv\" (UniqueName: \"kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.336105 master-2 kubenswrapper[4776]: I1011 10:53:12.336048 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336105 master-2 kubenswrapper[4776]: I1011 10:53:12.336103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336210 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336233 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336269 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336303 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336321 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336487 master-2 kubenswrapper[4776]: I1011 10:53:12.336346 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.336487 master-2 kubenswrapper[4776]: I1011 10:53:12.336450 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.394405 master-2 kubenswrapper[4776]: I1011 10:53:12.394368 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"c9e5ea80cfca9480a9e66559a3e097434f14a45d5494bb45edefe34f299f1163"}
Oct 11 10:53:12.395838 master-2 kubenswrapper[4776]: I1011 10:53:12.395811 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm" event={"ID":"065373ca-8c0f-489c-a72e-4d1aee1263ba","Type":"ContainerStarted","Data":"2f405c7ed435c3f0da0862e4abd74ebd7954c57283604b84d8fd56ee58376df3"}
Oct 11 10:53:12.396074 master-2 kubenswrapper[4776]: I1011 10:53:12.396043 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8qhqm"
Oct 11 10:53:12.398623 master-2 kubenswrapper[4776]: I1011 10:53:12.398376 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerStarted","Data":"6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b"}
Oct 11 10:53:12.398623 master-2 kubenswrapper[4776]: I1011 10:53:12.398476 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 11 10:53:12.401420 master-2 kubenswrapper[4776]: I1011 10:53:12.401386 4776 generic.go:334] "Generic (PLEG): container finished" podID="894f72d0-cdc8-4904-b8a4-0e808ce0b855" containerID="a9b3fc0f39b790a2509ba4093c0b48a1cff8525f410d5e2ce8e8fd0c638fec0a" exitCode=0
Oct 11 10:53:12.401492 master-2 kubenswrapper[4776]: I1011 10:53:12.401454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerDied","Data":"a9b3fc0f39b790a2509ba4093c0b48a1cff8525f410d5e2ce8e8fd0c638fec0a"}
Oct 11 10:53:12.405489 master-2 kubenswrapper[4776]: I1011 10:53:12.405456 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kcjm9" event={"ID":"ebedfef5-9861-41cd-a97e-c59ff798091b","Type":"ContainerStarted","Data":"548b2d960258bc09cc9405ae02a280ce9e01bfa702edff63d1f914a2612eb61f"}
Oct 11 10:53:12.420531 master-1 kubenswrapper[4771]: I1011 10:53:12.420429 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.420813 master-1 kubenswrapper[4771]: I1011 10:53:12.420790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4pv\" (UniqueName: \"kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.421094 master-1 kubenswrapper[4771]: I1011 10:53:12.421075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.421523 master-1 kubenswrapper[4771]: I1011 10:53:12.421473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.422096 master-1 kubenswrapper[4771]: I1011 10:53:12.422064 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.438227 master-2 kubenswrapper[4776]: I1011 10:53:12.438104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438331 master-2 kubenswrapper[4776]: I1011 10:53:12.438232 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438331 master-2 kubenswrapper[4776]: I1011 10:53:12.438286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438429 master-2 kubenswrapper[4776]: I1011 10:53:12.438386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438429 master-2 kubenswrapper[4776]: I1011 10:53:12.438427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438510 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438630 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.440141 master-2 kubenswrapper[4776]: I1011 10:53:12.439631 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.440141 master-2 kubenswrapper[4776]: I1011 10:53:12.440093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.440736 master-2 kubenswrapper[4776]: I1011 10:53:12.440704 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.442113 master-2 kubenswrapper[4776]: I1011 10:53:12.441984 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:12.442189 master-2 kubenswrapper[4776]: I1011 10:53:12.442116 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8b49609e0b8df1fc3e0f10240da586e838b76c0525e8f270f27010349d1c9159/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.443553 master-2 kubenswrapper[4776]: I1011 10:53:12.443518 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.444344 master-2 kubenswrapper[4776]: I1011 10:53:12.444235 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.445259 master-2 kubenswrapper[4776]: I1011 10:53:12.445206 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.560211 master-0 kubenswrapper[4790]: I1011 10:53:12.559480 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" podStartSLOduration=11.347103393 podStartE2EDuration="24.559447762s" podCreationTimestamp="2025-10-11 10:52:48 +0000 UTC" firstStartedPulling="2025-10-11 10:52:49.019844924 +0000 UTC m=+845.574305226" lastFinishedPulling="2025-10-11 10:53:02.232189283 +0000 UTC m=+858.786649595" observedRunningTime="2025-10-11 10:53:12.552366596 +0000 UTC m=+869.106826888" watchObservedRunningTime="2025-10-11 10:53:12.559447762 +0000 UTC m=+869.113908074"
Oct 11 10:53:12.581340 master-0 kubenswrapper[4790]: I1011 10:53:12.581262 4790 generic.go:334] "Generic (PLEG): container finished" podID="c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0" containerID="ba831bf6dddeb3f64993c649a1f54543cd4a62c4464b0ea7cd2cf9d325aa66f4" exitCode=0
Oct 11 10:53:12.581605 master-0 kubenswrapper[4790]: I1011 10:53:12.581367 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dw8wx" event={"ID":"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0","Type":"ContainerDied","Data":"ba831bf6dddeb3f64993c649a1f54543cd4a62c4464b0ea7cd2cf9d325aa66f4"}
Oct 11 10:53:12.583566 master-0 kubenswrapper[4790]: I1011 10:53:12.583522 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-1" event={"ID":"be929908-6474-451d-8b87-e4effd7c6de4","Type":"ContainerStarted","Data":"ec0350a7355f6c5da814acb3e4b50a985a39478e4987d701f3bdf168b3a6530a"}
Oct 11 10:53:12.585689 master-0 kubenswrapper[4790]: I1011 10:53:12.585624 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c13cb0d1-c50f-44fa-824a-46ece423a7cc","Type":"ContainerStarted","Data":"c4be72b5de1183ec2ba8473fed3fe3c9aa390d6a7ab90458f3ecfc26bf72839f"}
Oct 11 10:53:12.613656 master-2 kubenswrapper[4776]: I1011 10:53:12.613591 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:12.614189 master-1 kubenswrapper[4771]: I1011 10:53:12.614112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4pv\" (UniqueName: \"kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv\") pod \"dnsmasq-dns-86d565bb9-85bsq\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") " pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.703345 master-2 kubenswrapper[4776]: I1011 10:53:12.703275 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8qhqm" podStartSLOduration=3.196463907 podStartE2EDuration="11.703253062s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.709795051 +0000 UTC m=+1617.494221760" lastFinishedPulling="2025-10-11 10:53:11.216584206 +0000 UTC m=+1626.001010915" observedRunningTime="2025-10-11 10:53:12.664622095 +0000 UTC m=+1627.449048804" watchObservedRunningTime="2025-10-11 10:53:12.703253062 +0000 UTC m=+1627.487679771"
Oct 11 10:53:12.829034 master-1 kubenswrapper[4771]: I1011 10:53:12.828920 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:12.849898 master-2 kubenswrapper[4776]: I1011 10:53:12.849817 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.10464565 podStartE2EDuration="15.849799361s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.726222286 +0000 UTC m=+1617.510648995" lastFinishedPulling="2025-10-11 10:53:11.471375997 +0000 UTC m=+1626.255802706" observedRunningTime="2025-10-11 10:53:12.84460599 +0000 UTC m=+1627.629032699" watchObservedRunningTime="2025-10-11 10:53:12.849799361 +0000 UTC m=+1627.634226070"
Oct 11 10:53:12.906576 master-1 kubenswrapper[4771]: I1011 10:53:12.906497 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-1"]
Oct 11 10:53:12.907863 master-1 kubenswrapper[4771]: I1011 10:53:12.907823 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-1"
Oct 11 10:53:12.911841 master-1 kubenswrapper[4771]: I1011 10:53:12.911803 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 11 10:53:12.912128 master-1 kubenswrapper[4771]: I1011 10:53:12.912095 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 11 10:53:12.912825 master-1 kubenswrapper[4771]: I1011 10:53:12.912785 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 11 10:53:13.006364 master-0 kubenswrapper[4790]: I1011 10:53:13.006307 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-1"]
Oct 11 10:53:13.009390 master-1 kubenswrapper[4771]: I1011 10:53:13.009200 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-1"]
Oct 11 10:53:13.011715 master-2 kubenswrapper[4776]: I1011 10:53:13.011568 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kcjm9" podStartSLOduration=3.361600166 podStartE2EDuration="11.011549942s" podCreationTimestamp="2025-10-11 10:53:02 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.839771718 +0000 UTC m=+1618.624198427" lastFinishedPulling="2025-10-11 10:53:11.489721494 +0000 UTC m=+1626.274148203" observedRunningTime="2025-10-11 10:53:12.997126581 +0000 UTC m=+1627.781553290" watchObservedRunningTime="2025-10-11 10:53:13.011549942 +0000 UTC m=+1627.795976651"
Oct 11 10:53:13.013172 master-0 kubenswrapper[4790]: W1011 10:53:13.013120 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce689fd9_58ba_45f5_bec1_ff7b79e377ac.slice/crio-4fbef24be0c8b38f770cf0fc87b71794ae38966ded791ade342b14b09efe4324 WatchSource:0}: Error finding container 4fbef24be0c8b38f770cf0fc87b71794ae38966ded791ade342b14b09efe4324: Status 404 returned error can't find the container with id 4fbef24be0c8b38f770cf0fc87b71794ae38966ded791ade342b14b09efe4324
Oct 11 10:53:13.038896 master-1 kubenswrapper[4771]: I1011 10:53:13.038648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8z4\" (UniqueName: \"kubernetes.io/projected/66c0cd85-28ea-42de-8432-8803026d3124-kube-api-access-zv8z4\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1"
Oct 11 10:53:13.038896 master-1 kubenswrapper[4771]: I1011 10:53:13.038756 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1"
Oct 11 10:53:13.038896 master-1 kubenswrapper[4771]: I1011 10:53:13.038804 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a83a0ef9-545f-41b1-a315-e924a92d6f81\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c48462de-767c-4d18-883d-e2a0b148d485\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.038896 master-1 kubenswrapper[4771]: I1011 10:53:13.038852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.039219 master-1 kubenswrapper[4771]: I1011 10:53:13.038912 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.039219 master-1 kubenswrapper[4771]: I1011 10:53:13.038968 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-secrets\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.039219 master-1 kubenswrapper[4771]: I1011 10:53:13.038998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-operator-scripts\") pod \"openstack-cell1-galera-1\" (UID: 
\"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.039219 master-1 kubenswrapper[4771]: I1011 10:53:13.039036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-combined-ca-bundle\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.039219 master-1 kubenswrapper[4771]: I1011 10:53:13.039072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66c0cd85-28ea-42de-8432-8803026d3124-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143427 master-1 kubenswrapper[4771]: I1011 10:53:13.143383 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143442 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-secrets\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143464 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-operator-scripts\") pod \"openstack-cell1-galera-1\" 
(UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-combined-ca-bundle\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143509 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66c0cd85-28ea-42de-8432-8803026d3124-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143568 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8z4\" (UniqueName: \"kubernetes.io/projected/66c0cd85-28ea-42de-8432-8803026d3124-kube-api-access-zv8z4\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143597 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143625 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a83a0ef9-545f-41b1-a315-e924a92d6f81\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c48462de-767c-4d18-883d-e2a0b148d485\") pod 
\"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.143640 master-1 kubenswrapper[4771]: I1011 10:53:13.143647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.144233 master-1 kubenswrapper[4771]: I1011 10:53:13.144131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66c0cd85-28ea-42de-8432-8803026d3124-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.146253 master-1 kubenswrapper[4771]: I1011 10:53:13.146206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.146911 master-1 kubenswrapper[4771]: I1011 10:53:13.146874 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:13.146975 master-1 kubenswrapper[4771]: I1011 10:53:13.146921 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a83a0ef9-545f-41b1-a315-e924a92d6f81\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c48462de-767c-4d18-883d-e2a0b148d485\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/937b2836a25f50d4b06480c0eec40bfcaffaa9f58948ca2aaf81b69d8b3600d7/globalmount\"" pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.148641 master-1 kubenswrapper[4771]: I1011 10:53:13.148601 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.150461 master-1 kubenswrapper[4771]: I1011 10:53:13.150432 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c0cd85-28ea-42de-8432-8803026d3124-operator-scripts\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.153668 master-1 kubenswrapper[4771]: I1011 10:53:13.153645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.153800 master-1 kubenswrapper[4771]: I1011 10:53:13.153740 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-secrets\") pod 
\"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.154343 master-1 kubenswrapper[4771]: I1011 10:53:13.154304 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0cd85-28ea-42de-8432-8803026d3124-combined-ca-bundle\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.171436 master-1 kubenswrapper[4771]: I1011 10:53:13.169780 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8z4\" (UniqueName: \"kubernetes.io/projected/66c0cd85-28ea-42de-8432-8803026d3124-kube-api-access-zv8z4\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:13.411103 master-1 kubenswrapper[4771]: I1011 10:53:13.411022 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"] Oct 11 10:53:13.416757 master-2 kubenswrapper[4776]: I1011 10:53:13.416694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3"} Oct 11 10:53:13.419472 master-2 kubenswrapper[4776]: I1011 10:53:13.419433 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"9d2238cd76bcd430302dc7e823623c13022114d0eab4f21df80493efa4fd846b"} Oct 11 10:53:13.419472 master-2 kubenswrapper[4776]: I1011 10:53:13.419472 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" 
event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"53ee7d18ef164052bc930ceab96f754b7c6524fc3805bf0c6265c2a82527031f"} Oct 11 10:53:13.419635 master-2 kubenswrapper[4776]: I1011 10:53:13.419596 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:13.421920 master-2 kubenswrapper[4776]: I1011 10:53:13.421892 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860"} Oct 11 10:53:13.425883 master-1 kubenswrapper[4771]: W1011 10:53:13.425813 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae0f8e3_9e87_45b8_8313_a0b65cf33106.slice/crio-1af26e0c3df30b750253930a520b32ea24880275e960e13e9910257c86f202ff WatchSource:0}: Error finding container 1af26e0c3df30b750253930a520b32ea24880275e960e13e9910257c86f202ff: Status 404 returned error can't find the container with id 1af26e0c3df30b750253930a520b32ea24880275e960e13e9910257c86f202ff Oct 11 10:53:13.533332 master-2 kubenswrapper[4776]: I1011 10:53:13.530963 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4m6km" podStartSLOduration=6.4354767299999995 podStartE2EDuration="12.530945939s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.395081879 +0000 UTC m=+1618.179508588" lastFinishedPulling="2025-10-11 10:53:09.490551088 +0000 UTC m=+1624.274977797" observedRunningTime="2025-10-11 10:53:13.480866003 +0000 UTC m=+1628.265292712" watchObservedRunningTime="2025-10-11 10:53:13.530945939 +0000 UTC m=+1628.315372648" Oct 11 10:53:13.598552 master-0 kubenswrapper[4790]: I1011 10:53:13.598464 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-1" event={"ID":"ce689fd9-58ba-45f5-bec1-ff7b79e377ac","Type":"ContainerStarted","Data":"4fbef24be0c8b38f770cf0fc87b71794ae38966ded791ade342b14b09efe4324"} Oct 11 10:53:13.608318 master-0 kubenswrapper[4790]: I1011 10:53:13.608232 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dw8wx" event={"ID":"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0","Type":"ContainerStarted","Data":"e05059712b2d4bb66b7f19403495dbad2195c6a1d67625b3268de7b4d7fdeb2b"} Oct 11 10:53:13.608536 master-0 kubenswrapper[4790]: I1011 10:53:13.608340 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:13.608536 master-0 kubenswrapper[4790]: I1011 10:53:13.608354 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dw8wx" event={"ID":"c3f53fa4-305a-45b2-8cf3-f4e97d6a5ea0","Type":"ContainerStarted","Data":"f451a78016f5347b80416239e80aefa101a979e977d58b1ffeeba6fb0a2ad963"} Oct 11 10:53:13.610830 master-0 kubenswrapper[4790]: I1011 10:53:13.610549 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-1" event={"ID":"1ebbbaaf-f668-4c40-b437-2e730aef3912","Type":"ContainerStarted","Data":"dc657647e19f7fc9de9910df8248a796ef7cb75c8ad8bbbaa74f15d4511985e3"} Oct 11 10:53:13.610830 master-0 kubenswrapper[4790]: I1011 10:53:13.610750 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="dnsmasq-dns" containerID="cri-o://92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2" gracePeriod=10 Oct 11 10:53:13.653033 master-0 kubenswrapper[4790]: I1011 10:53:13.652919 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dw8wx" podStartSLOduration=5.96967277 podStartE2EDuration="12.652896096s" 
podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:04.11015318 +0000 UTC m=+860.664613472" lastFinishedPulling="2025-10-11 10:53:10.793376506 +0000 UTC m=+867.347836798" observedRunningTime="2025-10-11 10:53:13.651407746 +0000 UTC m=+870.205868038" watchObservedRunningTime="2025-10-11 10:53:13.652896096 +0000 UTC m=+870.207356378" Oct 11 10:53:13.667309 master-2 kubenswrapper[4776]: I1011 10:53:13.667237 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:13.835366 master-2 kubenswrapper[4776]: I1011 10:53:13.835306 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:13.887545 master-0 kubenswrapper[4790]: I1011 10:53:13.887463 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-2"] Oct 11 10:53:13.890523 master-0 kubenswrapper[4790]: I1011 10:53:13.890465 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.893105 master-0 kubenswrapper[4790]: I1011 10:53:13.893047 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 11 10:53:13.893897 master-0 kubenswrapper[4790]: I1011 10:53:13.893842 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 11 10:53:13.902376 master-0 kubenswrapper[4790]: I1011 10:53:13.902308 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-2"] Oct 11 10:53:13.906260 master-0 kubenswrapper[4790]: I1011 10:53:13.906213 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 11 10:53:13.987537 master-0 kubenswrapper[4790]: I1011 10:53:13.987415 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-kolla-config\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987537 master-0 kubenswrapper[4790]: I1011 10:53:13.987517 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-secrets\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987552 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-galera-tls-certs\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " 
pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987578 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-default\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987616 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6h8\" (UniqueName: \"kubernetes.io/projected/5059e0b0-120f-4498-8076-e3e9239b5688-kube-api-access-pf6h8\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987640 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-operator-scripts\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987670 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c45139fd-bd49-4792-a4db-ab4cf7143532\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0ad3b211-066d-4f82-9b08-1cd78cfd71a3\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987705 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-generated\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:13.987820 master-0 kubenswrapper[4790]: I1011 10:53:13.987770 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-combined-ca-bundle\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.047670 master-1 kubenswrapper[4771]: I1011 10:53:14.047569 4771 generic.go:334] "Generic (PLEG): container finished" podID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerID="fd43d772f12b2955515b0207673e261220c08cfd99b72815c0e4dd5a30cfab8c" exitCode=0 Oct 11 10:53:14.048327 master-1 kubenswrapper[4771]: I1011 10:53:14.047764 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" event={"ID":"0ae0f8e3-9e87-45b8-8313-a0b65cf33106","Type":"ContainerDied","Data":"fd43d772f12b2955515b0207673e261220c08cfd99b72815c0e4dd5a30cfab8c"} Oct 11 10:53:14.048546 master-1 kubenswrapper[4771]: I1011 10:53:14.048339 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" event={"ID":"0ae0f8e3-9e87-45b8-8313-a0b65cf33106","Type":"ContainerStarted","Data":"1af26e0c3df30b750253930a520b32ea24880275e960e13e9910257c86f202ff"} Oct 11 10:53:14.053460 master-1 kubenswrapper[4771]: I1011 10:53:14.053351 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mvzxp" event={"ID":"71b1c323-2ebf-4a37-9327-840d3f04eda1","Type":"ContainerStarted","Data":"63099c14644e997d6b7af177d0bc5a9cb1666fc0b8bfdde4fc39c2515570d309"} Oct 11 10:53:14.053752 master-1 kubenswrapper[4771]: I1011 10:53:14.053668 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:14.056075 master-1 kubenswrapper[4771]: I1011 10:53:14.055274 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd","Type":"ContainerStarted","Data":"d123df26764927b959f6821afc87dfe457d42e15c12942ada27bc877e1b79e1d"} Oct 11 10:53:14.061723 master-1 kubenswrapper[4771]: I1011 10:53:14.058298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xvjvs" event={"ID":"a0617ee1-7793-4630-9daa-5d4d02f1c5fe","Type":"ContainerStarted","Data":"880b4d3b6aa1c158a513f02fc144ecdba62fe2fbc3a04f13a4b6bff459d1e020"} Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.089888 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-galera-tls-certs\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.089948 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-default\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.089996 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6h8\" (UniqueName: \"kubernetes.io/projected/5059e0b0-120f-4498-8076-e3e9239b5688-kube-api-access-pf6h8\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090132 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-operator-scripts\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090169 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c45139fd-bd49-4792-a4db-ab4cf7143532\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0ad3b211-066d-4f82-9b08-1cd78cfd71a3\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090230 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-generated\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090267 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-combined-ca-bundle\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090302 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-kolla-config\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.090365 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-secrets\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.092956 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-operator-scripts\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.094512 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-secrets\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.094962 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-default\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.095030 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5059e0b0-120f-4498-8076-e3e9239b5688-config-data-generated\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.104950 master-0 kubenswrapper[4790]: I1011 10:53:14.095648 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5059e0b0-120f-4498-8076-e3e9239b5688-kolla-config\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.109811 master-0 kubenswrapper[4790]: I1011 10:53:14.108612 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-galera-tls-certs\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.110224 master-0 kubenswrapper[4790]: I1011 10:53:14.110168 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:14.110312 master-0 kubenswrapper[4790]: I1011 10:53:14.110244 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c45139fd-bd49-4792-a4db-ab4cf7143532\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0ad3b211-066d-4f82-9b08-1cd78cfd71a3\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2d27f890eb1486f45bcfb322dc6233cd405573b410a561ca95e0dd6cc109f5f4/globalmount\"" pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.123377 master-0 kubenswrapper[4790]: I1011 10:53:14.122829 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5059e0b0-120f-4498-8076-e3e9239b5688-combined-ca-bundle\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.123682 master-0 kubenswrapper[4790]: I1011 10:53:14.123641 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6h8\" (UniqueName: 
\"kubernetes.io/projected/5059e0b0-120f-4498-8076-e3e9239b5688-kube-api-access-pf6h8\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:14.155091 master-1 kubenswrapper[4771]: I1011 10:53:14.154975 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xvjvs" podStartSLOduration=5.980741034 podStartE2EDuration="12.154948073s" podCreationTimestamp="2025-10-11 10:53:02 +0000 UTC" firstStartedPulling="2025-10-11 10:53:06.762219839 +0000 UTC m=+1618.736446280" lastFinishedPulling="2025-10-11 10:53:12.936426848 +0000 UTC m=+1624.910653319" observedRunningTime="2025-10-11 10:53:14.150926496 +0000 UTC m=+1626.125152997" watchObservedRunningTime="2025-10-11 10:53:14.154948073 +0000 UTC m=+1626.129174554" Oct 11 10:53:14.157844 master-0 kubenswrapper[4790]: I1011 10:53:14.157436 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:53:14.193494 master-0 kubenswrapper[4790]: I1011 10:53:14.193016 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wx2k\" (UniqueName: \"kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k\") pod \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " Oct 11 10:53:14.193494 master-0 kubenswrapper[4790]: I1011 10:53:14.193132 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config\") pod \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " Oct 11 10:53:14.193494 master-0 kubenswrapper[4790]: I1011 10:53:14.193231 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc\") pod \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\" (UID: \"f97fbf89-0d03-4ed8-a0d2-4f796e705e20\") " Oct 11 10:53:14.199056 master-1 kubenswrapper[4771]: I1011 10:53:14.188545 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mvzxp" podStartSLOduration=9.46569798 podStartE2EDuration="13.188525973s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.617670826 +0000 UTC m=+1614.591897267" lastFinishedPulling="2025-10-11 10:53:06.340498769 +0000 UTC m=+1618.314725260" observedRunningTime="2025-10-11 10:53:14.186565447 +0000 UTC m=+1626.160791928" watchObservedRunningTime="2025-10-11 10:53:14.188525973 +0000 UTC m=+1626.162752414" Oct 11 10:53:14.212090 master-0 kubenswrapper[4790]: I1011 10:53:14.197847 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k" (OuterVolumeSpecName: "kube-api-access-7wx2k") pod "f97fbf89-0d03-4ed8-a0d2-4f796e705e20" (UID: "f97fbf89-0d03-4ed8-a0d2-4f796e705e20"). InnerVolumeSpecName "kube-api-access-7wx2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:14.212871 master-0 kubenswrapper[4790]: I1011 10:53:14.212825 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wx2k\" (UniqueName: \"kubernetes.io/projected/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-kube-api-access-7wx2k\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:14.230645 master-0 kubenswrapper[4790]: I1011 10:53:14.230569 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config" (OuterVolumeSpecName: "config") pod "f97fbf89-0d03-4ed8-a0d2-4f796e705e20" (UID: "f97fbf89-0d03-4ed8-a0d2-4f796e705e20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:14.233139 master-0 kubenswrapper[4790]: I1011 10:53:14.233120 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f97fbf89-0d03-4ed8-a0d2-4f796e705e20" (UID: "f97fbf89-0d03-4ed8-a0d2-4f796e705e20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:14.272605 master-2 kubenswrapper[4776]: I1011 10:53:14.272557 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 11 10:53:14.276764 master-2 kubenswrapper[4776]: W1011 10:53:14.276615 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37aeaa8_8c12_4ab0_a4a6_89b3c92886d9.slice/crio-742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec WatchSource:0}: Error finding container 742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec: Status 404 returned error can't find the container with id 742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec Oct 11 10:53:14.316941 master-0 kubenswrapper[4790]: I1011 10:53:14.316874 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:14.316941 master-0 kubenswrapper[4790]: I1011 10:53:14.316926 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97fbf89-0d03-4ed8-a0d2-4f796e705e20-dns-svc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:14.436800 master-2 kubenswrapper[4776]: I1011 10:53:14.436721 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec"} Oct 11 10:53:14.440706 master-2 kubenswrapper[4776]: I1011 10:53:14.440612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288"} Oct 11 10:53:14.441593 master-2 kubenswrapper[4776]: I1011 10:53:14.441552 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:14.620022 master-0 kubenswrapper[4790]: I1011 10:53:14.619965 4790 generic.go:334] "Generic (PLEG): container finished" podID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerID="92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2" exitCode=0 Oct 11 10:53:14.621212 master-0 kubenswrapper[4790]: I1011 10:53:14.620027 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" event={"ID":"f97fbf89-0d03-4ed8-a0d2-4f796e705e20","Type":"ContainerDied","Data":"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2"} Oct 11 10:53:14.621323 master-0 kubenswrapper[4790]: I1011 10:53:14.621308 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" event={"ID":"f97fbf89-0d03-4ed8-a0d2-4f796e705e20","Type":"ContainerDied","Data":"a4a9276558748dc6921cea20229a43f4acf57679858b58aec74901d23d4a131c"} Oct 11 10:53:14.621398 master-0 kubenswrapper[4790]: I1011 10:53:14.621387 4790 scope.go:117] "RemoveContainer" containerID="92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2" Oct 11 10:53:14.621498 master-0 kubenswrapper[4790]: I1011 10:53:14.621486 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dw8wx" Oct 11 10:53:14.621562 master-0 
kubenswrapper[4790]: I1011 10:53:14.620067 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6944757b7f-plhvq" Oct 11 10:53:14.639875 master-0 kubenswrapper[4790]: I1011 10:53:14.639845 4790 scope.go:117] "RemoveContainer" containerID="00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa" Oct 11 10:53:14.658485 master-0 kubenswrapper[4790]: I1011 10:53:14.658418 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"] Oct 11 10:53:14.663402 master-0 kubenswrapper[4790]: I1011 10:53:14.663381 4790 scope.go:117] "RemoveContainer" containerID="92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2" Oct 11 10:53:14.663745 master-0 kubenswrapper[4790]: E1011 10:53:14.663705 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2\": container with ID starting with 92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2 not found: ID does not exist" containerID="92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2" Oct 11 10:53:14.663805 master-0 kubenswrapper[4790]: I1011 10:53:14.663756 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2"} err="failed to get container status \"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2\": rpc error: code = NotFound desc = could not find container \"92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2\": container with ID starting with 92b71e314ad33488ce83aa439ada543420fe070228f28b65e21677eb1d95f2b2 not found: ID does not exist" Oct 11 10:53:14.663805 master-0 kubenswrapper[4790]: I1011 10:53:14.663783 4790 scope.go:117] "RemoveContainer" 
containerID="00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa" Oct 11 10:53:14.664077 master-0 kubenswrapper[4790]: E1011 10:53:14.664056 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa\": container with ID starting with 00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa not found: ID does not exist" containerID="00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa" Oct 11 10:53:14.664134 master-0 kubenswrapper[4790]: I1011 10:53:14.664080 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa"} err="failed to get container status \"00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa\": rpc error: code = NotFound desc = could not find container \"00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa\": container with ID starting with 00ae648526dd85e825feb5e90f78264a37eaa038e7b0cc07f203779fbd465caa not found: ID does not exist" Oct 11 10:53:14.710460 master-0 kubenswrapper[4790]: I1011 10:53:14.710336 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6944757b7f-plhvq"] Oct 11 10:53:15.077173 master-1 kubenswrapper[4771]: I1011 10:53:15.077119 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" event={"ID":"0ae0f8e3-9e87-45b8-8313-a0b65cf33106","Type":"ContainerStarted","Data":"14ed7d218f9217fbceb4436ac3f26fb55858bf77044f44bd18a2d4ffe4eacee3"} Oct 11 10:53:15.077664 master-1 kubenswrapper[4771]: I1011 10:53:15.077586 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" Oct 11 10:53:15.078189 master-1 kubenswrapper[4771]: I1011 10:53:15.078131 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-mvzxp" Oct 11 10:53:15.088924 master-1 kubenswrapper[4771]: I1011 10:53:15.088863 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a83a0ef9-545f-41b1-a315-e924a92d6f81\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c48462de-767c-4d18-883d-e2a0b148d485\") pod \"openstack-cell1-galera-1\" (UID: \"66c0cd85-28ea-42de-8432-8803026d3124\") " pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:15.108726 master-1 kubenswrapper[4771]: I1011 10:53:15.108637 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" podStartSLOduration=3.108616252 podStartE2EDuration="3.108616252s" podCreationTimestamp="2025-10-11 10:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:15.102887367 +0000 UTC m=+1627.077113838" watchObservedRunningTime="2025-10-11 10:53:15.108616252 +0000 UTC m=+1627.082842703" Oct 11 10:53:15.336965 master-1 kubenswrapper[4771]: I1011 10:53:15.336855 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:15.400885 master-0 kubenswrapper[4790]: I1011 10:53:15.400831 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c45139fd-bd49-4792-a4db-ab4cf7143532\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0ad3b211-066d-4f82-9b08-1cd78cfd71a3\") pod \"openstack-cell1-galera-2\" (UID: \"5059e0b0-120f-4498-8076-e3e9239b5688\") " pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:15.471543 master-1 kubenswrapper[4771]: I1011 10:53:15.471436 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 11 10:53:15.475906 master-1 kubenswrapper[4771]: I1011 10:53:15.475812 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.482272 master-1 kubenswrapper[4771]: I1011 10:53:15.479262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 11 10:53:15.482272 master-1 kubenswrapper[4771]: I1011 10:53:15.479722 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 11 10:53:15.482272 master-1 kubenswrapper[4771]: I1011 10:53:15.479720 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 11 10:53:15.482272 master-1 kubenswrapper[4771]: I1011 10:53:15.480945 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 11 10:53:15.489917 master-1 kubenswrapper[4771]: I1011 10:53:15.489428 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 11 10:53:15.494266 master-1 kubenswrapper[4771]: I1011 10:53:15.493342 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 11 10:53:15.527176 master-1 kubenswrapper[4771]: I1011 10:53:15.527111 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 11 10:53:15.605538 master-2 kubenswrapper[4776]: I1011 10:53:15.605474 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-1"] Oct 11 10:53:15.606512 master-2 kubenswrapper[4776]: I1011 10:53:15.606479 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-1" Oct 11 10:53:15.611751 master-2 kubenswrapper[4776]: I1011 10:53:15.611726 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 11 10:53:15.611869 master-2 kubenswrapper[4776]: I1011 10:53:15.611828 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 11 10:53:15.614130 master-1 kubenswrapper[4771]: I1011 10:53:15.613928 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614130 master-1 kubenswrapper[4771]: I1011 10:53:15.614002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614469 master-1 kubenswrapper[4771]: I1011 10:53:15.614172 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614469 master-1 kubenswrapper[4771]: I1011 10:53:15.614342 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614469 master-1 kubenswrapper[4771]: I1011 10:53:15.614459 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614618 master-1 kubenswrapper[4771]: I1011 10:53:15.614495 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.614618 master-1 kubenswrapper[4771]: I1011 10:53:15.614522 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq956\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.615242 master-1 kubenswrapper[4771]: I1011 10:53:15.614984 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.623116 master-2 kubenswrapper[4776]: I1011 10:53:15.623067 4776 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"] Oct 11 10:53:15.710838 master-0 kubenswrapper[4790]: I1011 10:53:15.710545 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.716964 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717066 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717117 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-lq956\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717178 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.717202 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.719730 master-1 kubenswrapper[4771]: I1011 10:53:15.718645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.720677 master-1 
kubenswrapper[4771]: I1011 10:53:15.720626 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:15.720747 master-1 kubenswrapper[4771]: I1011 10:53:15.720697 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/37699e2b75858a97c8af891d5e1a76727de9abb22a62dc041bfd38b0b8d8c160/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.720929 master-1 kubenswrapper[4771]: I1011 10:53:15.720876 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.721889 master-1 kubenswrapper[4771]: I1011 10:53:15.721844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.723094 master-1 kubenswrapper[4771]: I1011 10:53:15.722850 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.723201 master-1 kubenswrapper[4771]: I1011 10:53:15.723167 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.728831 master-1 kubenswrapper[4771]: I1011 10:53:15.728775 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.743822 master-1 kubenswrapper[4771]: I1011 10:53:15.743720 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq956\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0" Oct 11 10:53:15.801824 master-2 kubenswrapper[4776]: I1011 10:53:15.801769 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.801919 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802040 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802111 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.803194 master-0 kubenswrapper[4790]: I1011 10:53:15.803116 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 11 10:53:15.803466 master-0 kubenswrapper[4790]: E1011 10:53:15.803386 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="dnsmasq-dns" Oct 11 10:53:15.803466 master-0 kubenswrapper[4790]: I1011 10:53:15.803405 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="dnsmasq-dns" Oct 11 10:53:15.803466 master-0 kubenswrapper[4790]: E1011 10:53:15.803428 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="init" Oct 11 10:53:15.803466 master-0 kubenswrapper[4790]: I1011 10:53:15.803438 4790 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="init" Oct 11 10:53:15.803628 master-0 kubenswrapper[4790]: I1011 10:53:15.803558 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" containerName="dnsmasq-dns" Oct 11 10:53:15.804355 master-0 kubenswrapper[4790]: I1011 10:53:15.804332 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 11 10:53:15.813183 master-0 kubenswrapper[4790]: I1011 10:53:15.811982 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 11 10:53:15.813183 master-0 kubenswrapper[4790]: I1011 10:53:15.812018 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 11 10:53:15.813183 master-0 kubenswrapper[4790]: I1011 10:53:15.812018 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 11 10:53:15.820270 master-0 kubenswrapper[4790]: I1011 10:53:15.819777 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842212 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d84ced78-2c7e-4de1-b3cb-1356f0f0b1e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1409533a-19dc-4ab6-9c9b-35e2a16fe065\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0" Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842266 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0" 
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842304 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgl8\" (UniqueName: \"kubernetes.io/projected/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-kube-api-access-9qgl8\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842325 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842348 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842379 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842412 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.842533 master-0 kubenswrapper[4790]: I1011 10:53:15.842437 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.847790 master-1 kubenswrapper[4771]: I1011 10:53:15.847722 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-1"]
Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904044 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904101 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.905496 master-2 kubenswrapper[4776]: I1011 10:53:15.905446 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.905606 master-2 kubenswrapper[4776]: I1011 10:53:15.905575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.907555 master-2 kubenswrapper[4776]: I1011 10:53:15.907521 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.908921 master-2 kubenswrapper[4776]: I1011 10:53:15.908890 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.923901 master-2 kubenswrapper[4776]: I1011 10:53:15.923868 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1"
Oct 11 10:53:15.929455 master-2 kubenswrapper[4776]: I1011 10:53:15.929317 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-1"
Oct 11 10:53:15.943987 master-0 kubenswrapper[4790]: I1011 10:53:15.943894 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.943987 master-0 kubenswrapper[4790]: I1011 10:53:15.943954 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d84ced78-2c7e-4de1-b3cb-1356f0f0b1e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1409533a-19dc-4ab6-9c9b-35e2a16fe065\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.943987 master-0 kubenswrapper[4790]: I1011 10:53:15.943984 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgl8\" (UniqueName: \"kubernetes.io/projected/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-kube-api-access-9qgl8\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.943987 master-0 kubenswrapper[4790]: I1011 10:53:15.944004 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.944513 master-0 kubenswrapper[4790]: I1011 10:53:15.944026 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.944513 master-0 kubenswrapper[4790]: I1011 10:53:15.944054 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.944513 master-0 kubenswrapper[4790]: I1011 10:53:15.944082 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.944513 master-0 kubenswrapper[4790]: I1011 10:53:15.944108 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.947939 master-0 kubenswrapper[4790]: I1011 10:53:15.947890 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.951342 master-0 kubenswrapper[4790]: I1011 10:53:15.950410 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.951750 master-0 kubenswrapper[4790]: I1011 10:53:15.951522 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.951750 master-0 kubenswrapper[4790]: I1011 10:53:15.951586 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.954114 master-0 kubenswrapper[4790]: I1011 10:53:15.953542 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:15.954114 master-0 kubenswrapper[4790]: I1011 10:53:15.953602 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d84ced78-2c7e-4de1-b3cb-1356f0f0b1e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1409533a-19dc-4ab6-9c9b-35e2a16fe065\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/30d66e2fad6d0ab5943ae8ba29bb4bba33e71d3d5a88128992813110331d9b74/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.954528 master-0 kubenswrapper[4790]: I1011 10:53:15.954448 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.966574 master-0 kubenswrapper[4790]: I1011 10:53:15.966450 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:15.972289 master-0 kubenswrapper[4790]: I1011 10:53:15.972026 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgl8\" (UniqueName: \"kubernetes.io/projected/c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70-kube-api-access-9qgl8\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:16.090563 master-1 kubenswrapper[4771]: I1011 10:53:16.090452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"66c0cd85-28ea-42de-8432-8803026d3124","Type":"ContainerStarted","Data":"a314c1f4c176eb2e0cd74daae6e743d6300a11be6408bfc38cc9f82fcaeaf8f6"}
Oct 11 10:53:16.090563 master-1 kubenswrapper[4771]: I1011 10:53:16.090536 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"66c0cd85-28ea-42de-8432-8803026d3124","Type":"ContainerStarted","Data":"987c66295a0779686f133c30ffd38495bdaeed6a9bd1459e540ed23e5b590709"}
Oct 11 10:53:16.302302 master-0 kubenswrapper[4790]: I1011 10:53:16.302243 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97fbf89-0d03-4ed8-a0d2-4f796e705e20" path="/var/lib/kubelet/pods/f97fbf89-0d03-4ed8-a0d2-4f796e705e20/volumes"
Oct 11 10:53:16.900766 master-2 kubenswrapper[4776]: I1011 10:53:16.900718 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"]
Oct 11 10:53:16.904686 master-2 kubenswrapper[4776]: W1011 10:53:16.904637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92bf399_88cc_4b7b_8048_81fda1a2e172.slice/crio-cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457 WatchSource:0}: Error finding container cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457: Status 404 returned error can't find the container with id cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457
Oct 11 10:53:17.366519 master-0 kubenswrapper[4790]: I1011 10:53:17.365542 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d84ced78-2c7e-4de1-b3cb-1356f0f0b1e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1409533a-19dc-4ab6-9c9b-35e2a16fe065\") pod \"ovsdbserver-nb-0\" (UID: \"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:17.464794 master-2 kubenswrapper[4776]: I1011 10:53:17.464734 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 11 10:53:17.476843 master-2 kubenswrapper[4776]: I1011 10:53:17.476746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"f92bf399-88cc-4b7b-8048-81fda1a2e172","Type":"ContainerStarted","Data":"cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457"}
Oct 11 10:53:17.478474 master-2 kubenswrapper[4776]: I1011 10:53:17.478426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e"}
Oct 11 10:53:17.480728 master-2 kubenswrapper[4776]: I1011 10:53:17.480231 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf"}
Oct 11 10:53:17.639518 master-0 kubenswrapper[4790]: I1011 10:53:17.639426 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:17.725101 master-1 kubenswrapper[4771]: I1011 10:53:17.725013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:53:17.832568 master-1 kubenswrapper[4771]: I1011 10:53:17.832515 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Oct 11 10:53:17.835041 master-1 kubenswrapper[4771]: I1011 10:53:17.835017 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.838541 master-1 kubenswrapper[4771]: I1011 10:53:17.838472 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Oct 11 10:53:17.839024 master-1 kubenswrapper[4771]: I1011 10:53:17.838954 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Oct 11 10:53:17.847126 master-1 kubenswrapper[4771]: I1011 10:53:17.846975 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Oct 11 10:53:17.854153 master-1 kubenswrapper[4771]: I1011 10:53:17.853825 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Oct 11 10:53:17.898198 master-1 kubenswrapper[4771]: I1011 10:53:17.898127 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:53:17.900933 master-0 kubenswrapper[4790]: I1011 10:53:17.900866 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-2"]
Oct 11 10:53:17.907361 master-0 kubenswrapper[4790]: W1011 10:53:17.907302 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5059e0b0_120f_4498_8076_e3e9239b5688.slice/crio-b7980a4d132fab58838654556376c0979ed90f167353b3483e55d73bcabd1c9d WatchSource:0}: Error finding container b7980a4d132fab58838654556376c0979ed90f167353b3483e55d73bcabd1c9d: Status 404 returned error can't find the container with id b7980a4d132fab58838654556376c0979ed90f167353b3483e55d73bcabd1c9d
Oct 11 10:53:17.972915 master-1 kubenswrapper[4771]: I1011 10:53:17.972800 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.972915 master-1 kubenswrapper[4771]: I1011 10:53:17.972883 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-config\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973401 master-1 kubenswrapper[4771]: I1011 10:53:17.973223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973401 master-1 kubenswrapper[4771]: I1011 10:53:17.973309 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rkx\" (UniqueName: \"kubernetes.io/projected/8f9b018c-eb14-4c27-adcb-ba613238c78b-kube-api-access-t6rkx\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973401 master-1 kubenswrapper[4771]: I1011 10:53:17.973382 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973558 master-1 kubenswrapper[4771]: I1011 10:53:17.973420 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973558 master-1 kubenswrapper[4771]: I1011 10:53:17.973494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:17.973558 master-1 kubenswrapper[4771]: I1011 10:53:17.973545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca866366-3239-46ad-903c-e44f9f2ec0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f06a64ac-0910-4092-b155-298c9316de58\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.075640 master-1 kubenswrapper[4771]: I1011 10:53:18.075529 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ca866366-3239-46ad-903c-e44f9f2ec0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f06a64ac-0910-4092-b155-298c9316de58\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.075640 master-1 kubenswrapper[4771]: I1011 10:53:18.075635 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.075640 master-1 kubenswrapper[4771]: I1011 10:53:18.075662 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-config\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.076052 master-1 kubenswrapper[4771]: I1011 10:53:18.075741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.076052 master-1 kubenswrapper[4771]: I1011 10:53:18.075770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rkx\" (UniqueName: \"kubernetes.io/projected/8f9b018c-eb14-4c27-adcb-ba613238c78b-kube-api-access-t6rkx\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.076052 master-1 kubenswrapper[4771]: I1011 10:53:18.075798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.076052 master-1 kubenswrapper[4771]: I1011 10:53:18.075826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.076052 master-1 kubenswrapper[4771]: I1011 10:53:18.075855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.077806 master-1 kubenswrapper[4771]: I1011 10:53:18.077734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.078615 master-1 kubenswrapper[4771]: I1011 10:53:18.078540 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:18.078783 master-1 kubenswrapper[4771]: I1011 10:53:18.078626 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ca866366-3239-46ad-903c-e44f9f2ec0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f06a64ac-0910-4092-b155-298c9316de58\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3fdbc2eaaf604b357bf2e0714ac3de7e5d4c02faf07fa6a0914801badbeb5a83/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.080287 master-1 kubenswrapper[4771]: I1011 10:53:18.080204 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.081218 master-1 kubenswrapper[4771]: I1011 10:53:18.081130 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-config\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.081713 master-1 kubenswrapper[4771]: I1011 10:53:18.081654 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9b018c-eb14-4c27-adcb-ba613238c78b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.084308 master-1 kubenswrapper[4771]: I1011 10:53:18.084242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.084308 master-1 kubenswrapper[4771]: I1011 10:53:18.084266 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9b018c-eb14-4c27-adcb-ba613238c78b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.109735 master-1 kubenswrapper[4771]: I1011 10:53:18.109655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rkx\" (UniqueName: \"kubernetes.io/projected/8f9b018c-eb14-4c27-adcb-ba613238c78b-kube-api-access-t6rkx\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1"
Oct 11 10:53:18.128441 master-1 kubenswrapper[4771]: I1011 10:53:18.122023 4771 generic.go:334] "Generic (PLEG): container finished" podID="3c0fb436-6e71-4a5a-844e-e8c8e83eacdd" containerID="d123df26764927b959f6821afc87dfe457d42e15c12942ada27bc877e1b79e1d" exitCode=0
Oct 11 10:53:18.128441 master-1 kubenswrapper[4771]: I1011 10:53:18.122093 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd","Type":"ContainerDied","Data":"d123df26764927b959f6821afc87dfe457d42e15c12942ada27bc877e1b79e1d"}
Oct 11 10:53:18.220417 master-0 kubenswrapper[4790]: I1011 10:53:18.220358 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 11 10:53:18.224965 master-0 kubenswrapper[4790]: W1011 10:53:18.224891 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8612b0c_02a4_40ef_b5e7_71a8b8d6fe70.slice/crio-ffd70f6e5ebed6781320f8727f067080b40abc5ca784a9bc6515e0b0d19646ef WatchSource:0}: Error finding container ffd70f6e5ebed6781320f8727f067080b40abc5ca784a9bc6515e0b0d19646ef: Status 404 returned error can't find the container with id ffd70f6e5ebed6781320f8727f067080b40abc5ca784a9bc6515e0b0d19646ef
Oct 11 10:53:18.445173 master-1 kubenswrapper[4771]: W1011 10:53:18.445113 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c30d1f_16d5_4106_bfae_e6c2d2f64f13.slice/crio-b089e8e4b13a2f28ac29e4b38aaf2c96910827eb74888a884dedec830020d9fe WatchSource:0}: Error finding container b089e8e4b13a2f28ac29e4b38aaf2c96910827eb74888a884dedec830020d9fe: Status 404 returned error can't find the container with id b089e8e4b13a2f28ac29e4b38aaf2c96910827eb74888a884dedec830020d9fe
Oct 11 10:53:18.445983 master-1 kubenswrapper[4771]: I1011 10:53:18.445881 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 11 10:53:18.657617 master-0 kubenswrapper[4790]: I1011 10:53:18.657531 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-2" event={"ID":"5059e0b0-120f-4498-8076-e3e9239b5688","Type":"ContainerStarted","Data":"16951753cab39fff859f3907a86a88a98732302e37279ed54b73a1c204defd9f"}
Oct 11 10:53:18.657617 master-0 kubenswrapper[4790]: I1011 10:53:18.657627 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-2" event={"ID":"5059e0b0-120f-4498-8076-e3e9239b5688","Type":"ContainerStarted","Data":"b7980a4d132fab58838654556376c0979ed90f167353b3483e55d73bcabd1c9d"}
Oct 11 10:53:18.660908 master-0 kubenswrapper[4790]: I1011 10:53:18.660863 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-1" event={"ID":"ce689fd9-58ba-45f5-bec1-ff7b79e377ac","Type":"ContainerStarted","Data":"c48a5ea5685722211bada29d1dc5f400eb9f08b775cb3b65b8aa9fa7c05afb6a"}
Oct 11 10:53:18.662458 master-0 kubenswrapper[4790]: I1011 10:53:18.662399 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70","Type":"ContainerStarted","Data":"ffd70f6e5ebed6781320f8727f067080b40abc5ca784a9bc6515e0b0d19646ef"}
Oct 11 10:53:18.833457 master-2 kubenswrapper[4776]: I1011 10:53:18.833385 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 11 10:53:18.834612 master-2 kubenswrapper[4776]: I1011 10:53:18.834570 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.838023 master-2 kubenswrapper[4776]: I1011 10:53:18.837770 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Oct 11 10:53:18.838023 master-2 kubenswrapper[4776]: I1011 10:53:18.837933 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Oct 11 10:53:18.839477 master-2 kubenswrapper[4776]: I1011 10:53:18.839341 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Oct 11 10:53:18.854453 master-2 kubenswrapper[4776]: I1011 10:53:18.854409 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 11 10:53:18.959911 master-2 kubenswrapper[4776]: I1011 10:53:18.959871 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960187 master-2 kubenswrapper[4776]: I1011 10:53:18.960172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960360 master-2 kubenswrapper[4776]: I1011 10:53:18.960348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960488 master-2 kubenswrapper[4776]: I1011 10:53:18.960474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960595 master-2 kubenswrapper[4776]: I1011 10:53:18.960583 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960752 master-2 kubenswrapper[4776]: I1011 10:53:18.960733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960878 master-2 kubenswrapper[4776]: I1011 10:53:18.960864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:18.960994 master-2 kubenswrapper[4776]: I1011 10:53:18.960981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8lr\" (UniqueName: \"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:19.058210 master-2 kubenswrapper[4776]: I1011 10:53:19.058157 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 11 10:53:19.059715 master-2 kubenswrapper[4776]: I1011 10:53:19.059698 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062097 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062165 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062194 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062248 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062277 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8lr\" (UniqueName: \"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.064042 master-2 kubenswrapper[4776]: I1011 10:53:19.064022 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.064874 master-2 kubenswrapper[4776]: I1011 10:53:19.064857 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod 
\"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.066172 master-2 kubenswrapper[4776]: I1011 10:53:19.066127 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 11 10:53:19.066418 master-2 kubenswrapper[4776]: I1011 10:53:19.066392 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 11 10:53:19.066657 master-2 kubenswrapper[4776]: I1011 10:53:19.066635 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 11 10:53:19.067586 master-2 kubenswrapper[4776]: I1011 10:53:19.067561 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.068742 master-2 kubenswrapper[4776]: I1011 10:53:19.068500 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.072869 master-2 kubenswrapper[4776]: I1011 10:53:19.071475 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:19.072869 master-2 kubenswrapper[4776]: I1011 10:53:19.071531 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dae1067f286c2d27a912f7c78728bf47135dfe55e1e4bb4669097781af956b57/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.075994 master-2 kubenswrapper[4776]: I1011 10:53:19.075373 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 10:53:19.079714 master-2 kubenswrapper[4776]: I1011 10:53:19.079655 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.081475 master-2 kubenswrapper[4776]: I1011 10:53:19.081429 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.131960 master-1 kubenswrapper[4771]: I1011 10:53:19.131783 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c0fb436-6e71-4a5a-844e-e8c8e83eacdd","Type":"ContainerStarted","Data":"75ee61d883ca29a4a72af36026e1ef45716f46c2f0b47656f0af04d68b4ea69b"} Oct 11 10:53:19.134190 master-2 kubenswrapper[4776]: I1011 10:53:19.133536 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8lr\" (UniqueName: 
\"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.134920 master-1 kubenswrapper[4771]: I1011 10:53:19.134878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerStarted","Data":"b089e8e4b13a2f28ac29e4b38aaf2c96910827eb74888a884dedec830020d9fe"} Oct 11 10:53:19.163744 master-2 kubenswrapper[4776]: I1011 10:53:19.163589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163744 master-2 kubenswrapper[4776]: I1011 10:53:19.163709 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163766 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163838 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163880 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163940 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.164114 master-2 kubenswrapper[4776]: I1011 10:53:19.163998 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.265777 master-2 kubenswrapper[4776]: I1011 10:53:19.265717 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.265777 master-2 kubenswrapper[4776]: I1011 10:53:19.265766 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265835 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265866 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265926 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265951 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265980 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267828 master-2 kubenswrapper[4776]: I1011 10:53:19.267617 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267828 master-2 kubenswrapper[4776]: I1011 10:53:19.267765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267975 master-2 kubenswrapper[4776]: I1011 10:53:19.267938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.268031 master-2 kubenswrapper[4776]: 
I1011 10:53:19.267977 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:19.268031 master-2 kubenswrapper[4776]: I1011 10:53:19.268002 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a6a45de1af4698cfd258e4b0cc4c9b6b0e6a932c19773f8cb77ec5494d801c93/globalmount\"" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270136 master-2 kubenswrapper[4776]: I1011 10:53:19.270091 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270263 master-2 kubenswrapper[4776]: I1011 10:53:19.270218 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270592 master-2 kubenswrapper[4776]: I1011 10:53:19.270555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.285222 master-2 kubenswrapper[4776]: I1011 10:53:19.285128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.503954 master-2 kubenswrapper[4776]: I1011 10:53:19.503901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"f92bf399-88cc-4b7b-8048-81fda1a2e172","Type":"ContainerStarted","Data":"eb58a07c315c6bd8633e9756d3be3d693a76dd4977c59794237d670da2037df0"} Oct 11 10:53:19.504841 master-2 kubenswrapper[4776]: I1011 10:53:19.504814 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-1" Oct 11 10:53:19.577613 master-1 kubenswrapper[4771]: I1011 10:53:19.575829 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ca866366-3239-46ad-903c-e44f9f2ec0e3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f06a64ac-0910-4092-b155-298c9316de58\") pod \"ovsdbserver-nb-1\" (UID: \"8f9b018c-eb14-4c27-adcb-ba613238c78b\") " pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:19.671377 master-1 kubenswrapper[4771]: I1011 10:53:19.671237 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:19.677985 master-0 kubenswrapper[4790]: I1011 10:53:19.677908 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70","Type":"ContainerStarted","Data":"c825292e80a7f9119d113b3bb262f7f111d6f5ad5e4978b8a04781a9d2fb1028"} Oct 11 10:53:20.220193 master-1 kubenswrapper[4771]: I1011 10:53:20.220102 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.171684584 podStartE2EDuration="28.220082001s" podCreationTimestamp="2025-10-11 10:52:52 +0000 UTC" firstStartedPulling="2025-10-11 10:53:07.910333706 +0000 UTC m=+1619.884560147" lastFinishedPulling="2025-10-11 10:53:12.958731123 +0000 UTC m=+1624.932957564" observedRunningTime="2025-10-11 10:53:19.163291351 +0000 UTC m=+1631.137517792" watchObservedRunningTime="2025-10-11 10:53:20.220082001 +0000 UTC m=+1632.194308442" Oct 11 10:53:20.222779 master-1 kubenswrapper[4771]: I1011 10:53:20.222746 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 11 10:53:20.238404 master-1 kubenswrapper[4771]: W1011 10:53:20.229050 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9b018c_eb14_4c27_adcb_ba613238c78b.slice/crio-3d39ea0144914b5f9427ce6ad71ccee33c30187fbaa0472d6fa81f99b090257f WatchSource:0}: Error finding container 3d39ea0144914b5f9427ce6ad71ccee33c30187fbaa0472d6fa81f99b090257f: Status 404 returned error can't find the container with id 3d39ea0144914b5f9427ce6ad71ccee33c30187fbaa0472d6fa81f99b090257f Oct 11 10:53:20.512040 master-2 kubenswrapper[4776]: I1011 10:53:20.511995 4776 generic.go:334] "Generic (PLEG): container finished" podID="8d885df4-18cc-401a-8226-cd3d17b3f770" containerID="fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288" exitCode=0 Oct 11 
10:53:20.512757 master-2 kubenswrapper[4776]: I1011 10:53:20.512729 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerDied","Data":"fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288"} Oct 11 10:53:20.556801 master-2 kubenswrapper[4776]: I1011 10:53:20.556715 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-1" podStartSLOduration=3.690140503 podStartE2EDuration="5.556672817s" podCreationTimestamp="2025-10-11 10:53:15 +0000 UTC" firstStartedPulling="2025-10-11 10:53:16.907247464 +0000 UTC m=+1631.691674173" lastFinishedPulling="2025-10-11 10:53:18.773779778 +0000 UTC m=+1633.558206487" observedRunningTime="2025-10-11 10:53:19.53406699 +0000 UTC m=+1634.318493699" watchObservedRunningTime="2025-10-11 10:53:20.556672817 +0000 UTC m=+1635.341099526" Oct 11 10:53:20.590785 master-2 kubenswrapper[4776]: I1011 10:53:20.576192 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:20.689485 master-0 kubenswrapper[4790]: I1011 10:53:20.689338 4790 generic.go:334] "Generic (PLEG): container finished" podID="1ebbbaaf-f668-4c40-b437-2e730aef3912" containerID="dc657647e19f7fc9de9910df8248a796ef7cb75c8ad8bbbaa74f15d4511985e3" exitCode=0 Oct 11 10:53:20.689485 master-0 kubenswrapper[4790]: I1011 10:53:20.689464 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-1" event={"ID":"1ebbbaaf-f668-4c40-b437-2e730aef3912","Type":"ContainerDied","Data":"dc657647e19f7fc9de9910df8248a796ef7cb75c8ad8bbbaa74f15d4511985e3"} Oct 11 10:53:20.694694 master-0 kubenswrapper[4790]: I1011 10:53:20.694465 4790 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8612b0c-02a4-40ef-b5e7-71a8b8d6fe70","Type":"ContainerStarted","Data":"26985baa1c4b8f8f21210ddc96b676ad82501b72c746d0d846d399ec0fb50b76"} Oct 11 10:53:20.770509 master-0 kubenswrapper[4790]: I1011 10:53:20.770404 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.530043721 podStartE2EDuration="20.770378617s" podCreationTimestamp="2025-10-11 10:53:00 +0000 UTC" firstStartedPulling="2025-10-11 10:53:18.228899331 +0000 UTC m=+874.783359623" lastFinishedPulling="2025-10-11 10:53:19.469234217 +0000 UTC m=+876.023694519" observedRunningTime="2025-10-11 10:53:20.762669913 +0000 UTC m=+877.317130235" watchObservedRunningTime="2025-10-11 10:53:20.770378617 +0000 UTC m=+877.324838929" Oct 11 10:53:20.777118 master-2 kubenswrapper[4776]: I1011 10:53:20.776983 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:21.081157 master-1 kubenswrapper[4771]: I1011 10:53:21.081088 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 11 10:53:21.082530 master-1 kubenswrapper[4771]: I1011 10:53:21.082506 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.086554 master-1 kubenswrapper[4771]: I1011 10:53:21.086488 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 11 10:53:21.086800 master-1 kubenswrapper[4771]: I1011 10:53:21.086697 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 11 10:53:21.086941 master-1 kubenswrapper[4771]: I1011 10:53:21.086916 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 11 10:53:21.101783 master-1 kubenswrapper[4771]: I1011 10:53:21.101725 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 11 10:53:21.141572 master-1 kubenswrapper[4771]: I1011 10:53:21.141489 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141572 master-1 kubenswrapper[4771]: I1011 10:53:21.141558 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 kubenswrapper[4771]: I1011 10:53:21.141601 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 
kubenswrapper[4771]: I1011 10:53:21.141634 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 kubenswrapper[4771]: I1011 10:53:21.141670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2cc50c9a-367c-43a5-9e92-aa42989ee215\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a44ae47b-cc90-4016-a02d-1f3945269ee7\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 kubenswrapper[4771]: I1011 10:53:21.141692 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-config\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 kubenswrapper[4771]: I1011 10:53:21.141721 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24vl\" (UniqueName: \"kubernetes.io/projected/c6e032cb-3993-4b10-afe4-7f77821c8583-kube-api-access-v24vl\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:21.141798 master-1 kubenswrapper[4771]: I1011 10:53:21.141749 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1" Oct 11 
Oct 11 10:53:21.149557 master-1 kubenswrapper[4771]: I1011 10:53:21.149470 4771 generic.go:334] "Generic (PLEG): container finished" podID="66c0cd85-28ea-42de-8432-8803026d3124" containerID="a314c1f4c176eb2e0cd74daae6e743d6300a11be6408bfc38cc9f82fcaeaf8f6" exitCode=0
Oct 11 10:53:21.149615 master-1 kubenswrapper[4771]: I1011 10:53:21.149561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"66c0cd85-28ea-42de-8432-8803026d3124","Type":"ContainerDied","Data":"a314c1f4c176eb2e0cd74daae6e743d6300a11be6408bfc38cc9f82fcaeaf8f6"}
Oct 11 10:53:21.151026 master-1 kubenswrapper[4771]: I1011 10:53:21.150621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8f9b018c-eb14-4c27-adcb-ba613238c78b","Type":"ContainerStarted","Data":"3d39ea0144914b5f9427ce6ad71ccee33c30187fbaa0472d6fa81f99b090257f"}
Oct 11 10:53:21.243594 master-1 kubenswrapper[4771]: I1011 10:53:21.243527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243616 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2cc50c9a-367c-43a5-9e92-aa42989ee215\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a44ae47b-cc90-4016-a02d-1f3945269ee7\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243651 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-config\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24vl\" (UniqueName: \"kubernetes.io/projected/c6e032cb-3993-4b10-afe4-7f77821c8583-kube-api-access-v24vl\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243759 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243780 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244160 master-1 kubenswrapper[4771]: I1011 10:53:21.243818 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.244482 master-1 kubenswrapper[4771]: I1011 10:53:21.244379 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.245836 master-1 kubenswrapper[4771]: I1011 10:53:21.245779 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-config\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.246272 master-1 kubenswrapper[4771]: I1011 10:53:21.246231 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6e032cb-3993-4b10-afe4-7f77821c8583-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.247000 master-1 kubenswrapper[4771]: I1011 10:53:21.246956 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:21.247220 master-1 kubenswrapper[4771]: I1011 10:53:21.247074 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2cc50c9a-367c-43a5-9e92-aa42989ee215\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a44ae47b-cc90-4016-a02d-1f3945269ee7\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/18d97a7788bd76d183e89e62f30289e4ce86d7e202c647c24a185e51515d686f/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.256449 master-1 kubenswrapper[4771]: I1011 10:53:21.249642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.256449 master-1 kubenswrapper[4771]: I1011 10:53:21.251238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.256449 master-1 kubenswrapper[4771]: I1011 10:53:21.251332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e032cb-3993-4b10-afe4-7f77821c8583-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.277330 master-1 kubenswrapper[4771]: I1011 10:53:21.277299 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24vl\" (UniqueName: \"kubernetes.io/projected/c6e032cb-3993-4b10-afe4-7f77821c8583-kube-api-access-v24vl\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:21.303244 master-2 kubenswrapper[4776]: I1011 10:53:21.303193 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 11 10:53:21.307039 master-2 kubenswrapper[4776]: W1011 10:53:21.306995 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod773e37f4_d372_40a8_936f_5b148ca7dabf.slice/crio-71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587 WatchSource:0}: Error finding container 71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587: Status 404 returned error can't find the container with id 71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587
Oct 11 10:53:21.520027 master-2 kubenswrapper[4776]: I1011 10:53:21.519890 4776 generic.go:334] "Generic (PLEG): container finished" podID="72994ad3-2bca-4875-97f7-f98c00f64626" containerID="fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e" exitCode=0
Oct 11 10:53:21.520027 master-2 kubenswrapper[4776]: I1011 10:53:21.519971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerDied","Data":"fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e"}
Oct 11 10:53:21.523597 master-2 kubenswrapper[4776]: I1011 10:53:21.523555 4776 generic.go:334] "Generic (PLEG): container finished" podID="a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9" containerID="8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf" exitCode=0
Oct 11 10:53:21.523703 master-2 kubenswrapper[4776]: I1011 10:53:21.523624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerDied","Data":"8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf"}
Oct 11 10:53:21.525861 master-2 kubenswrapper[4776]: I1011 10:53:21.525827 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587"}
Oct 11 10:53:21.704743 master-0 kubenswrapper[4790]: I1011 10:53:21.704653 4790 generic.go:334] "Generic (PLEG): container finished" podID="ce689fd9-58ba-45f5-bec1-ff7b79e377ac" containerID="c48a5ea5685722211bada29d1dc5f400eb9f08b775cb3b65b8aa9fa7c05afb6a" exitCode=0
Oct 11 10:53:21.705565 master-0 kubenswrapper[4790]: I1011 10:53:21.704752 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-1" event={"ID":"ce689fd9-58ba-45f5-bec1-ff7b79e377ac","Type":"ContainerDied","Data":"c48a5ea5685722211bada29d1dc5f400eb9f08b775cb3b65b8aa9fa7c05afb6a"}
Oct 11 10:53:21.884582 master-2 kubenswrapper[4776]: I1011 10:53:21.884540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0"
Oct 11 10:53:22.091807 master-0 kubenswrapper[4790]: I1011 10:53:22.091692 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 11 10:53:22.094507 master-0 kubenswrapper[4790]: I1011 10:53:22.094474 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.097786 master-0 kubenswrapper[4790]: I1011 10:53:22.097700 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Oct 11 10:53:22.097920 master-0 kubenswrapper[4790]: I1011 10:53:22.097699 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Oct 11 10:53:22.105445 master-0 kubenswrapper[4790]: I1011 10:53:22.105298 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Oct 11 10:53:22.110780 master-0 kubenswrapper[4790]: I1011 10:53:22.110508 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 11 10:53:22.170279 master-2 kubenswrapper[4776]: I1011 10:53:22.170228 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 11 10:53:22.261667 master-0 kubenswrapper[4790]: I1011 10:53:22.260921 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.261667 master-0 kubenswrapper[4790]: I1011 10:53:22.261162 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.261667 master-0 kubenswrapper[4790]: I1011 10:53:22.261383 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-888b07c6-0727-400b-ade1-5d81d76c7ca4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6484ec26-9b3d-4dbd-a479-93aa7b5ec920\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.261667 master-0 kubenswrapper[4790]: I1011 10:53:22.261536 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.262092 master-0 kubenswrapper[4790]: I1011 10:53:22.261765 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.262092 master-0 kubenswrapper[4790]: I1011 10:53:22.261868 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.262092 master-0 kubenswrapper[4790]: I1011 10:53:22.262048 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbj45\" (UniqueName: \"kubernetes.io/projected/403c433e-e6f3-4732-9d44-95e68ac5d36d-kube-api-access-rbj45\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.262228 master-0 kubenswrapper[4790]: I1011 10:53:22.262105 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-config\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364238 master-0 kubenswrapper[4790]: I1011 10:53:22.364077 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364238 master-0 kubenswrapper[4790]: I1011 10:53:22.364161 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364238 master-0 kubenswrapper[4790]: I1011 10:53:22.364198 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-888b07c6-0727-400b-ade1-5d81d76c7ca4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6484ec26-9b3d-4dbd-a479-93aa7b5ec920\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364238 master-0 kubenswrapper[4790]: I1011 10:53:22.364237 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364579 master-0 kubenswrapper[4790]: I1011 10:53:22.364278 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364579 master-0 kubenswrapper[4790]: I1011 10:53:22.364303 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364579 master-0 kubenswrapper[4790]: I1011 10:53:22.364355 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbj45\" (UniqueName: \"kubernetes.io/projected/403c433e-e6f3-4732-9d44-95e68ac5d36d-kube-api-access-rbj45\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364579 master-0 kubenswrapper[4790]: I1011 10:53:22.364393 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-config\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.364874 master-0 kubenswrapper[4790]: I1011 10:53:22.364822 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.365924 master-0 kubenswrapper[4790]: I1011 10:53:22.365899 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.366437 master-0 kubenswrapper[4790]: I1011 10:53:22.366389 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403c433e-e6f3-4732-9d44-95e68ac5d36d-config\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.369492 master-0 kubenswrapper[4790]: I1011 10:53:22.369466 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.369566 master-0 kubenswrapper[4790]: I1011 10:53:22.369540 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.369755 master-0 kubenswrapper[4790]: I1011 10:53:22.369696 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/403c433e-e6f3-4732-9d44-95e68ac5d36d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.369812 master-0 kubenswrapper[4790]: I1011 10:53:22.369790 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:53:22.369875 master-0 kubenswrapper[4790]: I1011 10:53:22.369846 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-888b07c6-0727-400b-ade1-5d81d76c7ca4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6484ec26-9b3d-4dbd-a479-93aa7b5ec920\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f6c73a3ac7e9ba275a8482e3555a12863487c6a78cc7244a68d6b0801064717f/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.398836 master-0 kubenswrapper[4790]: I1011 10:53:22.398779 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbj45\" (UniqueName: \"kubernetes.io/projected/403c433e-e6f3-4732-9d44-95e68ac5d36d-kube-api-access-rbj45\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:22.535288 master-2 kubenswrapper[4776]: I1011 10:53:22.534844 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"77c5dc7e3fe0b716852a2a401ba9ab5958e35df34584b93d6320493441342b90"}
Oct 11 10:53:22.536373 master-2 kubenswrapper[4776]: I1011 10:53:22.536328 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"062c9e2240151337e31541822816c848fa2cd72677bad1cdf81d23f0f7a3a352"}
Oct 11 10:53:22.538262 master-2 kubenswrapper[4776]: I1011 10:53:22.538208 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"196e365f5d38da11085297740b0c9f63bd9bc4a7f8864c167ea8e54ddb465ba2"}
Oct 11 10:53:22.566123 master-2 kubenswrapper[4776]: I1011 10:53:22.566025 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.32865161 podStartE2EDuration="29.566004858s" podCreationTimestamp="2025-10-11 10:52:53 +0000 UTC" firstStartedPulling="2025-10-11 10:53:14.279099422 +0000 UTC m=+1629.063526131" lastFinishedPulling="2025-10-11 10:53:16.51645268 +0000 UTC m=+1631.300879379" observedRunningTime="2025-10-11 10:53:22.561308822 +0000 UTC m=+1637.345735531" watchObservedRunningTime="2025-10-11 10:53:22.566004858 +0000 UTC m=+1637.350431567"
Oct 11 10:53:22.595471 master-2 kubenswrapper[4776]: I1011 10:53:22.594005 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-2" podStartSLOduration=26.339087765 podStartE2EDuration="30.593986106s" podCreationTimestamp="2025-10-11 10:52:52 +0000 UTC" firstStartedPulling="2025-10-11 10:53:12.26789111 +0000 UTC m=+1627.052317819" lastFinishedPulling="2025-10-11 10:53:16.522789451 +0000 UTC m=+1631.307216160" observedRunningTime="2025-10-11 10:53:22.59116351 +0000 UTC m=+1637.375590219" watchObservedRunningTime="2025-10-11 10:53:22.593986106 +0000 UTC m=+1637.378412815"
Oct 11 10:53:22.639824 master-0 kubenswrapper[4790]: I1011 10:53:22.639620 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:22.700436 master-2 kubenswrapper[4776]: I1011 10:53:22.700378 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 11 10:53:22.701013 master-1 kubenswrapper[4771]: I1011 10:53:22.700949 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2cc50c9a-367c-43a5-9e92-aa42989ee215\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a44ae47b-cc90-4016-a02d-1f3945269ee7\") pod \"ovsdbserver-sb-1\" (UID: \"c6e032cb-3993-4b10-afe4-7f77821c8583\") " pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:22.725478 master-0 kubenswrapper[4790]: I1011 10:53:22.725411 4790 generic.go:334] "Generic (PLEG): container finished" podID="5059e0b0-120f-4498-8076-e3e9239b5688" containerID="16951753cab39fff859f3907a86a88a98732302e37279ed54b73a1c204defd9f" exitCode=0
Oct 11 10:53:22.726208 master-0 kubenswrapper[4790]: I1011 10:53:22.725496 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-2" event={"ID":"5059e0b0-120f-4498-8076-e3e9239b5688","Type":"ContainerDied","Data":"16951753cab39fff859f3907a86a88a98732302e37279ed54b73a1c204defd9f"}
Oct 11 10:53:22.737775 master-0 kubenswrapper[4790]: I1011 10:53:22.734786 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-1" event={"ID":"ce689fd9-58ba-45f5-bec1-ff7b79e377ac","Type":"ContainerStarted","Data":"218379a1cbe6c195eee20a66e0037e4a1d22bbf093737b2701289a4711a702af"}
Oct 11 10:53:22.833608 master-1 kubenswrapper[4771]: I1011 10:53:22.830668 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:22.914804 master-1 kubenswrapper[4771]: I1011 10:53:22.914746 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Oct 11 10:53:22.919215 master-1 kubenswrapper[4771]: I1011 10:53:22.919133 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"]
Oct 11 10:53:22.919543 master-1 kubenswrapper[4771]: I1011 10:53:22.919495 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="dnsmasq-dns" containerID="cri-o://b3e3b46fc901080e41385771d41f43fbd31766c94a2a5ea0b8a9cb3a8c03ad18" gracePeriod=10
Oct 11 10:53:23.169871 master-1 kubenswrapper[4771]: I1011 10:53:23.169803 4771 generic.go:334] "Generic (PLEG): container finished" podID="37015f12-0983-4016-9f76-6d0e3f641f28" containerID="b3e3b46fc901080e41385771d41f43fbd31766c94a2a5ea0b8a9cb3a8c03ad18" exitCode=0
Oct 11 10:53:23.169871 master-1 kubenswrapper[4771]: I1011 10:53:23.169866 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" event={"ID":"37015f12-0983-4016-9f76-6d0e3f641f28","Type":"ContainerDied","Data":"b3e3b46fc901080e41385771d41f43fbd31766c94a2a5ea0b8a9cb3a8c03ad18"}
Oct 11 10:53:23.558647 master-2 kubenswrapper[4776]: I1011 10:53:23.558591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"43156a111bf2bd6fe0db4b57580b70f4a1d5677b297e3635ff0ea32d04252bf0"}
Oct 11 10:53:23.561084 master-2 kubenswrapper[4776]: I1011 10:53:23.561045 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"8f461b244015850ac05e373cc1ad030b00b88df90fa2baec16784ede4b40d659"}
Oct 11 10:53:23.562358 master-2 kubenswrapper[4776]: I1011 10:53:23.562320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"06686e3fb84c12253dcb579f99feb711eb43f473276d120edabaa14c17179836"}
Oct 11 10:53:23.594998 master-2 kubenswrapper[4776]: I1011 10:53:23.594919 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=22.732869398 podStartE2EDuration="23.594900486s" podCreationTimestamp="2025-10-11 10:53:00 +0000 UTC" firstStartedPulling="2025-10-11 10:53:21.310469773 +0000 UTC m=+1636.094896482" lastFinishedPulling="2025-10-11 10:53:22.172500861 +0000 UTC m=+1636.956927570" observedRunningTime="2025-10-11 10:53:23.593565069 +0000 UTC m=+1638.377991778" watchObservedRunningTime="2025-10-11 10:53:23.594900486 +0000 UTC m=+1638.379327195"
Oct 11 10:53:23.640105 master-0 kubenswrapper[4790]: I1011 10:53:23.640042 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:23.682737 master-0 kubenswrapper[4790]: I1011 10:53:23.682670 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:23.720121 master-0 kubenswrapper[4790]: I1011 10:53:23.720021 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-1" podStartSLOduration=27.223252139 podStartE2EDuration="31.719995306s" podCreationTimestamp="2025-10-11 10:52:52 +0000 UTC" firstStartedPulling="2025-10-11 10:53:13.017481127 +0000 UTC m=+869.571941419" lastFinishedPulling="2025-10-11 10:53:17.514224284 +0000 UTC m=+874.068684586" observedRunningTime="2025-10-11 10:53:22.82061776 +0000 UTC m=+879.375078072" watchObservedRunningTime="2025-10-11 10:53:23.719995306 +0000 UTC m=+880.274455598"
Oct 11 10:53:23.729505 master-0 kubenswrapper[4790]: I1011 10:53:23.729435 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-888b07c6-0727-400b-ade1-5d81d76c7ca4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6484ec26-9b3d-4dbd-a479-93aa7b5ec920\") pod \"ovsdbserver-sb-2\" (UID: \"403c433e-e6f3-4732-9d44-95e68ac5d36d\") " pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:23.742201 master-0 kubenswrapper[4790]: I1011 10:53:23.742124 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-1" event={"ID":"1ebbbaaf-f668-4c40-b437-2e730aef3912","Type":"ContainerStarted","Data":"b3ffe691ff1ca6cc8bb2407f87f44ff24c582eea93bce9c33941b8085aadde4d"}
Oct 11 10:53:23.744563 master-0 kubenswrapper[4790]: I1011 10:53:23.744521 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-2" event={"ID":"5059e0b0-120f-4498-8076-e3e9239b5688","Type":"ContainerStarted","Data":"c7f6c6a8ec8f298fea18ce5bdc6aa316147d7be0432d97fcbb7db7b50ab1c846"}
Oct 11 10:53:23.777368 master-2 kubenswrapper[4776]: I1011 10:53:23.777225 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Oct 11 10:53:23.782042 master-0 kubenswrapper[4790]: I1011 10:53:23.781945 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-2" podStartSLOduration=30.781923112 podStartE2EDuration="30.781923112s" podCreationTimestamp="2025-10-11 10:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:23.779378072 +0000 UTC m=+880.333838354" watchObservedRunningTime="2025-10-11 10:53:23.781923112 +0000 UTC m=+880.336383404"
Oct 11 10:53:23.836639 master-2 kubenswrapper[4776]: I1011 10:53:23.836288 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:23.836639 master-2 kubenswrapper[4776]: I1011 10:53:23.836339 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 11 10:53:23.921414 master-0 kubenswrapper[4790]: I1011 10:53:23.921257 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Oct 11 10:53:24.480745 master-0 kubenswrapper[4790]: I1011 10:53:24.478087 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 11 10:53:24.493193 master-0 kubenswrapper[4790]: W1011 10:53:24.493117 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403c433e_e6f3_4732_9d44_95e68ac5d36d.slice/crio-ace6ed88778d45d44bc00c0181b4956b0d16db77e488399fca98accbdae65b5f WatchSource:0}: Error finding container ace6ed88778d45d44bc00c0181b4956b0d16db77e488399fca98accbdae65b5f: Status 404 returned error can't find the container with id ace6ed88778d45d44bc00c0181b4956b0d16db77e488399fca98accbdae65b5f
Oct 11 10:53:24.571294 master-2 kubenswrapper[4776]: I1011 10:53:24.571238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"8bafe9e73c48a6f2ca91ecc18aded5fe6f748cd8cc1be007f768ae6e593a7052"}
Oct 11 10:53:24.571294 master-2 kubenswrapper[4776]: I1011 10:53:24.571299 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"4663b1bc12048409eb6f262f8d66bee58e0b78548547d2517a69cf91f411b327"}
Oct 11 10:53:24.598622 master-2 kubenswrapper[4776]: I1011 10:53:24.598563 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.645665920999999 podStartE2EDuration="15.598546789s" podCreationTimestamp="2025-10-11 10:53:09 +0000 UTC" firstStartedPulling="2025-10-11 10:53:23.010810836 +0000 UTC m=+1637.795237545" lastFinishedPulling="2025-10-11 10:53:23.963691704 +0000 UTC m=+1638.748118413" observedRunningTime="2025-10-11 10:53:24.59638537 +0000 UTC m=+1639.380812079" watchObservedRunningTime="2025-10-11 10:53:24.598546789 +0000 UTC m=+1639.382973498"
Oct 11 10:53:24.755635 master-0 kubenswrapper[4790]: I1011 10:53:24.755499 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"403c433e-e6f3-4732-9d44-95e68ac5d36d","Type":"ContainerStarted","Data":"ace6ed88778d45d44bc00c0181b4956b0d16db77e488399fca98accbdae65b5f"}
Oct 11 10:53:24.787379 master-1 kubenswrapper[4771]: I1011 10:53:24.787299 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"
Oct 11 10:53:24.791727 master-0 kubenswrapper[4790]: I1011 10:53:24.791659 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Oct 11 10:53:24.976275 master-1 kubenswrapper[4771]: I1011 10:53:24.976216 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc\") pod \"37015f12-0983-4016-9f76-6d0e3f641f28\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") "
Oct 11 10:53:24.976409 master-1 kubenswrapper[4771]: I1011 10:53:24.976344 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r45c\" (UniqueName: \"kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c\") pod \"37015f12-0983-4016-9f76-6d0e3f641f28\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") "
Oct 11 10:53:24.976639 master-1 kubenswrapper[4771]: I1011 10:53:24.976600 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config\") pod \"37015f12-0983-4016-9f76-6d0e3f641f28\" (UID: \"37015f12-0983-4016-9f76-6d0e3f641f28\") "
Oct 11 10:53:24.980629 master-1 kubenswrapper[4771]: I1011 10:53:24.980527 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c" (OuterVolumeSpecName: "kube-api-access-8r45c") pod "37015f12-0983-4016-9f76-6d0e3f641f28" (UID: "37015f12-0983-4016-9f76-6d0e3f641f28"). InnerVolumeSpecName "kube-api-access-8r45c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:25.020713 master-1 kubenswrapper[4771]: I1011 10:53:25.020649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config" (OuterVolumeSpecName: "config") pod "37015f12-0983-4016-9f76-6d0e3f641f28" (UID: "37015f12-0983-4016-9f76-6d0e3f641f28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:25.032992 master-1 kubenswrapper[4771]: I1011 10:53:25.032945 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37015f12-0983-4016-9f76-6d0e3f641f28" (UID: "37015f12-0983-4016-9f76-6d0e3f641f28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:25.083963 master-1 kubenswrapper[4771]: I1011 10:53:25.083900 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r45c\" (UniqueName: \"kubernetes.io/projected/37015f12-0983-4016-9f76-6d0e3f641f28-kube-api-access-8r45c\") on node \"master-1\" DevicePath \"\""
Oct 11 10:53:25.083963 master-1 kubenswrapper[4771]: I1011 10:53:25.083966 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:53:25.084147 master-1 kubenswrapper[4771]: I1011 10:53:25.083983 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37015f12-0983-4016-9f76-6d0e3f641f28-dns-svc\") on node \"master-1\" DevicePath \"\""
Oct 11 10:53:25.144658 master-1 kubenswrapper[4771]: I1011 10:53:25.144585 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Oct 11 10:53:25.153500 master-1 kubenswrapper[4771]: W1011 10:53:25.153435 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e032cb_3993_4b10_afe4_7f77821c8583.slice/crio-f72206a2629114d93d81b6f349520e61cc1f1bea049e521be9a6c27daec9927c WatchSource:0}: Error finding container f72206a2629114d93d81b6f349520e61cc1f1bea049e521be9a6c27daec9927c: Status 404 returned error can't find the container with id f72206a2629114d93d81b6f349520e61cc1f1bea049e521be9a6c27daec9927c
Oct 11 10:53:25.170160 master-0 kubenswrapper[4790]: I1011 10:53:25.170081 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"]
Oct 11 10:53:25.171114 master-2 kubenswrapper[4776]: I1011 10:53:25.171057 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 11 10:53:25.171542 master-0 kubenswrapper[4790]: I1011 10:53:25.171337 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b589666c-j6ppm"
Oct 11 10:53:25.176152 master-0 kubenswrapper[4790]: I1011 10:53:25.175143 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 10:53:25.176152 master-0 kubenswrapper[4790]: I1011 10:53:25.175388 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 10:53:25.176152 master-0 kubenswrapper[4790]: I1011 10:53:25.175538 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 10:53:25.193998 master-0 kubenswrapper[4790]: I1011 10:53:25.193916 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"]
Oct 11 10:53:25.201567 master-1 kubenswrapper[4771]: I1011 10:53:25.201477 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" event={"ID":"37015f12-0983-4016-9f76-6d0e3f641f28","Type":"ContainerDied","Data":"9218883a276b672e1ae5ba8960bf79ca7ce273871d0ff647d43beb2d404896a3"}
Oct 11 10:53:25.201747 master-1 kubenswrapper[4771]: I1011 10:53:25.201564 4771 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5b4bcc4d85-hc9b5" Oct 11 10:53:25.201747 master-1 kubenswrapper[4771]: I1011 10:53:25.201597 4771 scope.go:117] "RemoveContainer" containerID="b3e3b46fc901080e41385771d41f43fbd31766c94a2a5ea0b8a9cb3a8c03ad18" Oct 11 10:53:25.204119 master-1 kubenswrapper[4771]: I1011 10:53:25.204074 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"66c0cd85-28ea-42de-8432-8803026d3124","Type":"ContainerStarted","Data":"2240330aade2ce1379e6a3490bcdde650fa18d121d70f13d7932e07f7c1be46f"} Oct 11 10:53:25.205669 master-1 kubenswrapper[4771]: I1011 10:53:25.205619 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c6e032cb-3993-4b10-afe4-7f77821c8583","Type":"ContainerStarted","Data":"f72206a2629114d93d81b6f349520e61cc1f1bea049e521be9a6c27daec9927c"} Oct 11 10:53:25.207852 master-1 kubenswrapper[4771]: I1011 10:53:25.207788 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8f9b018c-eb14-4c27-adcb-ba613238c78b","Type":"ContainerStarted","Data":"c80aa209259f95b1296293e51f0616ef8ca222bf514bed45d9e3a7971dacd61a"} Oct 11 10:53:25.207852 master-1 kubenswrapper[4771]: I1011 10:53:25.207861 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8f9b018c-eb14-4c27-adcb-ba613238c78b","Type":"ContainerStarted","Data":"38bed839fd575b2e9be016ec166012e515b5818caf5cbb15907f50b015b3bc84"} Oct 11 10:53:25.224084 master-1 kubenswrapper[4771]: I1011 10:53:25.222946 4771 scope.go:117] "RemoveContainer" containerID="6500f337e1810213b3c48514bdf7915497fff03dfebcbb66402d535cebd46613" Oct 11 10:53:25.236255 master-1 kubenswrapper[4771]: I1011 10:53:25.236189 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-1" podStartSLOduration=32.236170963 podStartE2EDuration="32.236170963s" 
podCreationTimestamp="2025-10-11 10:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:25.232558128 +0000 UTC m=+1637.206784609" watchObservedRunningTime="2025-10-11 10:53:25.236170963 +0000 UTC m=+1637.210397414" Oct 11 10:53:25.263419 master-1 kubenswrapper[4771]: I1011 10:53:25.263182 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=20.938178932 podStartE2EDuration="25.263150502s" podCreationTimestamp="2025-10-11 10:53:00 +0000 UTC" firstStartedPulling="2025-10-11 10:53:20.231672136 +0000 UTC m=+1632.205898577" lastFinishedPulling="2025-10-11 10:53:24.556643696 +0000 UTC m=+1636.530870147" observedRunningTime="2025-10-11 10:53:25.256672925 +0000 UTC m=+1637.230899366" watchObservedRunningTime="2025-10-11 10:53:25.263150502 +0000 UTC m=+1637.237376963" Oct 11 10:53:25.281029 master-1 kubenswrapper[4771]: I1011 10:53:25.279930 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"] Oct 11 10:53:25.285824 master-1 kubenswrapper[4771]: I1011 10:53:25.285766 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b4bcc4d85-hc9b5"] Oct 11 10:53:25.313875 master-0 kubenswrapper[4790]: I1011 10:53:25.313121 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.313875 master-0 kubenswrapper[4790]: I1011 10:53:25.313215 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687pg\" (UniqueName: 
\"kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.313875 master-0 kubenswrapper[4790]: I1011 10:53:25.313262 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.313875 master-0 kubenswrapper[4790]: I1011 10:53:25.313292 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.337881 master-1 kubenswrapper[4771]: I1011 10:53:25.337815 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:25.338004 master-1 kubenswrapper[4771]: I1011 10:53:25.337909 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-1" Oct 11 10:53:25.414864 master-0 kubenswrapper[4790]: I1011 10:53:25.414794 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.415113 master-0 kubenswrapper[4790]: I1011 10:53:25.414900 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.415113 master-0 kubenswrapper[4790]: I1011 10:53:25.414951 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687pg\" (UniqueName: \"kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.415113 master-0 kubenswrapper[4790]: I1011 10:53:25.414997 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.415772 master-0 kubenswrapper[4790]: I1011 10:53:25.415746 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.415836 master-0 kubenswrapper[4790]: I1011 10:53:25.415753 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.416446 master-0 kubenswrapper[4790]: I1011 10:53:25.416388 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.438754 master-0 kubenswrapper[4790]: I1011 10:53:25.438674 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687pg\" (UniqueName: \"kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg\") pod \"dnsmasq-dns-77b589666c-j6ppm\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.536124 master-0 kubenswrapper[4790]: I1011 10:53:25.536002 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:25.592966 master-2 kubenswrapper[4776]: I1011 10:53:25.592914 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"1d059243776dcb16eb25efaa222ac0ecf3d460ee9115a462cc85e773e5ee73c3"} Oct 11 10:53:25.593783 master-2 kubenswrapper[4776]: I1011 10:53:25.593664 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:53:25.628040 master-2 kubenswrapper[4776]: I1011 10:53:25.627872 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.071524966 podStartE2EDuration="28.627852767s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.501048986 +0000 UTC m=+1617.285475695" lastFinishedPulling="2025-10-11 10:53:23.057376777 +0000 UTC m=+1637.841803496" observedRunningTime="2025-10-11 10:53:25.619958564 +0000 UTC m=+1640.404385273" watchObservedRunningTime="2025-10-11 10:53:25.627852767 +0000 UTC m=+1640.412279476" Oct 11 10:53:25.672760 
master-1 kubenswrapper[4771]: I1011 10:53:25.672662 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:25.711904 master-0 kubenswrapper[4790]: I1011 10:53:25.711846 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:25.713369 master-0 kubenswrapper[4790]: I1011 10:53:25.712767 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-2" Oct 11 10:53:25.771734 master-0 kubenswrapper[4790]: I1011 10:53:25.769775 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"403c433e-e6f3-4732-9d44-95e68ac5d36d","Type":"ContainerStarted","Data":"f6cb2f899d9b65c53d73e37291a08fdecba73d9678a629ead4b84f1bd180b1f8"} Oct 11 10:53:25.774883 master-0 kubenswrapper[4790]: I1011 10:53:25.772543 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-1" event={"ID":"1ebbbaaf-f668-4c40-b437-2e730aef3912","Type":"ContainerStarted","Data":"3e61583376711b9abec6bb152782b40e7556f75aa64fba32d72957df4aae9f91"} Oct 11 10:53:25.777510 master-2 kubenswrapper[4776]: I1011 10:53:25.777446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:25.805743 master-0 kubenswrapper[4790]: I1011 10:53:25.804891 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-1" podStartSLOduration=8.341887527 podStartE2EDuration="28.804871048s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.829045044 +0000 UTC m=+859.383505336" lastFinishedPulling="2025-10-11 10:53:23.292028565 +0000 UTC m=+879.846488857" observedRunningTime="2025-10-11 10:53:25.801507375 +0000 UTC m=+882.355967687" watchObservedRunningTime="2025-10-11 10:53:25.804871048 +0000 UTC m=+882.359331340" Oct 
11 10:53:25.931254 master-2 kubenswrapper[4776]: I1011 10:53:25.931205 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-1" Oct 11 10:53:25.958097 master-0 kubenswrapper[4790]: W1011 10:53:25.957576 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2fc4e6a_fc2f_4ed0_a753_f5a54f83777c.slice/crio-b82148781df79646d5b0c5a4a653ab9892598923f30fe65b95aab44fedfae266 WatchSource:0}: Error finding container b82148781df79646d5b0c5a4a653ab9892598923f30fe65b95aab44fedfae266: Status 404 returned error can't find the container with id b82148781df79646d5b0c5a4a653ab9892598923f30fe65b95aab44fedfae266 Oct 11 10:53:25.959588 master-0 kubenswrapper[4790]: I1011 10:53:25.959505 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"] Oct 11 10:53:26.000004 master-0 kubenswrapper[4790]: I1011 10:53:25.999758 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-2"] Oct 11 10:53:26.001254 master-0 kubenswrapper[4790]: I1011 10:53:26.001224 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-2" Oct 11 10:53:26.014357 master-0 kubenswrapper[4790]: I1011 10:53:26.007989 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 11 10:53:26.014357 master-0 kubenswrapper[4790]: I1011 10:53:26.008255 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 11 10:53:26.018641 master-0 kubenswrapper[4790]: I1011 10:53:26.017945 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-2"] Oct 11 10:53:26.132186 master-0 kubenswrapper[4790]: I1011 10:53:26.132097 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-combined-ca-bundle\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.132767 master-0 kubenswrapper[4790]: I1011 10:53:26.132232 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-kolla-config\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.132767 master-0 kubenswrapper[4790]: I1011 10:53:26.132279 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-memcached-tls-certs\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.132767 master-0 kubenswrapper[4790]: I1011 10:53:26.132304 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpn6\" (UniqueName: 
\"kubernetes.io/projected/2d07581a-888a-4d3f-890b-550587e5657e-kube-api-access-7jpn6\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.132767 master-0 kubenswrapper[4790]: I1011 10:53:26.132335 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-config-data\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.233625 master-0 kubenswrapper[4790]: I1011 10:53:26.233471 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-kolla-config\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.233625 master-0 kubenswrapper[4790]: I1011 10:53:26.233569 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-memcached-tls-certs\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.233625 master-0 kubenswrapper[4790]: I1011 10:53:26.233596 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jpn6\" (UniqueName: \"kubernetes.io/projected/2d07581a-888a-4d3f-890b-550587e5657e-kube-api-access-7jpn6\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.233625 master-0 kubenswrapper[4790]: I1011 10:53:26.233632 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-config-data\") pod \"memcached-2\" (UID: 
\"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.233950 master-0 kubenswrapper[4790]: I1011 10:53:26.233658 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-combined-ca-bundle\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.235333 master-0 kubenswrapper[4790]: I1011 10:53:26.234557 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-kolla-config\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.235333 master-0 kubenswrapper[4790]: I1011 10:53:26.235237 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d07581a-888a-4d3f-890b-550587e5657e-config-data\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.237536 master-0 kubenswrapper[4790]: I1011 10:53:26.237484 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-memcached-tls-certs\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.239470 master-0 kubenswrapper[4790]: I1011 10:53:26.239431 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d07581a-888a-4d3f-890b-550587e5657e-combined-ca-bundle\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.264455 master-0 kubenswrapper[4790]: I1011 10:53:26.264363 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7jpn6\" (UniqueName: \"kubernetes.io/projected/2d07581a-888a-4d3f-890b-550587e5657e-kube-api-access-7jpn6\") pod \"memcached-2\" (UID: \"2d07581a-888a-4d3f-890b-550587e5657e\") " pod="openstack/memcached-2" Oct 11 10:53:26.341942 master-0 kubenswrapper[4790]: I1011 10:53:26.341858 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-2" Oct 11 10:53:26.450018 master-1 kubenswrapper[4771]: I1011 10:53:26.449930 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" path="/var/lib/kubelet/pods/37015f12-0983-4016-9f76-6d0e3f641f28/volumes" Oct 11 10:53:26.600411 master-2 kubenswrapper[4776]: I1011 10:53:26.600313 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:53:26.784165 master-0 kubenswrapper[4790]: I1011 10:53:26.784007 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"403c433e-e6f3-4732-9d44-95e68ac5d36d","Type":"ContainerStarted","Data":"861af966cb71f0799412276771ee1a44c25b273948db6102376d80a616fe7ce5"} Oct 11 10:53:26.786790 master-0 kubenswrapper[4790]: I1011 10:53:26.785540 4790 generic.go:334] "Generic (PLEG): container finished" podID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerID="5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720" exitCode=0 Oct 11 10:53:26.786790 master-0 kubenswrapper[4790]: I1011 10:53:26.785610 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" event={"ID":"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c","Type":"ContainerDied","Data":"5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720"} Oct 11 10:53:26.786790 master-0 kubenswrapper[4790]: I1011 10:53:26.785645 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" 
event={"ID":"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c","Type":"ContainerStarted","Data":"b82148781df79646d5b0c5a4a653ab9892598923f30fe65b95aab44fedfae266"} Oct 11 10:53:26.786790 master-0 kubenswrapper[4790]: I1011 10:53:26.786754 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-1" Oct 11 10:53:26.794026 master-0 kubenswrapper[4790]: I1011 10:53:26.793974 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-1" Oct 11 10:53:26.795677 master-0 kubenswrapper[4790]: I1011 10:53:26.795628 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-2"] Oct 11 10:53:26.803540 master-0 kubenswrapper[4790]: W1011 10:53:26.803478 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d07581a_888a_4d3f_890b_550587e5657e.slice/crio-07126fb93a2731bc90528e8dbe62a96535875a4d288aaa47518cc37c60307378 WatchSource:0}: Error finding container 07126fb93a2731bc90528e8dbe62a96535875a4d288aaa47518cc37c60307378: Status 404 returned error can't find the container with id 07126fb93a2731bc90528e8dbe62a96535875a4d288aaa47518cc37c60307378 Oct 11 10:53:26.814693 master-2 kubenswrapper[4776]: I1011 10:53:26.814530 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:26.822588 master-0 kubenswrapper[4790]: I1011 10:53:26.822520 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=16.800470886 podStartE2EDuration="17.822491741s" podCreationTimestamp="2025-10-11 10:53:09 +0000 UTC" firstStartedPulling="2025-10-11 10:53:24.499287434 +0000 UTC m=+881.053747726" lastFinishedPulling="2025-10-11 10:53:25.521308289 +0000 UTC m=+882.075768581" observedRunningTime="2025-10-11 10:53:26.808143503 +0000 UTC m=+883.362603825" 
watchObservedRunningTime="2025-10-11 10:53:26.822491741 +0000 UTC m=+883.376952033" Oct 11 10:53:26.922649 master-0 kubenswrapper[4790]: I1011 10:53:26.922429 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Oct 11 10:53:27.170842 master-2 kubenswrapper[4776]: I1011 10:53:27.170757 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:27.193187 master-1 kubenswrapper[4771]: I1011 10:53:27.193107 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 11 10:53:27.193187 master-1 kubenswrapper[4771]: I1011 10:53:27.193187 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 11 10:53:27.243951 master-1 kubenswrapper[4771]: I1011 10:53:27.243836 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerStarted","Data":"1023c854292d2503211da52aaf16aa7e2199948c97ebed99bad537459ca3e33b"} Oct 11 10:53:27.254467 master-1 kubenswrapper[4771]: I1011 10:53:27.254401 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c6e032cb-3993-4b10-afe4-7f77821c8583","Type":"ContainerStarted","Data":"bba61c0f4e00cda3da7bc8031cef1f99b6da7cf2a2f76cdd21295a1ccf9fbdbd"} Oct 11 10:53:27.254467 master-1 kubenswrapper[4771]: I1011 10:53:27.254475 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c6e032cb-3993-4b10-afe4-7f77821c8583","Type":"ContainerStarted","Data":"a7d15c920a660899f780bf876eea2e54e7c7d2393a551aa2fbb8276e512a0a28"} Oct 11 10:53:27.337235 master-1 kubenswrapper[4771]: I1011 10:53:27.337121 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=17.48348072 
podStartE2EDuration="18.337090499s" podCreationTimestamp="2025-10-11 10:53:09 +0000 UTC" firstStartedPulling="2025-10-11 10:53:25.155531992 +0000 UTC m=+1637.129758433" lastFinishedPulling="2025-10-11 10:53:26.009141731 +0000 UTC m=+1637.983368212" observedRunningTime="2025-10-11 10:53:27.32677016 +0000 UTC m=+1639.300996601" watchObservedRunningTime="2025-10-11 10:53:27.337090499 +0000 UTC m=+1639.311316960" Oct 11 10:53:27.641406 master-2 kubenswrapper[4776]: I1011 10:53:27.641357 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:27.795761 master-0 kubenswrapper[4790]: I1011 10:53:27.795558 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" event={"ID":"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c","Type":"ContainerStarted","Data":"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3"} Oct 11 10:53:27.795761 master-0 kubenswrapper[4790]: I1011 10:53:27.795745 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:27.798545 master-0 kubenswrapper[4790]: I1011 10:53:27.798466 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-2" event={"ID":"2d07581a-888a-4d3f-890b-550587e5657e","Type":"ContainerStarted","Data":"07126fb93a2731bc90528e8dbe62a96535875a4d288aaa47518cc37c60307378"} Oct 11 10:53:27.822062 master-0 kubenswrapper[4790]: I1011 10:53:27.821955 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" podStartSLOduration=2.82192503 podStartE2EDuration="2.82192503s" podCreationTimestamp="2025-10-11 10:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:27.818793534 +0000 UTC m=+884.373253826" watchObservedRunningTime="2025-10-11 10:53:27.82192503 +0000 UTC m=+884.376385332" 
Oct 11 10:53:27.916215 master-1 kubenswrapper[4771]: I1011 10:53:27.916100 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:28.210077 master-2 kubenswrapper[4776]: I1011 10:53:28.210005 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:28.720023 master-1 kubenswrapper[4771]: I1011 10:53:28.719848 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:28.720739 master-1 kubenswrapper[4771]: I1011 10:53:28.720676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:28.916848 master-1 kubenswrapper[4771]: I1011 10:53:28.916730 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:28.921559 master-0 kubenswrapper[4790]: I1011 10:53:28.921518 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Oct 11 10:53:29.304195 master-1 kubenswrapper[4771]: I1011 10:53:29.304142 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Oct 11 10:53:29.837473 master-0 kubenswrapper[4790]: I1011 10:53:29.837330 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-2" event={"ID":"2d07581a-888a-4d3f-890b-550587e5657e","Type":"ContainerStarted","Data":"3d7d2e12092061298f8a990c80b59dad5f4f26ac715667c9d48cd66a44e6d3b6"} Oct 11 10:53:29.837942 master-0 kubenswrapper[4790]: I1011 10:53:29.837734 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-2" Oct 11 10:53:29.895902 master-0 kubenswrapper[4790]: I1011 10:53:29.895797 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-2" podStartSLOduration=2.941227627 podStartE2EDuration="4.895764457s" 
podCreationTimestamp="2025-10-11 10:53:25 +0000 UTC" firstStartedPulling="2025-10-11 10:53:26.807556007 +0000 UTC m=+883.362016299" lastFinishedPulling="2025-10-11 10:53:28.762092827 +0000 UTC m=+885.316553129" observedRunningTime="2025-10-11 10:53:29.884583428 +0000 UTC m=+886.439043750" watchObservedRunningTime="2025-10-11 10:53:29.895764457 +0000 UTC m=+886.450224779" Oct 11 10:53:29.968226 master-0 kubenswrapper[4790]: I1011 10:53:29.968120 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Oct 11 10:53:30.907056 master-0 kubenswrapper[4790]: I1011 10:53:30.906963 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Oct 11 10:53:31.261770 master-2 kubenswrapper[4776]: I1011 10:53:31.261548 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-2" Oct 11 10:53:31.261770 master-2 kubenswrapper[4776]: I1011 10:53:31.261610 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-2" Oct 11 10:53:31.309975 master-0 kubenswrapper[4790]: I1011 10:53:31.309840 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"] Oct 11 10:53:31.310663 master-0 kubenswrapper[4790]: I1011 10:53:31.310217 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="dnsmasq-dns" containerID="cri-o://0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3" gracePeriod=10 Oct 11 10:53:31.314387 master-2 kubenswrapper[4776]: I1011 10:53:31.314339 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-2" Oct 11 10:53:31.315897 master-0 kubenswrapper[4790]: I1011 10:53:31.315852 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:31.359173 master-1 kubenswrapper[4771]: I1011 10:53:31.359006 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"] Oct 11 10:53:31.360142 master-1 kubenswrapper[4771]: E1011 10:53:31.360102 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="init" Oct 11 10:53:31.360142 master-1 kubenswrapper[4771]: I1011 10:53:31.360136 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="init" Oct 11 10:53:31.360230 master-1 kubenswrapper[4771]: E1011 10:53:31.360190 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="dnsmasq-dns" Oct 11 10:53:31.360230 master-1 kubenswrapper[4771]: I1011 10:53:31.360201 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="dnsmasq-dns" Oct 11 10:53:31.360556 master-1 kubenswrapper[4771]: I1011 10:53:31.360479 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="37015f12-0983-4016-9f76-6d0e3f641f28" containerName="dnsmasq-dns" Oct 11 10:53:31.365849 master-1 kubenswrapper[4771]: I1011 10:53:31.365780 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.370596 master-1 kubenswrapper[4771]: I1011 10:53:31.370087 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:53:31.371469 master-1 kubenswrapper[4771]: I1011 10:53:31.370382 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:53:31.414899 master-1 kubenswrapper[4771]: I1011 10:53:31.407710 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"] Oct 11 10:53:31.414899 master-1 kubenswrapper[4771]: I1011 10:53:31.414434 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 11 10:53:31.416540 master-1 kubenswrapper[4771]: I1011 10:53:31.416504 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 11 10:53:31.422968 master-1 kubenswrapper[4771]: I1011 10:53:31.422905 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 11 10:53:31.423071 master-1 kubenswrapper[4771]: I1011 10:53:31.422999 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 11 10:53:31.423129 master-1 kubenswrapper[4771]: I1011 10:53:31.423043 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 11 10:53:31.423396 master-1 kubenswrapper[4771]: I1011 10:53:31.423343 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 11 10:53:31.441775 master-1 kubenswrapper[4771]: I1011 10:53:31.441616 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " 
pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.441775 master-1 kubenswrapper[4771]: I1011 10:53:31.441717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbhb\" (UniqueName: \"kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.441775 master-1 kubenswrapper[4771]: I1011 10:53:31.441764 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.441961 master-1 kubenswrapper[4771]: I1011 10:53:31.441793 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.441961 master-1 kubenswrapper[4771]: I1011 10:53:31.441827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.544103 master-1 kubenswrapper[4771]: I1011 10:53:31.544001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc\") pod 
\"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.544103 master-1 kubenswrapper[4771]: I1011 10:53:31.544085 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.544103 master-1 kubenswrapper[4771]: I1011 10:53:31.544120 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544143 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-config\") pod 
\"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp87r\" (UniqueName: \"kubernetes.io/projected/e142309b-a5e4-48bc-913f-89bb35b61a51-kube-api-access-lp87r\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544313 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-scripts\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544383 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544417 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544445 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.544582 master-1 kubenswrapper[4771]: I1011 10:53:31.544475 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgbhb\" (UniqueName: \"kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.545693 master-1 kubenswrapper[4771]: I1011 10:53:31.545630 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.545899 master-1 kubenswrapper[4771]: I1011 10:53:31.545803 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.545985 master-1 kubenswrapper[4771]: I1011 10:53:31.545922 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.546866 master-1 kubenswrapper[4771]: I1011 10:53:31.546827 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " 
pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.567394 master-1 kubenswrapper[4771]: I1011 10:53:31.567291 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgbhb\" (UniqueName: \"kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb\") pod \"dnsmasq-dns-7b64bc6b99-wp674\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.646106 master-1 kubenswrapper[4771]: I1011 10:53:31.646006 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646106 master-1 kubenswrapper[4771]: I1011 10:53:31.646103 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-config\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646596 master-1 kubenswrapper[4771]: I1011 10:53:31.646131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp87r\" (UniqueName: \"kubernetes.io/projected/e142309b-a5e4-48bc-913f-89bb35b61a51-kube-api-access-lp87r\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646596 master-1 kubenswrapper[4771]: I1011 10:53:31.646173 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-scripts\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646596 master-1 kubenswrapper[4771]: I1011 
10:53:31.646205 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646596 master-1 kubenswrapper[4771]: I1011 10:53:31.646240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.646596 master-1 kubenswrapper[4771]: I1011 10:53:31.646284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.647461 master-1 kubenswrapper[4771]: I1011 10:53:31.647318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.647994 master-1 kubenswrapper[4771]: I1011 10:53:31.647933 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-scripts\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.648054 master-1 kubenswrapper[4771]: I1011 10:53:31.647974 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e142309b-a5e4-48bc-913f-89bb35b61a51-config\") 
pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.651376 master-1 kubenswrapper[4771]: I1011 10:53:31.651309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.652333 master-1 kubenswrapper[4771]: I1011 10:53:31.652282 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.652468 master-1 kubenswrapper[4771]: I1011 10:53:31.652395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e142309b-a5e4-48bc-913f-89bb35b61a51-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.674275 master-1 kubenswrapper[4771]: I1011 10:53:31.674073 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp87r\" (UniqueName: \"kubernetes.io/projected/e142309b-a5e4-48bc-913f-89bb35b61a51-kube-api-access-lp87r\") pod \"ovn-northd-0\" (UID: \"e142309b-a5e4-48bc-913f-89bb35b61a51\") " pod="openstack/ovn-northd-0" Oct 11 10:53:31.683175 master-2 kubenswrapper[4776]: I1011 10:53:31.683073 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-2" Oct 11 10:53:31.724523 master-1 kubenswrapper[4771]: I1011 10:53:31.724415 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" Oct 11 10:53:31.738519 master-1 kubenswrapper[4771]: I1011 10:53:31.738235 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 11 10:53:31.846157 master-0 kubenswrapper[4790]: I1011 10:53:31.846094 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:31.858507 master-0 kubenswrapper[4790]: I1011 10:53:31.858437 4790 generic.go:334] "Generic (PLEG): container finished" podID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerID="0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3" exitCode=0 Oct 11 10:53:31.858663 master-0 kubenswrapper[4790]: I1011 10:53:31.858513 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" Oct 11 10:53:31.858663 master-0 kubenswrapper[4790]: I1011 10:53:31.858514 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" event={"ID":"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c","Type":"ContainerDied","Data":"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3"} Oct 11 10:53:31.858663 master-0 kubenswrapper[4790]: I1011 10:53:31.858597 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b589666c-j6ppm" event={"ID":"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c","Type":"ContainerDied","Data":"b82148781df79646d5b0c5a4a653ab9892598923f30fe65b95aab44fedfae266"} Oct 11 10:53:31.858663 master-0 kubenswrapper[4790]: I1011 10:53:31.858628 4790 scope.go:117] "RemoveContainer" containerID="0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3" Oct 11 10:53:31.881934 master-0 kubenswrapper[4790]: I1011 10:53:31.880801 4790 scope.go:117] "RemoveContainer" containerID="5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720" Oct 11 10:53:31.899870 master-0 
kubenswrapper[4790]: I1011 10:53:31.899836 4790 scope.go:117] "RemoveContainer" containerID="0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3" Oct 11 10:53:31.900407 master-0 kubenswrapper[4790]: E1011 10:53:31.900382 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3\": container with ID starting with 0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3 not found: ID does not exist" containerID="0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3" Oct 11 10:53:31.900477 master-0 kubenswrapper[4790]: I1011 10:53:31.900423 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3"} err="failed to get container status \"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3\": rpc error: code = NotFound desc = could not find container \"0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3\": container with ID starting with 0cc3af213ad614ffbe7a12dc806856290a11df301f2407682b989da7fc2469d3 not found: ID does not exist" Oct 11 10:53:31.900477 master-0 kubenswrapper[4790]: I1011 10:53:31.900452 4790 scope.go:117] "RemoveContainer" containerID="5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720" Oct 11 10:53:31.901002 master-0 kubenswrapper[4790]: E1011 10:53:31.900978 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720\": container with ID starting with 5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720 not found: ID does not exist" containerID="5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720" Oct 11 10:53:31.901117 master-0 kubenswrapper[4790]: I1011 10:53:31.901092 4790 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720"} err="failed to get container status \"5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720\": rpc error: code = NotFound desc = could not find container \"5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720\": container with ID starting with 5420918b5cf67f23160281d51d5a46679486566bda34999aa7dc361bb9c90720 not found: ID does not exist" Oct 11 10:53:31.958347 master-1 kubenswrapper[4771]: I1011 10:53:31.958169 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:31.983113 master-0 kubenswrapper[4790]: I1011 10:53:31.983030 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config\") pod \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " Oct 11 10:53:31.983526 master-0 kubenswrapper[4790]: I1011 10:53:31.983221 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb\") pod \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " Oct 11 10:53:31.983526 master-0 kubenswrapper[4790]: I1011 10:53:31.983270 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc\") pod \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " Oct 11 10:53:31.983526 master-0 kubenswrapper[4790]: I1011 10:53:31.983307 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-687pg\" (UniqueName: 
\"kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg\") pod \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\" (UID: \"c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c\") " Oct 11 10:53:31.988467 master-0 kubenswrapper[4790]: I1011 10:53:31.988395 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg" (OuterVolumeSpecName: "kube-api-access-687pg") pod "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" (UID: "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c"). InnerVolumeSpecName "kube-api-access-687pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:32.010379 master-1 kubenswrapper[4771]: I1011 10:53:32.010307 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Oct 11 10:53:32.013264 master-0 kubenswrapper[4790]: I1011 10:53:32.013186 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" (UID: "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:32.013472 master-0 kubenswrapper[4790]: I1011 10:53:32.013197 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" (UID: "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:32.020245 master-0 kubenswrapper[4790]: I1011 10:53:32.020163 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config" (OuterVolumeSpecName: "config") pod "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" (UID: "c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:32.085616 master-0 kubenswrapper[4790]: I1011 10:53:32.085552 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:32.085616 master-0 kubenswrapper[4790]: I1011 10:53:32.085603 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-dns-svc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:32.085616 master-0 kubenswrapper[4790]: I1011 10:53:32.085613 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-687pg\" (UniqueName: \"kubernetes.io/projected/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-kube-api-access-687pg\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:32.085616 master-0 kubenswrapper[4790]: I1011 10:53:32.085626 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:32.197491 master-0 kubenswrapper[4790]: I1011 10:53:32.197431 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-1" Oct 11 10:53:32.197662 master-0 kubenswrapper[4790]: I1011 10:53:32.197527 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-1" Oct 11 10:53:32.219408 
master-1 kubenswrapper[4771]: W1011 10:53:32.218877 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode142309b_a5e4_48bc_913f_89bb35b61a51.slice/crio-1b05c8ea4ae87188bc6ea327174e59a0bd3129e474c9d9652dc8902c7ef5116f WatchSource:0}: Error finding container 1b05c8ea4ae87188bc6ea327174e59a0bd3129e474c9d9652dc8902c7ef5116f: Status 404 returned error can't find the container with id 1b05c8ea4ae87188bc6ea327174e59a0bd3129e474c9d9652dc8902c7ef5116f
Oct 11 10:53:32.224477 master-0 kubenswrapper[4790]: I1011 10:53:32.224144 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"]
Oct 11 10:53:32.224496 master-1 kubenswrapper[4771]: I1011 10:53:32.224442 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 11 10:53:32.241814 master-0 kubenswrapper[4790]: I1011 10:53:32.241225 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77b589666c-j6ppm"]
Oct 11 10:53:32.291236 master-1 kubenswrapper[4771]: I1011 10:53:32.291128 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e142309b-a5e4-48bc-913f-89bb35b61a51","Type":"ContainerStarted","Data":"1b05c8ea4ae87188bc6ea327174e59a0bd3129e474c9d9652dc8902c7ef5116f"}
Oct 11 10:53:32.304306 master-0 kubenswrapper[4790]: I1011 10:53:32.304138 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" path="/var/lib/kubelet/pods/c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c/volumes"
Oct 11 10:53:32.311071 master-1 kubenswrapper[4771]: I1011 10:53:32.311006 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"]
Oct 11 10:53:32.312263 master-1 kubenswrapper[4771]: W1011 10:53:32.312185 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49a589aa_7d75_4aba_aca3_9fffa3d86378.slice/crio-e04ccf4542c4af977ce52340f83783b293bb99776af706005aa7ec0a114852af WatchSource:0}: Error finding container e04ccf4542c4af977ce52340f83783b293bb99776af706005aa7ec0a114852af: Status 404 returned error can't find the container with id e04ccf4542c4af977ce52340f83783b293bb99776af706005aa7ec0a114852af
Oct 11 10:53:32.366988 master-2 kubenswrapper[4776]: I1011 10:53:32.366919 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 11 10:53:33.301185 master-1 kubenswrapper[4771]: I1011 10:53:33.301109 4771 generic.go:334] "Generic (PLEG): container finished" podID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerID="9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f" exitCode=0
Oct 11 10:53:33.301185 master-1 kubenswrapper[4771]: I1011 10:53:33.301175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" event={"ID":"49a589aa-7d75-4aba-aca3-9fffa3d86378","Type":"ContainerDied","Data":"9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f"}
Oct 11 10:53:33.301185 master-1 kubenswrapper[4771]: I1011 10:53:33.301205 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" event={"ID":"49a589aa-7d75-4aba-aca3-9fffa3d86378","Type":"ContainerStarted","Data":"e04ccf4542c4af977ce52340f83783b293bb99776af706005aa7ec0a114852af"}
Oct 11 10:53:34.312439 master-1 kubenswrapper[4771]: I1011 10:53:34.312337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e142309b-a5e4-48bc-913f-89bb35b61a51","Type":"ContainerStarted","Data":"63206d81ced3a439d95ad40d612eb8d5678c748f255189d6f39d3bfe64481a95"}
Oct 11 10:53:34.313069 master-1 kubenswrapper[4771]: I1011 10:53:34.312457 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e142309b-a5e4-48bc-913f-89bb35b61a51","Type":"ContainerStarted","Data":"24f981b47ac3bf3f8c739d3b285b499930421c8e1dc44f02e79285a93ac186fa"}
Oct 11 10:53:34.313069 master-1 kubenswrapper[4771]: I1011 10:53:34.312541 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Oct 11 10:53:34.314914 master-1 kubenswrapper[4771]: I1011 10:53:34.314885 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" event={"ID":"49a589aa-7d75-4aba-aca3-9fffa3d86378","Type":"ContainerStarted","Data":"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"}
Oct 11 10:53:34.315223 master-1 kubenswrapper[4771]: I1011 10:53:34.315160 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674"
Oct 11 10:53:34.316557 master-0 kubenswrapper[4790]: I1011 10:53:34.316494 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-2"
Oct 11 10:53:34.367478 master-0 kubenswrapper[4790]: I1011 10:53:34.367379 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-2"
Oct 11 10:53:34.620704 master-1 kubenswrapper[4771]: I1011 10:53:34.620571 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.491795666 podStartE2EDuration="3.620537736s" podCreationTimestamp="2025-10-11 10:53:31 +0000 UTC" firstStartedPulling="2025-10-11 10:53:32.225303305 +0000 UTC m=+1644.199529746" lastFinishedPulling="2025-10-11 10:53:33.354045375 +0000 UTC m=+1645.328271816" observedRunningTime="2025-10-11 10:53:34.616918691 +0000 UTC m=+1646.591145152" watchObservedRunningTime="2025-10-11 10:53:34.620537736 +0000 UTC m=+1646.594764187"
Oct 11 10:53:34.823833 master-1 kubenswrapper[4771]: I1011 10:53:34.823709 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" podStartSLOduration=3.823690777 podStartE2EDuration="3.823690777s" podCreationTimestamp="2025-10-11 10:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:34.821908205 +0000 UTC m=+1646.796134706" watchObservedRunningTime="2025-10-11 10:53:34.823690777 +0000 UTC m=+1646.797917228"
Oct 11 10:53:35.692058 master-2 kubenswrapper[4776]: I1011 10:53:35.691993 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d7297"]
Oct 11 10:53:35.693312 master-2 kubenswrapper[4776]: I1011 10:53:35.693294 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7297"
Oct 11 10:53:35.716188 master-2 kubenswrapper[4776]: I1011 10:53:35.716133 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7297"]
Oct 11 10:53:35.765610 master-2 kubenswrapper[4776]: I1011 10:53:35.765548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297"
Oct 11 10:53:35.866985 master-2 kubenswrapper[4776]: I1011 10:53:35.866841 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297"
Oct 11 10:53:35.889928 master-2 kubenswrapper[4776]: I1011 10:53:35.889864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297"
Oct 11 10:53:36.053600 master-2 kubenswrapper[4776]: I1011 10:53:36.053411 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7297"
Oct 11 10:53:36.081425 master-2 kubenswrapper[4776]: I1011 10:53:36.081346 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v9tlh"]
Oct 11 10:53:36.082373 master-2 kubenswrapper[4776]: I1011 10:53:36.082328 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9tlh"
Oct 11 10:53:36.195710 master-2 kubenswrapper[4776]: I1011 10:53:36.195461 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9tlh"]
Oct 11 10:53:36.272912 master-2 kubenswrapper[4776]: I1011 10:53:36.272824 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh"
Oct 11 10:53:36.336437 master-1 kubenswrapper[4771]: I1011 10:53:36.336343 4771 generic.go:334] "Generic (PLEG): container finished" podID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerID="1023c854292d2503211da52aaf16aa7e2199948c97ebed99bad537459ca3e33b" exitCode=0
Oct 11 10:53:36.337041 master-1 kubenswrapper[4771]: I1011 10:53:36.336459 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerDied","Data":"1023c854292d2503211da52aaf16aa7e2199948c97ebed99bad537459ca3e33b"}
Oct 11 10:53:36.344298 master-0 kubenswrapper[4790]: I1011 10:53:36.344211 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-2"
Oct 11 10:53:36.374070 master-2 kubenswrapper[4776]: I1011 10:53:36.374021 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh"
Oct 11 10:53:36.494482 master-2 kubenswrapper[4776]: I1011 10:53:36.494025 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh"
Oct 11 10:53:36.506363 master-2 kubenswrapper[4776]: I1011 10:53:36.506304 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7297"]
Oct 11 10:53:36.671811 master-2 kubenswrapper[4776]: I1011 10:53:36.671759 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerStarted","Data":"c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd"}
Oct 11 10:53:36.768655 master-2 kubenswrapper[4776]: I1011 10:53:36.768473 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9tlh"
Oct 11 10:53:37.267358 master-2 kubenswrapper[4776]: I1011 10:53:37.267310 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9tlh"]
Oct 11 10:53:37.270216 master-2 kubenswrapper[4776]: W1011 10:53:37.270177 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a7cb456_8a0b_4e56_9dc5_93b488813f77.slice/crio-a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae WatchSource:0}: Error finding container a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae: Status 404 returned error can't find the container with id a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae
Oct 11 10:53:37.566315 master-1 kubenswrapper[4771]: I1011 10:53:37.566237 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dnsz5"]
Oct 11 10:53:37.567678 master-1 kubenswrapper[4771]: I1011 10:53:37.567630 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:37.602344 master-1 kubenswrapper[4771]: I1011 10:53:37.602290 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dnsz5"]
Oct 11 10:53:37.681060 master-2 kubenswrapper[4776]: I1011 10:53:37.680941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerStarted","Data":"6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa"}
Oct 11 10:53:37.683069 master-2 kubenswrapper[4776]: I1011 10:53:37.682992 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerStarted","Data":"c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5"}
Oct 11 10:53:37.683069 master-2 kubenswrapper[4776]: I1011 10:53:37.683069 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerStarted","Data":"a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae"}
Oct 11 10:53:37.695396 master-1 kubenswrapper[4771]: I1011 10:53:37.695268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") pod \"glance-db-create-dnsz5\" (UID: \"57ac9130-d850-4420-a75e-53ec744b16eb\") " pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:37.797423 master-1 kubenswrapper[4771]: I1011 10:53:37.797340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") pod \"glance-db-create-dnsz5\" (UID: \"57ac9130-d850-4420-a75e-53ec744b16eb\") " pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:37.804158 master-1 kubenswrapper[4771]: E1011 10:53:37.803993 4771 projected.go:194] Error preparing data for projected volume kube-api-access-2dnbv for pod openstack/glance-db-create-dnsz5: failed to fetch token: pod "glance-db-create-dnsz5" not found
Oct 11 10:53:37.804158 master-1 kubenswrapper[4771]: E1011 10:53:37.804104 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv podName:57ac9130-d850-4420-a75e-53ec744b16eb nodeName:}" failed. No retries permitted until 2025-10-11 10:53:38.304080708 +0000 UTC m=+1650.278307149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2dnbv" (UniqueName: "kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv") pod "glance-db-create-dnsz5" (UID: "57ac9130-d850-4420-a75e-53ec744b16eb") : failed to fetch token: pod "glance-db-create-dnsz5" not found
Oct 11 10:53:38.313685 master-1 kubenswrapper[4771]: I1011 10:53:38.312913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") pod \"glance-db-create-dnsz5\" (UID: \"57ac9130-d850-4420-a75e-53ec744b16eb\") " pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:38.347149 master-1 kubenswrapper[4771]: I1011 10:53:38.346739 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") pod \"glance-db-create-dnsz5\" (UID: \"57ac9130-d850-4420-a75e-53ec744b16eb\") " pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:38.518916 master-1 kubenswrapper[4771]: I1011 10:53:38.518843 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dnsz5"
Oct 11 10:53:38.878277 master-2 kubenswrapper[4776]: I1011 10:53:38.878188 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-d7297" podStartSLOduration=3.8781667840000003 podStartE2EDuration="3.878166784s" podCreationTimestamp="2025-10-11 10:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:38.846581959 +0000 UTC m=+1653.631008678" watchObservedRunningTime="2025-10-11 10:53:38.878166784 +0000 UTC m=+1653.662593513"
Oct 11 10:53:38.884460 master-2 kubenswrapper[4776]: I1011 10:53:38.884382 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-v9tlh" podStartSLOduration=2.884363192 podStartE2EDuration="2.884363192s" podCreationTimestamp="2025-10-11 10:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:38.878005599 +0000 UTC m=+1653.662432328" watchObservedRunningTime="2025-10-11 10:53:38.884363192 +0000 UTC m=+1653.668789901"
Oct 11 10:53:39.362222 master-1 kubenswrapper[4771]: I1011 10:53:39.362145 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dnsz5"]
Oct 11 10:53:39.373491 master-1 kubenswrapper[4771]: W1011 10:53:39.373411 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ac9130_d850_4420_a75e_53ec744b16eb.slice/crio-2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13 WatchSource:0}: Error finding container 2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13: Status 404 returned error can't find the container with id 2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13
Oct 11 10:53:40.372207 master-1 kubenswrapper[4771]: I1011 10:53:40.372131 4771 generic.go:334] "Generic (PLEG): container finished" podID="6fe99fba-e358-4203-a516-04b9ae19d789" containerID="7f8fc71d7ad02d8da77907079a53d04db1a0fb1212260a6e3e48d8f38e321946" exitCode=0
Oct 11 10:53:40.372924 master-1 kubenswrapper[4771]: I1011 10:53:40.372269 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6fe99fba-e358-4203-a516-04b9ae19d789","Type":"ContainerDied","Data":"7f8fc71d7ad02d8da77907079a53d04db1a0fb1212260a6e3e48d8f38e321946"}
Oct 11 10:53:40.374957 master-1 kubenswrapper[4771]: I1011 10:53:40.374915 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dnsz5" event={"ID":"57ac9130-d850-4420-a75e-53ec744b16eb","Type":"ContainerStarted","Data":"497e015434504b4db642357797a1c623d7b35238dcc0952d89c6a79885be7010"}
Oct 11 10:53:40.375012 master-1 kubenswrapper[4771]: I1011 10:53:40.374963 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dnsz5" event={"ID":"57ac9130-d850-4420-a75e-53ec744b16eb","Type":"ContainerStarted","Data":"2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13"}
Oct 11 10:53:41.387134 master-1 kubenswrapper[4771]: I1011 10:53:41.387038 4771 generic.go:334] "Generic (PLEG): container finished" podID="831321b9-20ce-409b-8bdb-ec231aef5f35" containerID="46e81e63ab3ceec54c8e0da9448541aeaf71c73eb9783cb511b8ceaa6d4dbd06" exitCode=0
Oct 11 10:53:41.387874 master-1 kubenswrapper[4771]: I1011 10:53:41.387189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"831321b9-20ce-409b-8bdb-ec231aef5f35","Type":"ContainerDied","Data":"46e81e63ab3ceec54c8e0da9448541aeaf71c73eb9783cb511b8ceaa6d4dbd06"}
Oct 11 10:53:41.390136 master-1 kubenswrapper[4771]: I1011 10:53:41.390090 4771 generic.go:334] "Generic (PLEG): container finished" podID="57ac9130-d850-4420-a75e-53ec744b16eb" containerID="497e015434504b4db642357797a1c623d7b35238dcc0952d89c6a79885be7010" exitCode=0
Oct 11 10:53:41.390246 master-1 kubenswrapper[4771]: I1011 10:53:41.390148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dnsz5" event={"ID":"57ac9130-d850-4420-a75e-53ec744b16eb","Type":"ContainerDied","Data":"497e015434504b4db642357797a1c623d7b35238dcc0952d89c6a79885be7010"}
Oct 11 10:53:41.490981 master-1 kubenswrapper[4771]: I1011 10:53:41.490884 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mtzk7" podUID="4ab25521-7fba-40c9-b3db-377b1d0ec7a1" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:41.490981 master-1 kubenswrapper[4771]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:41.490981 master-1 kubenswrapper[4771]: >
Oct 11 10:53:41.504904 master-0 kubenswrapper[4790]: I1011 10:53:41.504782 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-52t2l" podUID="8c164a4b-a2d5-4570-aed3-86dbb1f3d47c" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:41.504904 master-0 kubenswrapper[4790]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:41.504904 master-0 kubenswrapper[4790]: >
Oct 11 10:53:41.529430 master-1 kubenswrapper[4771]: I1011 10:53:41.523947 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: I1011 10:53:41.532105 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: >
Oct 11 10:53:41.712722 master-2 kubenswrapper[4776]: I1011 10:53:41.712616 4776 generic.go:334] "Generic (PLEG): container finished" podID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerID="c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5" exitCode=0
Oct 11 10:53:41.712980 master-2 kubenswrapper[4776]: I1011 10:53:41.712742 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerDied","Data":"c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5"}
Oct 11 10:53:41.715221 master-2 kubenswrapper[4776]: I1011 10:53:41.715173 4776 generic.go:334] "Generic (PLEG): container finished" podID="829885e7-9e39-447e-a4f0-2ac128443d04" containerID="6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa" exitCode=0
Oct 11 10:53:41.715316 master-2 kubenswrapper[4776]: I1011 10:53:41.715221 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerDied","Data":"6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa"}
Oct 11 10:53:41.727551 master-1 kubenswrapper[4771]: I1011 10:53:41.727478 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674"
Oct 11 10:53:42.012814 master-1 kubenswrapper[4771]: I1011 10:53:42.012765 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"]
Oct 11 10:53:42.013308 master-1 kubenswrapper[4771]: I1011 10:53:42.013273 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="dnsmasq-dns" containerID="cri-o://14ed7d218f9217fbceb4436ac3f26fb55858bf77044f44bd18a2d4ffe4eacee3" gracePeriod=10
Oct 11 10:53:42.134828 master-0 kubenswrapper[4790]: I1011 10:53:42.134764 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"]
Oct 11 10:53:42.135229 master-0 kubenswrapper[4790]: E1011 10:53:42.135180 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="init"
Oct 11 10:53:42.135229 master-0 kubenswrapper[4790]: I1011 10:53:42.135196 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="init"
Oct 11 10:53:42.135229 master-0 kubenswrapper[4790]: E1011 10:53:42.135224 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="dnsmasq-dns"
Oct 11 10:53:42.135229 master-0 kubenswrapper[4790]: I1011 10:53:42.135230 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="dnsmasq-dns"
Oct 11 10:53:42.135377 master-0 kubenswrapper[4790]: I1011 10:53:42.135369 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2fc4e6a-fc2f-4ed0-a753-f5a54f83777c" containerName="dnsmasq-dns"
Oct 11 10:53:42.136478 master-0 kubenswrapper[4790]: I1011 10:53:42.136420 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.140027 master-0 kubenswrapper[4790]: I1011 10:53:42.139973 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 10:53:42.140298 master-0 kubenswrapper[4790]: I1011 10:53:42.140255 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 10:53:42.140463 master-0 kubenswrapper[4790]: I1011 10:53:42.140440 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 10:53:42.140736 master-0 kubenswrapper[4790]: I1011 10:53:42.140689 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 10:53:42.159791 master-0 kubenswrapper[4790]: I1011 10:53:42.157226 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"]
Oct 11 10:53:42.181162 master-0 kubenswrapper[4790]: I1011 10:53:42.181039 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.181503 master-0 kubenswrapper[4790]: I1011 10:53:42.181300 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.181503 master-0 kubenswrapper[4790]: I1011 10:53:42.181372 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.181503 master-0 kubenswrapper[4790]: I1011 10:53:42.181451 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqms\" (UniqueName: \"kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.181620 master-0 kubenswrapper[4790]: I1011 10:53:42.181557 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.283281 master-0 kubenswrapper[4790]: I1011 10:53:42.283214 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjqms\" (UniqueName: \"kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.283281 master-0 kubenswrapper[4790]: I1011 10:53:42.283302 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.283628 master-0 kubenswrapper[4790]: I1011 10:53:42.283353 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.283628 master-0 kubenswrapper[4790]: I1011 10:53:42.283393 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.283628 master-0 kubenswrapper[4790]: I1011 10:53:42.283422 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.284962 master-0 kubenswrapper[4790]: I1011 10:53:42.284842 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.284962 master-0 kubenswrapper[4790]: I1011 10:53:42.284840 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.285072 master-0 kubenswrapper[4790]: I1011 10:53:42.284984 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.286574 master-0 kubenswrapper[4790]: I1011 10:53:42.286528 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.305842 master-0 kubenswrapper[4790]: I1011 10:53:42.305787 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjqms\" (UniqueName: \"kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms\") pod \"dnsmasq-dns-6c99f4877f-dv8jt\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.401509 master-1 kubenswrapper[4771]: I1011 10:53:42.401449 4771 generic.go:334] "Generic (PLEG): container finished" podID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerID="14ed7d218f9217fbceb4436ac3f26fb55858bf77044f44bd18a2d4ffe4eacee3" exitCode=0
Oct 11 10:53:42.402471 master-1 kubenswrapper[4771]: I1011 10:53:42.401533 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" event={"ID":"0ae0f8e3-9e87-45b8-8313-a0b65cf33106","Type":"ContainerDied","Data":"14ed7d218f9217fbceb4436ac3f26fb55858bf77044f44bd18a2d4ffe4eacee3"}
Oct 11 10:53:42.404544 master-1 kubenswrapper[4771]: I1011 10:53:42.404435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"831321b9-20ce-409b-8bdb-ec231aef5f35","Type":"ContainerStarted","Data":"2e859e6aac8242725e1925bb9f62522cc9495b0f285ec9272dc591ce52c18bcd"}
Oct 11 10:53:42.405426 master-1 kubenswrapper[4771]: I1011 10:53:42.405400 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Oct 11 10:53:42.407540 master-1 kubenswrapper[4771]: I1011 10:53:42.407508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerStarted","Data":"dceeb58fe69a42771858429ecb1c63834e43aa17881864b8125d37688c790df5"}
Oct 11 10:53:42.410874 master-1 kubenswrapper[4771]: I1011 10:53:42.410832 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6fe99fba-e358-4203-a516-04b9ae19d789","Type":"ContainerStarted","Data":"deea6ef83eb5b94b2f3189eda972012964d768194580bfc4a7d45bbaf474b35e"}
Oct 11 10:53:42.411740 master-1 kubenswrapper[4771]: I1011 10:53:42.411705 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Oct 11 10:53:42.470929 master-0 kubenswrapper[4790]: I1011 10:53:42.470218 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:42.475195 master-1 kubenswrapper[4771]: I1011 10:53:42.475109 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.558349019 podStartE2EDuration="52.475084508s" podCreationTimestamp="2025-10-11 10:52:50 +0000 UTC" firstStartedPulling="2025-10-11 10:53:01.408273851 +0000 UTC m=+1613.382500292" lastFinishedPulling="2025-10-11 10:53:06.3250093 +0000 UTC m=+1618.299235781" observedRunningTime="2025-10-11 10:53:42.473725728 +0000 UTC m=+1654.447952189" watchObservedRunningTime="2025-10-11 10:53:42.475084508 +0000 UTC m=+1654.449310949"
Oct 11 10:53:42.475785 master-1 kubenswrapper[4771]: I1011 10:53:42.475742 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.475734236 podStartE2EDuration="51.475734236s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:42.438947163 +0000 UTC m=+1654.413173604" watchObservedRunningTime="2025-10-11 10:53:42.475734236 +0000 UTC m=+1654.449960687"
Oct 11 10:53:42.599563 master-1 kubenswrapper[4771]: I1011 10:53:42.599497 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-85bsq"
Oct 11 10:53:42.710908 master-1 kubenswrapper[4771]: I1011 10:53:42.710653 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc\") pod \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") "
Oct 11 10:53:42.710908 master-1 kubenswrapper[4771]: I1011 10:53:42.710885 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b4pv\" (UniqueName: \"kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv\") pod \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") "
Oct 11 10:53:42.712862 master-1 kubenswrapper[4771]: I1011 10:53:42.711151 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config\") pod \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\" (UID: \"0ae0f8e3-9e87-45b8-8313-a0b65cf33106\") "
Oct 11 10:53:42.715369 master-1 kubenswrapper[4771]: I1011 10:53:42.715155 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv" (OuterVolumeSpecName: "kube-api-access-7b4pv") pod "0ae0f8e3-9e87-45b8-8313-a0b65cf33106" (UID: "0ae0f8e3-9e87-45b8-8313-a0b65cf33106"). InnerVolumeSpecName "kube-api-access-7b4pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:42.759235 master-1 kubenswrapper[4771]: I1011 10:53:42.758101 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config" (OuterVolumeSpecName: "config") pod "0ae0f8e3-9e87-45b8-8313-a0b65cf33106" (UID: "0ae0f8e3-9e87-45b8-8313-a0b65cf33106"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:42.767659 master-1 kubenswrapper[4771]: I1011 10:53:42.767590 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ae0f8e3-9e87-45b8-8313-a0b65cf33106" (UID: "0ae0f8e3-9e87-45b8-8313-a0b65cf33106"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:42.779767 master-1 kubenswrapper[4771]: I1011 10:53:42.779446 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dnsz5" Oct 11 10:53:42.814452 master-1 kubenswrapper[4771]: I1011 10:53:42.812881 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") pod \"57ac9130-d850-4420-a75e-53ec744b16eb\" (UID: \"57ac9130-d850-4420-a75e-53ec744b16eb\") " Oct 11 10:53:42.814452 master-1 kubenswrapper[4771]: I1011 10:53:42.813438 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b4pv\" (UniqueName: \"kubernetes.io/projected/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-kube-api-access-7b4pv\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:42.814452 master-1 kubenswrapper[4771]: I1011 10:53:42.813459 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:42.814452 master-1 kubenswrapper[4771]: I1011 10:53:42.813473 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ae0f8e3-9e87-45b8-8313-a0b65cf33106-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:42.822206 master-1 kubenswrapper[4771]: I1011 10:53:42.822124 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv" (OuterVolumeSpecName: "kube-api-access-2dnbv") pod "57ac9130-d850-4420-a75e-53ec744b16eb" (UID: "57ac9130-d850-4420-a75e-53ec744b16eb"). InnerVolumeSpecName "kube-api-access-2dnbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:42.914775 master-1 kubenswrapper[4771]: I1011 10:53:42.914710 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dnbv\" (UniqueName: \"kubernetes.io/projected/57ac9130-d850-4420-a75e-53ec744b16eb-kube-api-access-2dnbv\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:42.961768 master-0 kubenswrapper[4790]: I1011 10:53:42.961671 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"] Oct 11 10:53:42.971901 master-0 kubenswrapper[4790]: W1011 10:53:42.971433 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a4dc537_c4a3_4538_887f_62fe3919d5f0.slice/crio-ff20f3c33adc8038a2f30426436b1b65b746807b9fb9fe4c5e10d86eebcbb3ee WatchSource:0}: Error finding container ff20f3c33adc8038a2f30426436b1b65b746807b9fb9fe4c5e10d86eebcbb3ee: Status 404 returned error can't find the container with id ff20f3c33adc8038a2f30426436b1b65b746807b9fb9fe4c5e10d86eebcbb3ee Oct 11 10:53:43.365742 master-2 kubenswrapper[4776]: I1011 10:53:43.365695 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:43.410438 master-2 kubenswrapper[4776]: I1011 10:53:43.409417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " Oct 11 10:53:43.414824 master-2 kubenswrapper[4776]: I1011 10:53:43.414281 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr" (OuterVolumeSpecName: "kube-api-access-g6tvr") pod "1a7cb456-8a0b-4e56-9dc5-93b488813f77" (UID: "1a7cb456-8a0b-4e56-9dc5-93b488813f77"). InnerVolumeSpecName "kube-api-access-g6tvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:43.430003 master-1 kubenswrapper[4771]: I1011 10:53:43.429938 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dnsz5" Oct 11 10:53:43.430003 master-1 kubenswrapper[4771]: I1011 10:53:43.429939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dnsz5" event={"ID":"57ac9130-d850-4420-a75e-53ec744b16eb","Type":"ContainerDied","Data":"2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13"} Oct 11 10:53:43.430003 master-1 kubenswrapper[4771]: I1011 10:53:43.430011 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1666340b01e4c1a48c00414db10d0677e3b3f3c913a32f6266f8c21f8e5f13" Oct 11 10:53:43.436613 master-1 kubenswrapper[4771]: I1011 10:53:43.436553 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" Oct 11 10:53:43.437495 master-1 kubenswrapper[4771]: I1011 10:53:43.437441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-85bsq" event={"ID":"0ae0f8e3-9e87-45b8-8313-a0b65cf33106","Type":"ContainerDied","Data":"1af26e0c3df30b750253930a520b32ea24880275e960e13e9910257c86f202ff"} Oct 11 10:53:43.437570 master-1 kubenswrapper[4771]: I1011 10:53:43.437528 4771 scope.go:117] "RemoveContainer" containerID="14ed7d218f9217fbceb4436ac3f26fb55858bf77044f44bd18a2d4ffe4eacee3" Oct 11 10:53:43.494039 master-1 kubenswrapper[4771]: I1011 10:53:43.493986 4771 scope.go:117] "RemoveContainer" containerID="fd43d772f12b2955515b0207673e261220c08cfd99b72815c0e4dd5a30cfab8c" Oct 11 10:53:43.511557 master-2 kubenswrapper[4776]: I1011 10:53:43.511229 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:43.546059 master-1 kubenswrapper[4771]: I1011 10:53:43.545997 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"] Oct 11 10:53:43.555806 master-1 kubenswrapper[4771]: I1011 10:53:43.555717 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-85bsq"] Oct 11 10:53:43.728923 master-2 kubenswrapper[4776]: I1011 10:53:43.728875 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:43.736768 master-2 kubenswrapper[4776]: I1011 10:53:43.736635 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerDied","Data":"a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae"} Oct 11 10:53:43.736768 master-2 kubenswrapper[4776]: I1011 10:53:43.736722 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae" Oct 11 10:53:43.736991 master-2 kubenswrapper[4776]: I1011 10:53:43.736645 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:43.737900 master-2 kubenswrapper[4776]: I1011 10:53:43.737825 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerDied","Data":"c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd"} Oct 11 10:53:43.737967 master-2 kubenswrapper[4776]: I1011 10:53:43.737899 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:43.738015 master-2 kubenswrapper[4776]: I1011 10:53:43.737902 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd" Oct 11 10:53:43.815012 master-2 kubenswrapper[4776]: I1011 10:53:43.814831 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"829885e7-9e39-447e-a4f0-2ac128443d04\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " Oct 11 10:53:43.817847 master-2 kubenswrapper[4776]: I1011 10:53:43.817778 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd" (OuterVolumeSpecName: "kube-api-access-zpqcd") pod "829885e7-9e39-447e-a4f0-2ac128443d04" (UID: "829885e7-9e39-447e-a4f0-2ac128443d04"). InnerVolumeSpecName "kube-api-access-zpqcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:43.916529 master-2 kubenswrapper[4776]: I1011 10:53:43.916465 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:43.979013 master-0 kubenswrapper[4790]: I1011 10:53:43.978951 4790 generic.go:334] "Generic (PLEG): container finished" podID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerID="42923cd7993a370d966eb589a3c5dfe41bcbc3a27770fa8b1538dbc31e8e9a97" exitCode=0 Oct 11 10:53:43.979676 master-0 kubenswrapper[4790]: I1011 10:53:43.979030 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" event={"ID":"6a4dc537-c4a3-4538-887f-62fe3919d5f0","Type":"ContainerDied","Data":"42923cd7993a370d966eb589a3c5dfe41bcbc3a27770fa8b1538dbc31e8e9a97"} Oct 11 10:53:43.979676 master-0 kubenswrapper[4790]: I1011 10:53:43.979113 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" event={"ID":"6a4dc537-c4a3-4538-887f-62fe3919d5f0","Type":"ContainerStarted","Data":"ff20f3c33adc8038a2f30426436b1b65b746807b9fb9fe4c5e10d86eebcbb3ee"} Oct 11 10:53:44.135630 master-0 kubenswrapper[4790]: I1011 10:53:44.135301 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 11 10:53:44.145620 master-0 kubenswrapper[4790]: I1011 10:53:44.145460 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 11 10:53:44.190846 master-0 kubenswrapper[4790]: I1011 10:53:44.190776 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 11 10:53:44.191685 master-0 kubenswrapper[4790]: I1011 10:53:44.191636 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 11 10:53:44.192134 master-0 kubenswrapper[4790]: I1011 10:53:44.192100 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 11 10:53:44.192305 master-0 kubenswrapper[4790]: I1011 10:53:44.192279 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 11 10:53:44.226314 master-0 kubenswrapper[4790]: I1011 10:53:44.226265 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-1" Oct 11 10:53:44.237674 master-0 kubenswrapper[4790]: I1011 10:53:44.237404 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrlnr\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-kube-api-access-lrlnr\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.237674 master-0 kubenswrapper[4790]: I1011 10:53:44.237538 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-lock\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.237674 master-0 kubenswrapper[4790]: I1011 10:53:44.237676 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod 
\"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.263830 master-0 kubenswrapper[4790]: I1011 10:53:44.263315 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-cache\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.264093 master-0 kubenswrapper[4790]: I1011 10:53:44.263676 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fec5bf36-c7f7-4a22-93a7-214a8670a9dd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6b422b68-2a82-4e2e-ab23-eb4dc08f75f3\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.312791 master-0 kubenswrapper[4790]: I1011 10:53:44.312721 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-1" Oct 11 10:53:44.369204 master-0 kubenswrapper[4790]: I1011 10:53:44.368792 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369204 master-0 kubenswrapper[4790]: I1011 10:53:44.368966 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-cache\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369204 master-0 kubenswrapper[4790]: I1011 10:53:44.368995 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fec5bf36-c7f7-4a22-93a7-214a8670a9dd\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^6b422b68-2a82-4e2e-ab23-eb4dc08f75f3\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369204 master-0 kubenswrapper[4790]: I1011 10:53:44.369023 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrlnr\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-kube-api-access-lrlnr\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369204 master-0 kubenswrapper[4790]: I1011 10:53:44.369038 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-lock\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369591 master-0 kubenswrapper[4790]: I1011 10:53:44.369562 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-lock\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.369917 master-0 kubenswrapper[4790]: E1011 10:53:44.369880 4790 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 11 10:53:44.369971 master-0 kubenswrapper[4790]: E1011 10:53:44.369918 4790 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 11 10:53:44.370008 master-0 kubenswrapper[4790]: E1011 10:53:44.369974 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift podName:c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:53:44.869948229 +0000 UTC m=+901.424408521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift") pod "swift-storage-0" (UID: "c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27") : configmap "swift-ring-files" not found Oct 11 10:53:44.377242 master-0 kubenswrapper[4790]: I1011 10:53:44.377190 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-cache\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.382346 master-0 kubenswrapper[4790]: I1011 10:53:44.379039 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:53:44.382346 master-0 kubenswrapper[4790]: I1011 10:53:44.379109 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fec5bf36-c7f7-4a22-93a7-214a8670a9dd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6b422b68-2a82-4e2e-ab23-eb4dc08f75f3\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/00bcb7e430e97a6ecb5450e981c0e90813b177e374e1633380e36c3e3673697d/globalmount\"" pod="openstack/swift-storage-0" Oct 11 10:53:44.423402 master-0 kubenswrapper[4790]: I1011 10:53:44.423330 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrlnr\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-kube-api-access-lrlnr\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.454432 master-1 kubenswrapper[4771]: I1011 10:53:44.454369 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" path="/var/lib/kubelet/pods/0ae0f8e3-9e87-45b8-8313-a0b65cf33106/volumes" Oct 11 10:53:44.458423 master-1 kubenswrapper[4771]: I1011 10:53:44.458336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerStarted","Data":"71476de09aca12c350c9bd4fa53bcc64aae8fab0a0998dc69e0421858e6b79c4"} Oct 11 10:53:44.748867 master-2 kubenswrapper[4776]: I1011 10:53:44.748799 4776 generic.go:334] "Generic (PLEG): container finished" podID="914ac6d0-5a85-4b2d-b4d4-202def09b0d8" containerID="e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3" exitCode=0 Oct 11 10:53:44.749610 master-2 kubenswrapper[4776]: I1011 10:53:44.748917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerDied","Data":"e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3"} Oct 11 10:53:44.751025 master-2 kubenswrapper[4776]: I1011 10:53:44.750780 4776 generic.go:334] "Generic (PLEG): container finished" podID="5a8ba065-7ef6-4bab-b20a-3bb274c93fa0" containerID="95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860" exitCode=0 Oct 11 10:53:44.751025 master-2 kubenswrapper[4776]: I1011 10:53:44.750828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerDied","Data":"95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860"} Oct 11 10:53:44.817124 master-2 kubenswrapper[4776]: I1011 10:53:44.817006 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"] Oct 11 10:53:44.817477 master-2 kubenswrapper[4776]: E1011 10:53:44.817447 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" 
containerName="mariadb-database-create" Oct 11 10:53:44.817477 master-2 kubenswrapper[4776]: I1011 10:53:44.817471 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerName="mariadb-database-create" Oct 11 10:53:44.817551 master-2 kubenswrapper[4776]: E1011 10:53:44.817499 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.817551 master-2 kubenswrapper[4776]: I1011 10:53:44.817508 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.828994 master-2 kubenswrapper[4776]: I1011 10:53:44.828953 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.829053 master-2 kubenswrapper[4776]: I1011 10:53:44.828995 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerName="mariadb-database-create" Oct 11 10:53:44.831540 master-2 kubenswrapper[4776]: I1011 10:53:44.831511 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.839642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840035 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840097 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840925 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.844772 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"] Oct 11 10:53:44.878275 master-0 kubenswrapper[4790]: I1011 10:53:44.878198 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0" Oct 11 10:53:44.878554 master-0 kubenswrapper[4790]: E1011 10:53:44.878413 4790 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 11 10:53:44.878554 master-0 kubenswrapper[4790]: E1011 10:53:44.878440 4790 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 11 10:53:44.878554 master-0 kubenswrapper[4790]: E1011 10:53:44.878513 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift podName:c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27 nodeName:}" 
failed. No retries permitted until 2025-10-11 10:53:45.878487804 +0000 UTC m=+902.432948096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift") pod "swift-storage-0" (UID: "c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27") : configmap "swift-ring-files" not found Oct 11 10:53:44.987848 master-0 kubenswrapper[4790]: I1011 10:53:44.987762 4790 generic.go:334] "Generic (PLEG): container finished" podID="be929908-6474-451d-8b87-e4effd7c6de4" containerID="ec0350a7355f6c5da814acb3e4b50a985a39478e4987d701f3bdf168b3a6530a" exitCode=0 Oct 11 10:53:44.988407 master-0 kubenswrapper[4790]: I1011 10:53:44.987888 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-1" event={"ID":"be929908-6474-451d-8b87-e4effd7c6de4","Type":"ContainerDied","Data":"ec0350a7355f6c5da814acb3e4b50a985a39478e4987d701f3bdf168b3a6530a"} Oct 11 10:53:44.990074 master-0 kubenswrapper[4790]: I1011 10:53:44.990005 4790 generic.go:334] "Generic (PLEG): container finished" podID="c13cb0d1-c50f-44fa-824a-46ece423a7cc" containerID="c4be72b5de1183ec2ba8473fed3fe3c9aa390d6a7ab90458f3ecfc26bf72839f" exitCode=0 Oct 11 10:53:44.990137 master-0 kubenswrapper[4790]: I1011 10:53:44.990078 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c13cb0d1-c50f-44fa-824a-46ece423a7cc","Type":"ContainerDied","Data":"c4be72b5de1183ec2ba8473fed3fe3c9aa390d6a7ab90458f3ecfc26bf72839f"} Oct 11 10:53:44.992944 master-0 kubenswrapper[4790]: I1011 10:53:44.992874 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" event={"ID":"6a4dc537-c4a3-4538-887f-62fe3919d5f0","Type":"ContainerStarted","Data":"381e97f0118ecd0a55897d40739e39a7fe13db1d358b9b0a4b193c8f784e5a52"} Oct 11 10:53:44.993075 master-0 kubenswrapper[4790]: I1011 10:53:44.993007 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" Oct 11 10:53:45.038351 master-2 kubenswrapper[4776]: I1011 10:53:45.038279 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038428 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038495 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038532 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038575 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038824 master-2 kubenswrapper[4776]: I1011 10:53:45.038782 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038867 master-2 kubenswrapper[4776]: I1011 10:53:45.038837 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.099804 master-0 kubenswrapper[4790]: I1011 10:53:45.099665 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" podStartSLOduration=3.099640413 podStartE2EDuration="3.099640413s" podCreationTimestamp="2025-10-11 10:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:45.098379168 +0000 UTC m=+901.652839460" watchObservedRunningTime="2025-10-11 10:53:45.099640413 +0000 UTC m=+901.654100705" Oct 11 10:53:45.141423 master-2 kubenswrapper[4776]: I1011 10:53:45.141280 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " 
pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141423 master-2 kubenswrapper[4776]: I1011 10:53:45.141351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141464 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141499 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141555 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141616 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.142157 master-2 kubenswrapper[4776]: I1011 10:53:45.142117 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.142431 master-2 kubenswrapper[4776]: I1011 10:53:45.142405 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.142935 master-2 kubenswrapper[4776]: I1011 10:53:45.142903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.144664 master-2 kubenswrapper[4776]: I1011 10:53:45.144621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.146023 master-2 kubenswrapper[4776]: I1011 10:53:45.145982 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.146240 master-2 kubenswrapper[4776]: I1011 10:53:45.146206 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.160468 master-0 kubenswrapper[4790]: I1011 10:53:45.155168 4790 trace.go:236] Trace[1484285432]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-1" (11-Oct-2025 10:53:44.136) (total time: 1018ms):
Oct 11 10:53:45.160468 master-0 kubenswrapper[4790]: Trace[1484285432]: [1.018910799s] [1.018910799s] END
Oct 11 10:53:45.165151 master-2 kubenswrapper[4776]: I1011 10:53:45.165072 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.247008 master-2 kubenswrapper[4776]: I1011 10:53:45.246933 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:45.732176 master-2 kubenswrapper[4776]: W1011 10:53:45.732130 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c36342_76bf_457d_804c_cc6420176307.slice/crio-de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13 WatchSource:0}: Error finding container de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13: Status 404 returned error can't find the container with id de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13
Oct 11 10:53:45.745644 master-2 kubenswrapper[4776]: I1011 10:53:45.745611 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"]
Oct 11 10:53:45.765733 master-2 kubenswrapper[4776]: I1011 10:53:45.765612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerStarted","Data":"de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13"}
Oct 11 10:53:45.769199 master-2 kubenswrapper[4776]: I1011 10:53:45.769124 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"d7c3aa2bccdc7f11b2391184419d5b484fdaf3ce4012a302733c1e2ef52543ca"}
Oct 11 10:53:45.769882 master-2 kubenswrapper[4776]: I1011 10:53:45.769844 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Oct 11 10:53:45.772943 master-2 kubenswrapper[4776]: I1011 10:53:45.772900 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"d758fbf1cca948c7a73d92ffb31a3f8f4613783c271c227f599d4158dac754e6"}
Oct 11 10:53:45.773272 master-2 kubenswrapper[4776]: I1011 10:53:45.773114 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-2"
Oct 11 10:53:45.782912 master-0 kubenswrapper[4790]: I1011 10:53:45.782810 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-2" podUID="5059e0b0-120f-4498-8076-e3e9239b5688" containerName="galera" probeResult="failure" output=<
Oct 11 10:53:45.782912 master-0 kubenswrapper[4790]: wsrep_local_state_comment (Donor/Desynced) differs from Synced
Oct 11 10:53:45.782912 master-0 kubenswrapper[4790]: >
Oct 11 10:53:45.852951 master-2 kubenswrapper[4776]: I1011 10:53:45.852717 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=49.710245713 podStartE2EDuration="55.852696145s" podCreationTimestamp="2025-10-11 10:52:50 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.348224219 +0000 UTC m=+1618.132650928" lastFinishedPulling="2025-10-11 10:53:09.490674641 +0000 UTC m=+1624.275101360" observedRunningTime="2025-10-11 10:53:45.84476236 +0000 UTC m=+1660.629189079" watchObservedRunningTime="2025-10-11 10:53:45.852696145 +0000 UTC m=+1660.637122854"
Oct 11 10:53:45.881172 master-2 kubenswrapper[4776]: I1011 10:53:45.880762 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-2" podStartSLOduration=54.880744065 podStartE2EDuration="54.880744065s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:45.875861412 +0000 UTC m=+1660.660288111" watchObservedRunningTime="2025-10-11 10:53:45.880744065 +0000 UTC m=+1660.665170774"
Oct 11 10:53:45.897408 master-0 kubenswrapper[4790]: I1011 10:53:45.897337 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:53:45.897672 master-0 kubenswrapper[4790]: E1011 10:53:45.897576 4790 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 11 10:53:45.897672 master-0 kubenswrapper[4790]: E1011 10:53:45.897598 4790 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 11 10:53:45.897672 master-0 kubenswrapper[4790]: E1011 10:53:45.897657 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift podName:c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27 nodeName:}" failed. No retries permitted until 2025-10-11 10:53:47.89763736 +0000 UTC m=+904.452097652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift") pod "swift-storage-0" (UID: "c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27") : configmap "swift-ring-files" not found
Oct 11 10:53:46.016386 master-0 kubenswrapper[4790]: I1011 10:53:46.016319 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-1" event={"ID":"be929908-6474-451d-8b87-e4effd7c6de4","Type":"ContainerStarted","Data":"6eeb4898825ff63a64166d6c91cc65fe503c754c39bb17b8a91974c755363ef9"}
Oct 11 10:53:46.017482 master-0 kubenswrapper[4790]: I1011 10:53:46.017404 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-1"
Oct 11 10:53:46.021280 master-0 kubenswrapper[4790]: I1011 10:53:46.021239 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c13cb0d1-c50f-44fa-824a-46ece423a7cc","Type":"ContainerStarted","Data":"8c4e3c9ea6d44bef7da8dfa27820fa539dc9c608507aa54b405dc8481ca9a2de"}
Oct 11 10:53:46.021520 master-0 kubenswrapper[4790]: I1011 10:53:46.021481 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Oct 11 10:53:46.058124 master-0 kubenswrapper[4790]: I1011 10:53:46.057980 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-1" podStartSLOduration=55.057919572 podStartE2EDuration="55.057919572s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:46.051134583 +0000 UTC m=+902.605594885" watchObservedRunningTime="2025-10-11 10:53:46.057919572 +0000 UTC m=+902.612379874"
Oct 11 10:53:46.096876 master-0 kubenswrapper[4790]: I1011 10:53:46.096775 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=48.004478761 podStartE2EDuration="56.096745628s" podCreationTimestamp="2025-10-11 10:52:50 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.701315784 +0000 UTC m=+859.255776076" lastFinishedPulling="2025-10-11 10:53:10.793582661 +0000 UTC m=+867.348042943" observedRunningTime="2025-10-11 10:53:46.091697328 +0000 UTC m=+902.646157640" watchObservedRunningTime="2025-10-11 10:53:46.096745628 +0000 UTC m=+902.651206160"
Oct 11 10:53:46.496375 master-0 kubenswrapper[4790]: I1011 10:53:46.495561 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-52t2l" podUID="8c164a4b-a2d5-4570-aed3-86dbb1f3d47c" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:46.496375 master-0 kubenswrapper[4790]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:46.496375 master-0 kubenswrapper[4790]: >
Oct 11 10:53:46.518724 master-1 kubenswrapper[4771]: I1011 10:53:46.518568 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mtzk7" podUID="4ab25521-7fba-40c9-b3db-377b1d0ec7a1" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:46.518724 master-1 kubenswrapper[4771]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:46.518724 master-1 kubenswrapper[4771]: >
Oct 11 10:53:46.523664 master-0 kubenswrapper[4790]: I1011 10:53:46.523536 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fec5bf36-c7f7-4a22-93a7-214a8670a9dd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6b422b68-2a82-4e2e-ab23-eb4dc08f75f3\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: I1011 10:53:46.533012 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=<
Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: >
Oct 11 10:53:46.556738 master-1 kubenswrapper[4771]: I1011 10:53:46.556679 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mvzxp"
Oct 11 10:53:46.580693 master-2 kubenswrapper[4776]: I1011 10:53:46.580621 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:46.599640 master-0 kubenswrapper[4790]: I1011 10:53:46.599181 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:46.610554 master-2 kubenswrapper[4776]: I1011 10:53:46.610446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4m6km"
Oct 11 10:53:46.623270 master-0 kubenswrapper[4790]: I1011 10:53:46.623200 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dw8wx"
Oct 11 10:53:46.814052 master-1 kubenswrapper[4771]: I1011 10:53:46.813969 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Oct 11 10:53:46.923935 master-1 kubenswrapper[4771]: I1011 10:53:46.923822 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mtzk7-config-gp4zd"]
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: E1011 10:53:46.924328 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ac9130-d850-4420-a75e-53ec744b16eb" containerName="mariadb-database-create"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: I1011 10:53:46.924445 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ac9130-d850-4420-a75e-53ec744b16eb" containerName="mariadb-database-create"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: E1011 10:53:46.924465 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="dnsmasq-dns"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: I1011 10:53:46.924475 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="dnsmasq-dns"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: E1011 10:53:46.924489 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="init"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: I1011 10:53:46.924500 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="init"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: I1011 10:53:46.924665 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ac9130-d850-4420-a75e-53ec744b16eb" containerName="mariadb-database-create"
Oct 11 10:53:46.925197 master-1 kubenswrapper[4771]: I1011 10:53:46.924677 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae0f8e3-9e87-45b8-8313-a0b65cf33106" containerName="dnsmasq-dns"
Oct 11 10:53:46.925492 master-1 kubenswrapper[4771]: I1011 10:53:46.925310 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:46.929003 master-1 kubenswrapper[4771]: I1011 10:53:46.928548 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 11 10:53:46.943705 master-1 kubenswrapper[4771]: I1011 10:53:46.943665 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-gp4zd"]
Oct 11 10:53:47.106347 master-1 kubenswrapper[4771]: I1011 10:53:47.106213 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.106347 master-1 kubenswrapper[4771]: I1011 10:53:47.106313 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.106588 master-1 kubenswrapper[4771]: I1011 10:53:47.106346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.106588 master-1 kubenswrapper[4771]: I1011 10:53:47.106410 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.106588 master-1 kubenswrapper[4771]: I1011 10:53:47.106437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.106588 master-1 kubenswrapper[4771]: I1011 10:53:47.106497 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhn6l\" (UniqueName: \"kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.209332 master-1 kubenswrapper[4771]: I1011 10:53:47.209227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.209332 master-1 kubenswrapper[4771]: I1011 10:53:47.209322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.209677 master-1 kubenswrapper[4771]: I1011 10:53:47.209441 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhn6l\" (UniqueName: \"kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.209677 master-1 kubenswrapper[4771]: I1011 10:53:47.209534 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.209677 master-1 kubenswrapper[4771]: I1011 10:53:47.209588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.210833 master-1 kubenswrapper[4771]: I1011 10:53:47.209873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.210833 master-1 kubenswrapper[4771]: I1011 10:53:47.209922 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.210833 master-1 kubenswrapper[4771]: I1011 10:53:47.210215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.210833 master-1 kubenswrapper[4771]: I1011 10:53:47.210287 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.212399 master-1 kubenswrapper[4771]: I1011 10:53:47.211440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.214190 master-1 kubenswrapper[4771]: I1011 10:53:47.214114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.235243 master-1 kubenswrapper[4771]: I1011 10:53:47.235084 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhn6l\" (UniqueName: \"kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l\") pod \"ovn-controller-mtzk7-config-gp4zd\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.321015 master-1 kubenswrapper[4771]: I1011 10:53:47.320938 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-gp4zd"
Oct 11 10:53:47.904246 master-1 kubenswrapper[4771]: I1011 10:53:47.904135 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Oct 11 10:53:47.935109 master-0 kubenswrapper[4790]: I1011 10:53:47.935020 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:53:47.938426 master-1 kubenswrapper[4771]: I1011 10:53:47.938276 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-1"
Oct 11 10:53:47.939960 master-0 kubenswrapper[4790]: E1011 10:53:47.935326 4790 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 11 10:53:47.939960 master-0 kubenswrapper[4790]: E1011 10:53:47.935379 4790 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 11 10:53:47.939960 master-0 kubenswrapper[4790]: E1011 10:53:47.935458 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift podName:c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27 nodeName:}" failed. No retries permitted until 2025-10-11 10:53:51.935431017 +0000 UTC m=+908.489891299 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift") pod "swift-storage-0" (UID: "c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27") : configmap "swift-ring-files" not found
Oct 11 10:53:47.972788 master-1 kubenswrapper[4771]: I1011 10:53:47.972726 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Oct 11 10:53:48.010538 master-1 kubenswrapper[4771]: I1011 10:53:48.010315 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-1"
Oct 11 10:53:48.146851 master-1 kubenswrapper[4771]: I1011 10:53:48.146781 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-gp4zd"]
Oct 11 10:53:48.154509 master-1 kubenswrapper[4771]: W1011 10:53:48.154460 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12bbc67_6a5d_4c1c_a685_c26c2b70a0a1.slice/crio-d2ed419b04ee231b2641d62eb1fa5b9efe937fb234dd74ec625c225a1769b347 WatchSource:0}: Error finding container d2ed419b04ee231b2641d62eb1fa5b9efe937fb234dd74ec625c225a1769b347: Status 404 returned error can't find the container with id d2ed419b04ee231b2641d62eb1fa5b9efe937fb234dd74ec625c225a1769b347
Oct 11 10:53:48.509106 master-1 kubenswrapper[4771]: I1011 10:53:48.509020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-gp4zd" event={"ID":"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1","Type":"ContainerStarted","Data":"d4634f70346f96ae4f97fe711847f0e072862de8b631ac6a0aaa341026f8675e"}
Oct 11 10:53:48.509509 master-1 kubenswrapper[4771]: I1011 10:53:48.509491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-gp4zd" event={"ID":"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1","Type":"ContainerStarted","Data":"d2ed419b04ee231b2641d62eb1fa5b9efe937fb234dd74ec625c225a1769b347"}
Oct 11 10:53:48.513201 master-1 kubenswrapper[4771]: I1011 10:53:48.513100 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerStarted","Data":"92f430c78bc4b24a68001349b3a9ac48c77542314f208344451dc9fc116683d7"}
Oct 11 10:53:48.555949 master-1 kubenswrapper[4771]: I1011 10:53:48.555867 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mtzk7-config-gp4zd" podStartSLOduration=2.555845437 podStartE2EDuration="2.555845437s" podCreationTimestamp="2025-10-11 10:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:48.552683236 +0000 UTC m=+1660.526909757" watchObservedRunningTime="2025-10-11 10:53:48.555845437 +0000 UTC m=+1660.530071878"
Oct 11 10:53:48.589557 master-1 kubenswrapper[4771]: I1011 10:53:48.589473 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.308501306 podStartE2EDuration="51.589453208s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:18.448247126 +0000 UTC m=+1630.422473567" lastFinishedPulling="2025-10-11 10:53:47.729198998 +0000 UTC m=+1659.703425469" observedRunningTime="2025-10-11 10:53:48.588350117 +0000 UTC m=+1660.562576648" watchObservedRunningTime="2025-10-11 10:53:48.589453208 +0000 UTC m=+1660.563679649"
Oct 11 10:53:48.968460 master-0 kubenswrapper[4790]: I1011 10:53:48.968382 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-113b-account-create-twjxb"]
Oct 11 10:53:48.970157 master-0 kubenswrapper[4790]: I1011 10:53:48.970119 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:48.973585 master-0 kubenswrapper[4790]: I1011 10:53:48.973542 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Oct 11 10:53:48.979584 master-0 kubenswrapper[4790]: I1011 10:53:48.979521 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-113b-account-create-twjxb"]
Oct 11 10:53:49.050784 master-0 kubenswrapper[4790]: I1011 10:53:49.050687 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcmd\" (UniqueName: \"kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd\") pod \"glance-113b-account-create-twjxb\" (UID: \"70cbbe93-7c50-40cb-91f4-f75c8875580d\") " pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:49.153683 master-0 kubenswrapper[4790]: I1011 10:53:49.153584 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcmd\" (UniqueName: \"kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd\") pod \"glance-113b-account-create-twjxb\" (UID: \"70cbbe93-7c50-40cb-91f4-f75c8875580d\") " pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:49.177485 master-0 kubenswrapper[4790]: I1011 10:53:49.177415 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcmd\" (UniqueName: \"kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd\") pod \"glance-113b-account-create-twjxb\" (UID: \"70cbbe93-7c50-40cb-91f4-f75c8875580d\") " pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:49.322640 master-0 kubenswrapper[4790]: I1011 10:53:49.322561 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:49.522287 master-1 kubenswrapper[4771]: I1011 10:53:49.522194 4771 generic.go:334] "Generic (PLEG): container finished" podID="f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" containerID="d4634f70346f96ae4f97fe711847f0e072862de8b631ac6a0aaa341026f8675e" exitCode=0
Oct 11 10:53:49.524867 master-1 kubenswrapper[4771]: I1011 10:53:49.524819 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-gp4zd" event={"ID":"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1","Type":"ContainerDied","Data":"d4634f70346f96ae4f97fe711847f0e072862de8b631ac6a0aaa341026f8675e"}
Oct 11 10:53:49.762853 master-0 kubenswrapper[4790]: I1011 10:53:49.761370 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-113b-account-create-twjxb"]
Oct 11 10:53:49.768729 master-0 kubenswrapper[4790]: W1011 10:53:49.767428 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70cbbe93_7c50_40cb_91f4_f75c8875580d.slice/crio-3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e WatchSource:0}: Error finding container 3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e: Status 404 returned error can't find the container with id 3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e
Oct 11 10:53:49.816180 master-2 kubenswrapper[4776]: I1011 10:53:49.816118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerStarted","Data":"0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac"}
Oct 11 10:53:49.851534 master-2 kubenswrapper[4776]: I1011 10:53:49.851397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sc5rx" podStartSLOduration=2.473593882 podStartE2EDuration="5.851368787s" podCreationTimestamp="2025-10-11 10:53:44 +0000 UTC" firstStartedPulling="2025-10-11 10:53:45.735352767 +0000 UTC m=+1660.519779476" lastFinishedPulling="2025-10-11 10:53:49.113127682 +0000 UTC m=+1663.897554381" observedRunningTime="2025-10-11 10:53:49.844806929 +0000 UTC m=+1664.629233648" watchObservedRunningTime="2025-10-11 10:53:49.851368787 +0000 UTC m=+1664.635795496"
Oct 11 10:53:50.057181 master-0 kubenswrapper[4790]: I1011 10:53:50.057050 4790 generic.go:334] "Generic (PLEG): container finished" podID="70cbbe93-7c50-40cb-91f4-f75c8875580d" containerID="1cde782f190214155e020d24bbe2e2d5c9f2dc24b3fea8e9236ee944da092a1c" exitCode=0
Oct 11 10:53:50.057181 master-0 kubenswrapper[4790]: I1011 10:53:50.057133 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-113b-account-create-twjxb" event={"ID":"70cbbe93-7c50-40cb-91f4-f75c8875580d","Type":"ContainerDied","Data":"1cde782f190214155e020d24bbe2e2d5c9f2dc24b3fea8e9236ee944da092a1c"}
Oct 11 10:53:50.057852 master-0 kubenswrapper[4790]: I1011 10:53:50.057296 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-113b-account-create-twjxb" event={"ID":"70cbbe93-7c50-40cb-91f4-f75c8875580d","Type":"ContainerStarted","Data":"3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e"}
Oct 11 10:53:50.265638 master-0 kubenswrapper[4790]: I1011 10:53:50.265570 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-52t2l-config-8jkrq"]
Oct 11 10:53:50.266906 master-0 kubenswrapper[4790]: I1011 10:53:50.266876 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:50.270180 master-0 kubenswrapper[4790]: I1011 10:53:50.270046 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 11 10:53:50.283306 master-0 kubenswrapper[4790]: I1011 10:53:50.283236 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:50.283520 master-0 kubenswrapper[4790]: I1011 10:53:50.283327 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:50.283520 master-0 kubenswrapper[4790]: I1011 10:53:50.283403 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:50.283520 master-0 kubenswrapper[4790]: I1011 10:53:50.283487 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:50.283520 master-0
kubenswrapper[4790]: I1011 10:53:50.283489 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-52t2l-config-8jkrq"] Oct 11 10:53:50.283914 master-0 kubenswrapper[4790]: I1011 10:53:50.283849 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.283988 master-0 kubenswrapper[4790]: I1011 10:53:50.283953 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwqg\" (UniqueName: \"kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.363011 master-2 kubenswrapper[4776]: I1011 10:53:50.362952 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:50.386476 master-0 kubenswrapper[4790]: I1011 10:53:50.386362 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386476 master-0 kubenswrapper[4790]: I1011 10:53:50.386491 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " 
pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386833 master-0 kubenswrapper[4790]: I1011 10:53:50.386526 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386833 master-0 kubenswrapper[4790]: I1011 10:53:50.386569 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386833 master-0 kubenswrapper[4790]: I1011 10:53:50.386603 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386833 master-0 kubenswrapper[4790]: I1011 10:53:50.386638 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwqg\" (UniqueName: \"kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.386833 master-0 kubenswrapper[4790]: I1011 10:53:50.386652 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: 
\"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.387094 master-0 kubenswrapper[4790]: I1011 10:53:50.387014 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.387374 master-0 kubenswrapper[4790]: I1011 10:53:50.387335 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.387896 master-0 kubenswrapper[4790]: I1011 10:53:50.387864 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.391856 master-0 kubenswrapper[4790]: I1011 10:53:50.391811 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.414074 master-2 kubenswrapper[4776]: I1011 10:53:50.414029 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:50.421358 master-0 kubenswrapper[4790]: I1011 10:53:50.421309 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6nwqg\" (UniqueName: \"kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg\") pod \"ovn-controller-52t2l-config-8jkrq\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") " pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:50.638126 master-0 kubenswrapper[4790]: I1011 10:53:50.638038 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l-config-8jkrq" Oct 11 10:53:51.039494 master-1 kubenswrapper[4771]: I1011 10:53:51.039439 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-gp4zd" Oct 11 10:53:51.125668 master-0 kubenswrapper[4790]: W1011 10:53:51.125598 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc874e686_cbb2_4de3_8058_382a74b5742d.slice/crio-4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195 WatchSource:0}: Error finding container 4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195: Status 404 returned error can't find the container with id 4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195 Oct 11 10:53:51.126188 master-0 kubenswrapper[4790]: I1011 10:53:51.126087 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-52t2l-config-8jkrq"] Oct 11 10:53:51.199976 master-1 kubenswrapper[4771]: I1011 10:53:51.199746 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.199976 master-1 kubenswrapper[4771]: I1011 10:53:51.199817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nhn6l\" (UniqueName: \"kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.199976 master-1 kubenswrapper[4771]: I1011 10:53:51.199850 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.199976 master-1 kubenswrapper[4771]: I1011 10:53:51.199924 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run\") pod \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\" (UID: \"f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1\") " Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run" (OuterVolumeSpecName: "var-run") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200705 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:51.201004 master-1 kubenswrapper[4771]: I1011 10:53:51.200719 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:51.201487 master-1 kubenswrapper[4771]: I1011 10:53:51.201304 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts" (OuterVolumeSpecName: "scripts") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:51.204581 master-1 kubenswrapper[4771]: I1011 10:53:51.203774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l" (OuterVolumeSpecName: "kube-api-access-nhn6l") pod "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" (UID: "f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1"). InnerVolumeSpecName "kube-api-access-nhn6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:51.302334 master-1 kubenswrapper[4771]: I1011 10:53:51.302238 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run-ovn\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.302334 master-1 kubenswrapper[4771]: I1011 10:53:51.302298 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-log-ovn\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.302334 master-1 kubenswrapper[4771]: I1011 10:53:51.302325 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-var-run\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.302334 master-1 kubenswrapper[4771]: I1011 10:53:51.302342 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-additional-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.302334 master-1 kubenswrapper[4771]: I1011 10:53:51.302389 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhn6l\" (UniqueName: \"kubernetes.io/projected/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-kube-api-access-nhn6l\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.302864 master-1 
kubenswrapper[4771]: I1011 10:53:51.302405 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:51.348955 master-1 kubenswrapper[4771]: I1011 10:53:51.348854 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mtzk7-config-gp4zd"] Oct 11 10:53:51.357424 master-1 kubenswrapper[4771]: I1011 10:53:51.357328 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mtzk7-config-gp4zd"] Oct 11 10:53:51.368520 master-0 kubenswrapper[4790]: I1011 10:53:51.368472 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-113b-account-create-twjxb" Oct 11 10:53:51.437127 master-0 kubenswrapper[4790]: I1011 10:53:51.437057 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcmd\" (UniqueName: \"kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd\") pod \"70cbbe93-7c50-40cb-91f4-f75c8875580d\" (UID: \"70cbbe93-7c50-40cb-91f4-f75c8875580d\") " Oct 11 10:53:51.442813 master-0 kubenswrapper[4790]: I1011 10:53:51.441296 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd" (OuterVolumeSpecName: "kube-api-access-hxcmd") pod "70cbbe93-7c50-40cb-91f4-f75c8875580d" (UID: "70cbbe93-7c50-40cb-91f4-f75c8875580d"). InnerVolumeSpecName "kube-api-access-hxcmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:51.497990 master-0 kubenswrapper[4790]: I1011 10:53:51.496950 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-52t2l" Oct 11 10:53:51.515237 master-1 kubenswrapper[4771]: I1011 10:53:51.510627 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mtzk7" Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: I1011 10:53:51.529486 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=< Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: > Oct 11 10:53:51.539464 master-2 kubenswrapper[4776]: I1011 10:53:51.539290 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:51.540569 master-2 kubenswrapper[4776]: I1011 10:53:51.540517 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.541396 master-0 kubenswrapper[4790]: I1011 10:53:51.541338 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxcmd\" (UniqueName: \"kubernetes.io/projected/70cbbe93-7c50-40cb-91f4-f75c8875580d-kube-api-access-hxcmd\") on node \"master-0\" DevicePath \"\"" Oct 11 10:53:51.548141 master-2 kubenswrapper[4776]: I1011 10:53:51.546088 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:51.556548 master-1 kubenswrapper[4771]: I1011 10:53:51.556475 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ed419b04ee231b2641d62eb1fa5b9efe937fb234dd74ec625c225a1769b347" Oct 11 10:53:51.556871 master-1 kubenswrapper[4771]: I1011 10:53:51.556594 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-gp4zd" Oct 11 10:53:51.567669 master-2 kubenswrapper[4776]: I1011 10:53:51.567299 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:51.638883 master-1 kubenswrapper[4771]: I1011 10:53:51.638805 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mtzk7-config-9fxhw"] Oct 11 10:53:51.640003 master-1 kubenswrapper[4771]: E1011 10:53:51.639970 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" containerName="ovn-config" Oct 11 10:53:51.640409 master-1 kubenswrapper[4771]: I1011 10:53:51.640341 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" containerName="ovn-config" Oct 11 10:53:51.640881 master-1 kubenswrapper[4771]: I1011 10:53:51.640854 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" containerName="ovn-config" Oct 11 
10:53:51.642209 master-1 kubenswrapper[4771]: I1011 10:53:51.642176 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-9fxhw" Oct 11 10:53:51.646912 master-1 kubenswrapper[4771]: I1011 10:53:51.646876 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:51.660501 master-1 kubenswrapper[4771]: I1011 10:53:51.659556 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-9fxhw"] Oct 11 10:53:51.689660 master-2 kubenswrapper[4776]: I1011 10:53:51.689596 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689800 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689836 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689887 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791413 master-2 kubenswrapper[4776]: I1011 10:53:51.791302 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791413 master-2 kubenswrapper[4776]: I1011 10:53:51.791407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 
kubenswrapper[4776]: I1011 10:53:51.791449 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791475 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791620 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 
Oct 11 10:53:51.791953 master-2 kubenswrapper[4776]: I1011 10:53:51.791929 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.792548 master-2 kubenswrapper[4776]: I1011 10:53:51.792515 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.792597 master-2 kubenswrapper[4776]: I1011 10:53:51.792571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.793459 master-2 kubenswrapper[4776]: I1011 10:53:51.793414 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.817697 master-2 kubenswrapper[4776]: I1011 10:53:51.812464 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.839475 master-1 kubenswrapper[4771]: I1011 10:53:51.839326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqprx\" (UniqueName: \"kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.840320 master-1 kubenswrapper[4771]: I1011 10:53:51.840248 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.840535 master-1 kubenswrapper[4771]: I1011 10:53:51.840483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.840624 master-1 kubenswrapper[4771]: I1011 10:53:51.840572 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.840700 master-1 kubenswrapper[4771]: I1011 10:53:51.840643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.840913 master-1 kubenswrapper[4771]: I1011 10:53:51.840854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.892159 master-2 kubenswrapper[4776]: I1011 10:53:51.892088 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655"
Oct 11 10:53:51.942894 master-1 kubenswrapper[4771]: I1011 10:53:51.942782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqprx\" (UniqueName: \"kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.942894 master-1 kubenswrapper[4771]: I1011 10:53:51.942888 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.942991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943065 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943116 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943181 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943348 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.943457 master-1 kubenswrapper[4771]: I1011 10:53:51.943403 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.944826 master-1 kubenswrapper[4771]: I1011 10:53:51.944774 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.947752 master-1 kubenswrapper[4771]: I1011 10:53:51.947671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.948520 master-0 kubenswrapper[4790]: I1011 10:53:51.948356 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:53:51.948745 master-0 kubenswrapper[4790]: E1011 10:53:51.948600 4790 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 11 10:53:51.948745 master-0 kubenswrapper[4790]: E1011 10:53:51.948627 4790 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 11 10:53:51.948745 master-0 kubenswrapper[4790]: E1011 10:53:51.948725 4790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift podName:c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27 nodeName:}" failed. No retries permitted until 2025-10-11 10:53:59.948680924 +0000 UTC m=+916.503141216 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift") pod "swift-storage-0" (UID: "c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27") : configmap "swift-ring-files" not found
Oct 11 10:53:51.970455 master-1 kubenswrapper[4771]: I1011 10:53:51.969257 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqprx\" (UniqueName: \"kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx\") pod \"ovn-controller-mtzk7-config-9fxhw\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:51.971738 master-1 kubenswrapper[4771]: I1011 10:53:51.971692 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-9fxhw"
Oct 11 10:53:52.073931 master-0 kubenswrapper[4790]: I1011 10:53:52.073868 4790 generic.go:334] "Generic (PLEG): container finished" podID="c874e686-cbb2-4de3-8058-382a74b5742d" containerID="cc0e410018cdbb38cb0a44455ce0c9bcffaa24fb5b85e7a4f71ece632724bed8" exitCode=0
Oct 11 10:53:52.074212 master-0 kubenswrapper[4790]: I1011 10:53:52.073987 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-52t2l-config-8jkrq" event={"ID":"c874e686-cbb2-4de3-8058-382a74b5742d","Type":"ContainerDied","Data":"cc0e410018cdbb38cb0a44455ce0c9bcffaa24fb5b85e7a4f71ece632724bed8"}
Oct 11 10:53:52.074212 master-0 kubenswrapper[4790]: I1011 10:53:52.074095 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-52t2l-config-8jkrq" event={"ID":"c874e686-cbb2-4de3-8058-382a74b5742d","Type":"ContainerStarted","Data":"4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195"}
Oct 11 10:53:52.076228 master-0 kubenswrapper[4790]: I1011 10:53:52.076187 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-113b-account-create-twjxb" event={"ID":"70cbbe93-7c50-40cb-91f4-f75c8875580d","Type":"ContainerDied","Data":"3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e"}
Oct 11 10:53:52.076228 master-0 kubenswrapper[4790]: I1011 10:53:52.076216 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c49cfcc6699f3b2309e444c646f6f84fab947b9d15c58ff2909f3cf16e3437e"
Oct 11 10:53:52.076366 master-0 kubenswrapper[4790]: I1011 10:53:52.076327 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-113b-account-create-twjxb"
Oct 11 10:53:52.312629 master-2 kubenswrapper[4776]: I1011 10:53:52.298920 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"]
Oct 11 10:53:52.312629 master-2 kubenswrapper[4776]: W1011 10:53:52.301444 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36684b23_8b0f_409c_8b8b_c2402189f68e.slice/crio-52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46 WatchSource:0}: Error finding container 52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46: Status 404 returned error can't find the container with id 52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46
Oct 11 10:53:52.426417 master-1 kubenswrapper[4771]: I1011 10:53:52.426312 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-9fxhw"]
Oct 11 10:53:52.431311 master-1 kubenswrapper[4771]: W1011 10:53:52.431248 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc637296f_4521_4528_b3f3_e247deac1ad8.slice/crio-85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c WatchSource:0}: Error finding container 85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c: Status 404 returned error can't find the container with id 85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c
Oct 11 10:53:52.455685 master-1 kubenswrapper[4771]: I1011 10:53:52.455574 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1" path="/var/lib/kubelet/pods/f12bbc67-6a5d-4c1c-a685-c26c2b70a0a1/volumes"
Oct 11 10:53:52.474040 master-0 kubenswrapper[4790]: I1011 10:53:52.473918 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt"
Oct 11 10:53:52.570835 master-1 kubenswrapper[4771]: I1011 10:53:52.570705 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-9fxhw" event={"ID":"c637296f-4521-4528-b3f3-e247deac1ad8","Type":"ContainerStarted","Data":"85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c"}
Oct 11 10:53:52.600932 master-2 kubenswrapper[4776]: I1011 10:53:52.600807 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"]
Oct 11 10:53:52.601493 master-2 kubenswrapper[4776]: I1011 10:53:52.601112 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" containerID="cri-o://21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68" gracePeriod=10
Oct 11 10:53:52.622904 master-1 kubenswrapper[4771]: I1011 10:53:52.622845 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"]
Oct 11 10:53:52.624886 master-1 kubenswrapper[4771]: I1011 10:53:52.624830 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.652315 master-1 kubenswrapper[4771]: I1011 10:53:52.646320 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"]
Oct 11 10:53:52.663217 master-1 kubenswrapper[4771]: I1011 10:53:52.663152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.663473 master-1 kubenswrapper[4771]: I1011 10:53:52.663240 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.663541 master-1 kubenswrapper[4771]: I1011 10:53:52.663447 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.663755 master-1 kubenswrapper[4771]: I1011 10:53:52.663706 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.663811 master-1 kubenswrapper[4771]: I1011 10:53:52.663774 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m5q5\" (UniqueName: \"kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.765813 master-1 kubenswrapper[4771]: I1011 10:53:52.765738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.765813 master-1 kubenswrapper[4771]: I1011 10:53:52.765790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m5q5\" (UniqueName: \"kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.766021 master-1 kubenswrapper[4771]: I1011 10:53:52.765844 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.766021 master-1 kubenswrapper[4771]: I1011 10:53:52.765872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.766021 master-1 kubenswrapper[4771]: I1011 10:53:52.765897 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.767141 master-1 kubenswrapper[4771]: I1011 10:53:52.767087 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.768837 master-1 kubenswrapper[4771]: I1011 10:53:52.768776 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.771411 master-1 kubenswrapper[4771]: I1011 10:53:52.771331 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.780168 master-1 kubenswrapper[4771]: I1011 10:53:52.780093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.791792 master-1 kubenswrapper[4771]: I1011 10:53:52.791714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m5q5\" (UniqueName: \"kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5\") pod \"dnsmasq-dns-6c99f4877f-vjhdt\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:52.841698 master-2 kubenswrapper[4776]: I1011 10:53:52.840867 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerStarted","Data":"33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52"}
Oct 11 10:53:52.841698 master-2 kubenswrapper[4776]: I1011 10:53:52.840913 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerStarted","Data":"52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46"}
Oct 11 10:53:52.870102 master-2 kubenswrapper[4776]: I1011 10:53:52.868218 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerID="21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68" exitCode=0
Oct 11 10:53:52.870102 master-2 kubenswrapper[4776]: I1011 10:53:52.868320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68"}
Oct 11 10:53:52.878371 master-2 kubenswrapper[4776]: I1011 10:53:52.878222 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8qhqm-config-wt655" podStartSLOduration=1.878187667 podStartE2EDuration="1.878187667s" podCreationTimestamp="2025-10-11 10:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:52.871922557 +0000 UTC m=+1667.656349286" watchObservedRunningTime="2025-10-11 10:53:52.878187667 +0000 UTC m=+1667.662614366"
Oct 11 10:53:52.898382 master-1 kubenswrapper[4771]: I1011 10:53:52.898314 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:53:52.947449 master-1 kubenswrapper[4771]: I1011 10:53:52.947386 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:53:53.396994 master-1 kubenswrapper[4771]: I1011 10:53:53.396924 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"]
Oct 11 10:53:53.399851 master-1 kubenswrapper[4771]: W1011 10:53:53.399730 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30700706_219b_47c1_83cd_278584a3f182.slice/crio-cc8ace77e3138b8e3e45d04fc9090a4ecc543f54342ec2f309cdbc89855a76b5 WatchSource:0}: Error finding container cc8ace77e3138b8e3e45d04fc9090a4ecc543f54342ec2f309cdbc89855a76b5: Status 404 returned error can't find the container with id cc8ace77e3138b8e3e45d04fc9090a4ecc543f54342ec2f309cdbc89855a76b5
Oct 11 10:53:53.407602 master-2 kubenswrapper[4776]: I1011 10:53:53.407553 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v"
Oct 11 10:53:53.502700 master-0 kubenswrapper[4790]: I1011 10:53:53.502633 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:53.532004 master-2 kubenswrapper[4776]: I1011 10:53:53.531954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") "
Oct 11 10:53:53.532557 master-2 kubenswrapper[4776]: I1011 10:53:53.532529 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") "
Oct 11 10:53:53.532628 master-2 kubenswrapper[4776]: I1011 10:53:53.532617 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") "
Oct 11 10:53:53.547512 master-2 kubenswrapper[4776]: I1011 10:53:53.547452 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67" (OuterVolumeSpecName: "kube-api-access-vnm67") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "kube-api-access-vnm67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:53.564474 master-2 kubenswrapper[4776]: I1011 10:53:53.564436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:53.565974 master-2 kubenswrapper[4776]: I1011 10:53:53.565924 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config" (OuterVolumeSpecName: "config") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:53.580188 master-0 kubenswrapper[4790]: I1011 10:53:53.580090 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwqg\" (UniqueName: \"kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580506 master-0 kubenswrapper[4790]: I1011 10:53:53.580228 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580506 master-0 kubenswrapper[4790]: I1011 10:53:53.580272 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580506 master-0 kubenswrapper[4790]: I1011 10:53:53.580299 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580506 master-0 kubenswrapper[4790]: I1011 10:53:53.580450 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580506 master-0 kubenswrapper[4790]: I1011 10:53:53.580480 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn\") pod \"c874e686-cbb2-4de3-8058-382a74b5742d\" (UID: \"c874e686-cbb2-4de3-8058-382a74b5742d\") "
Oct 11 10:53:53.580920 master-0 kubenswrapper[4790]: I1011 10:53:53.580857 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run" (OuterVolumeSpecName: "var-run") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:53.580986 master-0 kubenswrapper[4790]: I1011 10:53:53.580959 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:53.581022 master-0 kubenswrapper[4790]: I1011 10:53:53.581007 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:53.581695 master-0 kubenswrapper[4790]: I1011 10:53:53.581662 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:53.581988 master-0 kubenswrapper[4790]: I1011 10:53:53.581947 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts" (OuterVolumeSpecName: "scripts") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:53.584841 master-0 kubenswrapper[4790]: I1011 10:53:53.584796 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg" (OuterVolumeSpecName: "kube-api-access-6nwqg") pod "c874e686-cbb2-4de3-8058-382a74b5742d" (UID: "c874e686-cbb2-4de3-8058-382a74b5742d"). InnerVolumeSpecName "kube-api-access-6nwqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:53.589159 master-1 kubenswrapper[4771]: I1011 10:53:53.589050 4771 generic.go:334] "Generic (PLEG): container finished" podID="c637296f-4521-4528-b3f3-e247deac1ad8" containerID="94fe8e005fb0a8b586a5c6a1e344905a51e3390259171a77f131bc97d101f438" exitCode=0
Oct 11 10:53:53.590079 master-1 kubenswrapper[4771]: I1011 10:53:53.589560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-9fxhw" event={"ID":"c637296f-4521-4528-b3f3-e247deac1ad8","Type":"ContainerDied","Data":"94fe8e005fb0a8b586a5c6a1e344905a51e3390259171a77f131bc97d101f438"}
Oct 11 10:53:53.592309 master-1 kubenswrapper[4771]: I1011 10:53:53.592259 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" event={"ID":"30700706-219b-47c1-83cd-278584a3f182","Type":"ContainerStarted","Data":"cc8ace77e3138b8e3e45d04fc9090a4ecc543f54342ec2f309cdbc89855a76b5"}
Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635282 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635333 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635344 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682149 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-scripts\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682190 4790 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682201 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nwqg\" (UniqueName: \"kubernetes.io/projected/c874e686-cbb2-4de3-8058-382a74b5742d-kube-api-access-6nwqg\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682211 4790 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c874e686-cbb2-4de3-8058-382a74b5742d-additional-scripts\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682223 4790 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-run\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.682304 master-0 kubenswrapper[4790]: I1011 10:53:53.682233 4790 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c874e686-cbb2-4de3-8058-382a74b5742d-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:53.877718 master-2 kubenswrapper[4776]: I1011 10:53:53.875732 4776 generic.go:334] "Generic (PLEG): container finished" podID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerID="33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52" exitCode=0
Oct 11 10:53:53.877718 master-2 kubenswrapper[4776]: I1011 10:53:53.875802 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerDied","Data":"33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52"}
Oct 11 10:53:53.878064 master-2 kubenswrapper[4776]: I1011 10:53:53.878022 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a"}
Oct 11 10:53:53.878064 master-2 kubenswrapper[4776]: I1011 10:53:53.878062 4776 scope.go:117] "RemoveContainer" containerID="21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68"
Oct 11 10:53:53.878217 master-2 kubenswrapper[4776]: I1011 10:53:53.878184 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v"
Oct 11 10:53:53.897857 master-2 kubenswrapper[4776]: I1011 10:53:53.897814 4776 scope.go:117] "RemoveContainer" containerID="c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb"
Oct 11 10:53:53.937073 master-2 kubenswrapper[4776]: I1011 10:53:53.936448 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"]
Oct 11 10:53:53.938844 master-2 kubenswrapper[4776]: E1011 10:53:53.938313 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f57d5f_9c0f_4081_8ed7_3ef2cad93bb5.slice/crio-f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a\": RecentStats: unable to find data in memory cache]"
Oct 11 10:53:53.942345 master-2 kubenswrapper[4776]: I1011 10:53:53.942191 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"]
Oct 11 10:53:54.069329 master-2 kubenswrapper[4776]: I1011 10:53:54.069263 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" path="/var/lib/kubelet/pods/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5/volumes"
Oct 11 10:53:54.100758 master-0 kubenswrapper[4790]: I1011 10:53:54.100505 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-52t2l-config-8jkrq" event={"ID":"c874e686-cbb2-4de3-8058-382a74b5742d","Type":"ContainerDied","Data":"4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195"}
Oct 11 10:53:54.100758 master-0 kubenswrapper[4790]: I1011 10:53:54.100622 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1096b4dae647b1b4423270ddfe174d7a5e98b7791d42acd49047577bd6f195"
Oct 11 10:53:54.100758 master-0 kubenswrapper[4790]: I1011 10:53:54.100657 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-52t2l-config-8jkrq"
Oct 11 10:53:54.127588 master-2 kubenswrapper[4776]: I1011 10:53:54.127521 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-848z9"]
Oct 11 10:53:54.128043 master-2 kubenswrapper[4776]: E1011 10:53:54.128014 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns"
Oct 11 10:53:54.128043 master-2 kubenswrapper[4776]: I1011 10:53:54.128041 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns"
Oct 11 10:53:54.128170 master-2 kubenswrapper[4776]: E1011 10:53:54.128084 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="init"
Oct 11 10:53:54.128170 master-2 kubenswrapper[4776]: I1011 10:53:54.128095 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="init"
Oct 11 10:53:54.128312 master-2 kubenswrapper[4776]: I1011 10:53:54.128293 4776 memory_manager.go:354] "RemoveStaleState removing state"
podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" Oct 11 10:53:54.129112 master-2 kubenswrapper[4776]: I1011 10:53:54.129082 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.131817 master-2 kubenswrapper[4776]: I1011 10:53:54.131781 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-config-data" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142519 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142591 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142615 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: 
\"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.144990 master-2 kubenswrapper[4776]: I1011 10:53:54.144852 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 10:53:54.243728 master-2 kubenswrapper[4776]: I1011 10:53:54.243638 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243758 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243827 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243851 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247324 master-2 kubenswrapper[4776]: I1011 10:53:54.247269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247385 master-2 kubenswrapper[4776]: I1011 10:53:54.247341 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247419 master-2 kubenswrapper[4776]: I1011 10:53:54.247344 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.264709 master-2 kubenswrapper[4776]: I1011 10:53:54.263963 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.444836 master-1 kubenswrapper[4771]: I1011 10:53:54.444724 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="831321b9-20ce-409b-8bdb-ec231aef5f35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.129.0.107:5671: connect: connection refused" Oct 11 10:53:54.448964 master-2 kubenswrapper[4776]: I1011 10:53:54.448883 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.605725 master-1 kubenswrapper[4771]: I1011 10:53:54.605636 4771 generic.go:334] "Generic (PLEG): container finished" podID="30700706-219b-47c1-83cd-278584a3f182" containerID="06ae06abec101801ffcb11de5d066d694be4874cbd2110b56c80026a91417fc8" exitCode=0 Oct 11 10:53:54.606656 master-1 kubenswrapper[4771]: I1011 10:53:54.605801 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" event={"ID":"30700706-219b-47c1-83cd-278584a3f182","Type":"ContainerDied","Data":"06ae06abec101801ffcb11de5d066d694be4874cbd2110b56c80026a91417fc8"} Oct 11 10:53:54.622736 master-0 kubenswrapper[4790]: I1011 10:53:54.621820 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-52t2l-config-8jkrq"] Oct 11 10:53:54.627211 master-0 kubenswrapper[4790]: I1011 10:53:54.627145 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-52t2l-config-8jkrq"] Oct 11 10:53:55.013372 master-2 kubenswrapper[4776]: I1011 10:53:55.013317 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 10:53:55.020612 master-2 kubenswrapper[4776]: W1011 10:53:55.020555 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25099d7a_e434_48d2_a175_088e5ad2caf2.slice/crio-0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88 WatchSource:0}: Error finding container 0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88: Status 404 returned error can't find the container with id 0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88 Oct 11 10:53:55.057533 master-1 kubenswrapper[4771]: I1011 10:53:55.057486 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-9fxhw" Oct 11 10:53:55.193497 master-0 kubenswrapper[4790]: I1011 10:53:55.193395 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0a7a-account-create-9c44k"] Oct 11 10:53:55.193904 master-0 kubenswrapper[4790]: E1011 10:53:55.193876 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c874e686-cbb2-4de3-8058-382a74b5742d" containerName="ovn-config" Oct 11 10:53:55.193962 master-0 kubenswrapper[4790]: I1011 10:53:55.193911 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="c874e686-cbb2-4de3-8058-382a74b5742d" containerName="ovn-config" Oct 11 10:53:55.193962 master-0 kubenswrapper[4790]: E1011 10:53:55.193940 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cbbe93-7c50-40cb-91f4-f75c8875580d" containerName="mariadb-account-create" Oct 11 10:53:55.193962 master-0 kubenswrapper[4790]: I1011 10:53:55.193950 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cbbe93-7c50-40cb-91f4-f75c8875580d" containerName="mariadb-account-create" Oct 11 10:53:55.194154 master-0 kubenswrapper[4790]: I1011 10:53:55.194133 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="70cbbe93-7c50-40cb-91f4-f75c8875580d" containerName="mariadb-account-create" Oct 11 10:53:55.194197 master-0 kubenswrapper[4790]: I1011 10:53:55.194161 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="c874e686-cbb2-4de3-8058-382a74b5742d" containerName="ovn-config" Oct 11 10:53:55.196148 master-0 kubenswrapper[4790]: I1011 10:53:55.196109 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0a7a-account-create-9c44k" Oct 11 10:53:55.200535 master-0 kubenswrapper[4790]: I1011 10:53:55.200278 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 11 10:53:55.213089 master-1 kubenswrapper[4771]: I1011 10:53:55.212983 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqprx\" (UniqueName: \"kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx\") pod \"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213411 master-1 kubenswrapper[4771]: I1011 10:53:55.213120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts\") pod \"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213655 master-1 kubenswrapper[4771]: I1011 10:53:55.213612 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn\") pod \"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213755 master-1 kubenswrapper[4771]: I1011 10:53:55.213667 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts\") pod \"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213755 master-1 kubenswrapper[4771]: I1011 10:53:55.213732 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn\") pod 
\"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213890 master-1 kubenswrapper[4771]: I1011 10:53:55.213766 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run\") pod \"c637296f-4521-4528-b3f3-e247deac1ad8\" (UID: \"c637296f-4521-4528-b3f3-e247deac1ad8\") " Oct 11 10:53:55.213890 master-1 kubenswrapper[4771]: I1011 10:53:55.213818 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.214018 master-1 kubenswrapper[4771]: I1011 10:53:55.213905 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.214018 master-1 kubenswrapper[4771]: I1011 10:53:55.213931 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run" (OuterVolumeSpecName: "var-run") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.215154 master-0 kubenswrapper[4790]: I1011 10:53:55.215033 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a7a-account-create-9c44k"] Oct 11 10:53:55.215215 master-1 kubenswrapper[4771]: I1011 10:53:55.215164 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-log-ovn\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.215323 master-1 kubenswrapper[4771]: I1011 10:53:55.215217 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run-ovn\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.215432 master-1 kubenswrapper[4771]: I1011 10:53:55.215349 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c637296f-4521-4528-b3f3-e247deac1ad8-var-run\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.215432 master-1 kubenswrapper[4771]: I1011 10:53:55.215145 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.215432 master-1 kubenswrapper[4771]: I1011 10:53:55.215272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts" (OuterVolumeSpecName: "scripts") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.218323 master-1 kubenswrapper[4771]: I1011 10:53:55.218253 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx" (OuterVolumeSpecName: "kube-api-access-zqprx") pod "c637296f-4521-4528-b3f3-e247deac1ad8" (UID: "c637296f-4521-4528-b3f3-e247deac1ad8"). InnerVolumeSpecName "kube-api-access-zqprx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:55.316667 master-0 kubenswrapper[4790]: I1011 10:53:55.316504 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmddg\" (UniqueName: \"kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg\") pod \"keystone-0a7a-account-create-9c44k\" (UID: \"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5\") " pod="openstack/keystone-0a7a-account-create-9c44k" Oct 11 10:53:55.317274 master-1 kubenswrapper[4771]: I1011 10:53:55.317161 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqprx\" (UniqueName: \"kubernetes.io/projected/c637296f-4521-4528-b3f3-e247deac1ad8-kube-api-access-zqprx\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.317274 master-1 kubenswrapper[4771]: I1011 10:53:55.317237 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.317274 master-1 kubenswrapper[4771]: I1011 10:53:55.317258 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c637296f-4521-4528-b3f3-e247deac1ad8-additional-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:53:55.418215 master-0 kubenswrapper[4790]: I1011 10:53:55.418103 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dmddg\" (UniqueName: \"kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg\") pod \"keystone-0a7a-account-create-9c44k\" (UID: \"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5\") " pod="openstack/keystone-0a7a-account-create-9c44k" Oct 11 10:53:55.446297 master-0 kubenswrapper[4790]: I1011 10:53:55.446124 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmddg\" (UniqueName: \"kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg\") pod \"keystone-0a7a-account-create-9c44k\" (UID: \"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5\") " pod="openstack/keystone-0a7a-account-create-9c44k" Oct 11 10:53:55.512385 master-0 kubenswrapper[4790]: I1011 10:53:55.512294 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-1" podUID="be929908-6474-451d-8b87-e4effd7c6de4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.130.0.67:5671: connect: connection refused" Oct 11 10:53:55.515778 master-0 kubenswrapper[4790]: I1011 10:53:55.515721 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a7a-account-create-9c44k" Oct 11 10:53:55.565784 master-2 kubenswrapper[4776]: I1011 10:53:55.565737 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:55.621516 master-1 kubenswrapper[4771]: I1011 10:53:55.620053 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" event={"ID":"30700706-219b-47c1-83cd-278584a3f182","Type":"ContainerStarted","Data":"2dcdb27cf0dbce506998b4c8cfe73f7847cd892689b076fcba313e976b8a5349"} Oct 11 10:53:55.621516 master-1 kubenswrapper[4771]: I1011 10:53:55.620328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" Oct 11 10:53:55.622195 master-1 kubenswrapper[4771]: I1011 10:53:55.622179 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-9fxhw" event={"ID":"c637296f-4521-4528-b3f3-e247deac1ad8","Type":"ContainerDied","Data":"85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c"} Oct 11 10:53:55.622250 master-1 kubenswrapper[4771]: I1011 10:53:55.622205 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ec1a1327a46e2b03fec9261ebaecc8a7d06803923507fa15cf1db0337a471c" Oct 11 10:53:55.622325 master-1 kubenswrapper[4771]: I1011 10:53:55.622290 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-9fxhw" Oct 11 10:53:55.643815 master-0 kubenswrapper[4790]: I1011 10:53:55.640607 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-af51-account-create-tz8f4"] Oct 11 10:53:55.643815 master-0 kubenswrapper[4790]: I1011 10:53:55.642542 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-af51-account-create-tz8f4"] Oct 11 10:53:55.643815 master-0 kubenswrapper[4790]: I1011 10:53:55.642783 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-af51-account-create-tz8f4" Oct 11 10:53:55.655746 master-0 kubenswrapper[4790]: I1011 10:53:55.655217 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 11 10:53:55.665393 master-1 kubenswrapper[4771]: I1011 10:53:55.665269 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" podStartSLOduration=3.665236964 podStartE2EDuration="3.665236964s" podCreationTimestamp="2025-10-11 10:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:55.660325332 +0000 UTC m=+1667.634551793" watchObservedRunningTime="2025-10-11 10:53:55.665236964 +0000 UTC m=+1667.639463405" Oct 11 10:53:55.677797 master-2 kubenswrapper[4776]: I1011 10:53:55.677716 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.677797 master-2 kubenswrapper[4776]: I1011 10:53:55.677793 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677816 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677942 4776 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677983 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677957 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run" (OuterVolumeSpecName: "var-run") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.678043 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.678063 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678399 master-2 kubenswrapper[4776]: I1011 10:53:55.678257 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.679032 master-2 kubenswrapper[4776]: I1011 10:53:55.678967 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.679196 master-2 kubenswrapper[4776]: I1011 10:53:55.679168 4776 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679242 master-2 kubenswrapper[4776]: I1011 10:53:55.679195 4776 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679274 master-2 kubenswrapper[4776]: I1011 10:53:55.679244 4776 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679274 master-2 kubenswrapper[4776]: I1011 10:53:55.679257 4776 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.680440 master-2 kubenswrapper[4776]: I1011 10:53:55.680337 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts" (OuterVolumeSpecName: "scripts") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.685364 master-2 kubenswrapper[4776]: I1011 10:53:55.685322 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz" (OuterVolumeSpecName: "kube-api-access-nlxnz") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). 
InnerVolumeSpecName "kube-api-access-nlxnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:55.724008 master-0 kubenswrapper[4790]: I1011 10:53:55.723855 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn48\" (UniqueName: \"kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48\") pod \"placement-af51-account-create-tz8f4\" (UID: \"4838cae2-31c3-4b4d-a914-e95b0b6308be\") " pod="openstack/placement-af51-account-create-tz8f4" Oct 11 10:53:55.780943 master-2 kubenswrapper[4776]: I1011 10:53:55.780887 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.780943 master-2 kubenswrapper[4776]: I1011 10:53:55.780924 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.826430 master-0 kubenswrapper[4790]: I1011 10:53:55.826326 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn48\" (UniqueName: \"kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48\") pod \"placement-af51-account-create-tz8f4\" (UID: \"4838cae2-31c3-4b4d-a914-e95b0b6308be\") " pod="openstack/placement-af51-account-create-tz8f4" Oct 11 10:53:55.846437 master-0 kubenswrapper[4790]: I1011 10:53:55.846366 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn48\" (UniqueName: \"kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48\") pod \"placement-af51-account-create-tz8f4\" (UID: \"4838cae2-31c3-4b4d-a914-e95b0b6308be\") " pod="openstack/placement-af51-account-create-tz8f4" Oct 11 10:53:55.895109 
master-2 kubenswrapper[4776]: I1011 10:53:55.895065 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerDied","Data":"52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46"} Oct 11 10:53:55.895109 master-2 kubenswrapper[4776]: I1011 10:53:55.895106 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:55.895346 master-2 kubenswrapper[4776]: I1011 10:53:55.895134 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46" Oct 11 10:53:55.896617 master-2 kubenswrapper[4776]: I1011 10:53:55.896573 4776 generic.go:334] "Generic (PLEG): container finished" podID="02c36342-76bf-457d-804c-cc6420176307" containerID="0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac" exitCode=0 Oct 11 10:53:55.896617 master-2 kubenswrapper[4776]: I1011 10:53:55.896605 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerDied","Data":"0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac"} Oct 11 10:53:55.898162 master-2 kubenswrapper[4776]: I1011 10:53:55.898111 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerStarted","Data":"0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88"} Oct 11 10:53:55.956891 master-0 kubenswrapper[4790]: I1011 10:53:55.956802 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a7a-account-create-9c44k"] Oct 11 10:53:55.959070 master-0 kubenswrapper[4790]: W1011 10:53:55.959002 4790 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc21954bc_9fb3_4d4e_8085_b2fcf628e0a5.slice/crio-e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45 WatchSource:0}: Error finding container e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45: Status 404 returned error can't find the container with id e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45 Oct 11 10:53:55.975414 master-0 kubenswrapper[4790]: I1011 10:53:55.975247 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-af51-account-create-tz8f4" Oct 11 10:53:55.981764 master-2 kubenswrapper[4776]: I1011 10:53:55.981586 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:55.988567 master-2 kubenswrapper[4776]: I1011 10:53:55.988506 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:56.072784 master-2 kubenswrapper[4776]: I1011 10:53:56.072517 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" path="/var/lib/kubelet/pods/36684b23-8b0f-409c-8b8b-c2402189f68e/volumes" Oct 11 10:53:56.109557 master-2 kubenswrapper[4776]: I1011 10:53:56.109489 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.110389 master-2 kubenswrapper[4776]: E1011 10:53:56.110354 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.110389 master-2 kubenswrapper[4776]: I1011 10:53:56.110390 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.110587 master-2 kubenswrapper[4776]: I1011 10:53:56.110569 4776 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.111431 master-2 kubenswrapper[4776]: I1011 10:53:56.111411 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.115936 master-2 kubenswrapper[4776]: I1011 10:53:56.115893 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:56.125589 master-2 kubenswrapper[4776]: I1011 10:53:56.125545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.128670 master-0 kubenswrapper[4790]: I1011 10:53:56.128577 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a7a-account-create-9c44k" event={"ID":"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5","Type":"ContainerStarted","Data":"3b1f890304ef9d089479614d38d81aabcf093a42157b52f334fa1a99c8f86aa6"} Oct 11 10:53:56.129068 master-0 kubenswrapper[4790]: I1011 10:53:56.129045 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a7a-account-create-9c44k" event={"ID":"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5","Type":"ContainerStarted","Data":"e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45"} Oct 11 10:53:56.156011 master-0 kubenswrapper[4790]: I1011 10:53:56.155924 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0a7a-account-create-9c44k" podStartSLOduration=1.1559017169999999 podStartE2EDuration="1.155901717s" podCreationTimestamp="2025-10-11 10:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:56.155470614 +0000 UTC m=+912.709930916" watchObservedRunningTime="2025-10-11 10:53:56.155901717 +0000 UTC m=+912.710362009" Oct 11 10:53:56.184182 master-1 kubenswrapper[4771]: I1011 10:53:56.184089 4771 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mtzk7-config-9fxhw"] Oct 11 10:53:56.191213 master-1 kubenswrapper[4771]: I1011 10:53:56.191135 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mtzk7-config-9fxhw"] Oct 11 10:53:56.289734 master-2 kubenswrapper[4776]: I1011 10:53:56.289614 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.289734 master-2 kubenswrapper[4776]: I1011 10:53:56.289714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290076 master-2 kubenswrapper[4776]: I1011 10:53:56.289994 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290238 master-2 kubenswrapper[4776]: I1011 10:53:56.290203 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 
10:53:56.290371 master-2 kubenswrapper[4776]: I1011 10:53:56.290350 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290462 master-2 kubenswrapper[4776]: I1011 10:53:56.290447 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.316393 master-0 kubenswrapper[4790]: I1011 10:53:56.316270 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c874e686-cbb2-4de3-8058-382a74b5742d" path="/var/lib/kubelet/pods/c874e686-cbb2-4de3-8058-382a74b5742d/volumes" Oct 11 10:53:56.360400 master-1 kubenswrapper[4771]: I1011 10:53:56.360275 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mtzk7-config-292gb"] Oct 11 10:53:56.360850 master-1 kubenswrapper[4771]: E1011 10:53:56.360814 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c637296f-4521-4528-b3f3-e247deac1ad8" containerName="ovn-config" Oct 11 10:53:56.360850 master-1 kubenswrapper[4771]: I1011 10:53:56.360837 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c637296f-4521-4528-b3f3-e247deac1ad8" containerName="ovn-config" Oct 11 10:53:56.361103 master-1 kubenswrapper[4771]: I1011 10:53:56.361075 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c637296f-4521-4528-b3f3-e247deac1ad8" containerName="ovn-config" Oct 11 10:53:56.363428 master-1 kubenswrapper[4771]: I1011 10:53:56.362100 4771 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.367110 master-1 kubenswrapper[4771]: I1011 10:53:56.366833 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:56.372178 master-1 kubenswrapper[4771]: I1011 10:53:56.369186 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-292gb"] Oct 11 10:53:56.391572 master-2 kubenswrapper[4776]: I1011 10:53:56.391509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391572 master-2 kubenswrapper[4776]: I1011 10:53:56.391573 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391649 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391697 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod 
\"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391720 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391886 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.392188 master-2 kubenswrapper[4776]: I1011 10:53:56.391938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod 
\"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.392501 master-2 kubenswrapper[4776]: I1011 10:53:56.392439 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.394153 master-2 kubenswrapper[4776]: I1011 10:53:56.394110 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.436742 master-2 kubenswrapper[4776]: I1011 10:53:56.436701 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.439830 master-2 kubenswrapper[4776]: I1011 10:53:56.439786 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.446073 master-1 kubenswrapper[4771]: I1011 10:53:56.445937 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c637296f-4521-4528-b3f3-e247deac1ad8" path="/var/lib/kubelet/pods/c637296f-4521-4528-b3f3-e247deac1ad8/volumes" Oct 11 10:53:56.481783 master-0 kubenswrapper[4790]: I1011 10:53:56.481681 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-af51-account-create-tz8f4"] Oct 11 10:53:56.484357 master-0 kubenswrapper[4790]: W1011 10:53:56.484285 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4838cae2_31c3_4b4d_a914_e95b0b6308be.slice/crio-0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9 WatchSource:0}: Error finding container 0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9: Status 404 returned error can't find the container with id 0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9 Oct 11 10:53:56.523962 master-2 kubenswrapper[4776]: I1011 10:53:56.523905 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-2" podUID="5a8ba065-7ef6-4bab-b20a-3bb274c93fa0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.128.0.122:5671: connect: connection refused" Oct 11 10:53:56.537107 master-2 kubenswrapper[4776]: I1011 10:53:56.537062 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:56.548048 master-1 kubenswrapper[4771]: I1011 10:53:56.547976 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " 
pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.548048 master-1 kubenswrapper[4771]: I1011 10:53:56.548048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8cm\" (UniqueName: \"kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.548236 master-1 kubenswrapper[4771]: I1011 10:53:56.548104 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.548236 master-1 kubenswrapper[4771]: I1011 10:53:56.548207 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.548328 master-1 kubenswrapper[4771]: I1011 10:53:56.548259 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.548328 master-1 kubenswrapper[4771]: I1011 10:53:56.548298 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650401 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650480 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.650897 master-1 kubenswrapper[4771]: I1011 10:53:56.650567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8cm\" (UniqueName: \"kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.652082 master-1 kubenswrapper[4771]: I1011 10:53:56.651831 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.652082 master-1 kubenswrapper[4771]: I1011 10:53:56.651982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.652225 master-1 kubenswrapper[4771]: I1011 10:53:56.652093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.652225 master-1 kubenswrapper[4771]: I1011 10:53:56.652157 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.653957 master-1 kubenswrapper[4771]: I1011 10:53:56.653902 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.767321 master-1 kubenswrapper[4771]: I1011 10:53:56.766318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8cm\" (UniqueName: \"kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm\") pod \"ovn-controller-mtzk7-config-292gb\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") " pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:56.963838 master-2 kubenswrapper[4776]: I1011 10:53:56.963790 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.964027 master-2 kubenswrapper[4776]: W1011 10:53:56.963959 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbedd9a1a_d96f_49da_93c4_971885dafbfa.slice/crio-c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0 WatchSource:0}: Error finding container c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0: Status 404 returned error can't find the container with id c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0 Oct 11 10:53:56.990754 master-1 kubenswrapper[4771]: I1011 10:53:56.990546 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-292gb" Oct 11 10:53:57.011757 master-1 kubenswrapper[4771]: I1011 10:53:57.011652 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6fe99fba-e358-4203-a516-04b9ae19d789" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.129.0.103:5671: connect: connection refused" Oct 11 10:53:57.138853 master-0 kubenswrapper[4790]: I1011 10:53:57.138761 4790 generic.go:334] "Generic (PLEG): container finished" podID="c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" containerID="3b1f890304ef9d089479614d38d81aabcf093a42157b52f334fa1a99c8f86aa6" exitCode=0 Oct 11 10:53:57.139685 master-0 kubenswrapper[4790]: I1011 10:53:57.138889 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a7a-account-create-9c44k" event={"ID":"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5","Type":"ContainerDied","Data":"3b1f890304ef9d089479614d38d81aabcf093a42157b52f334fa1a99c8f86aa6"} Oct 11 10:53:57.141855 master-0 kubenswrapper[4790]: I1011 10:53:57.141779 4790 generic.go:334] "Generic (PLEG): container finished" podID="4838cae2-31c3-4b4d-a914-e95b0b6308be" containerID="677c51ee7ba248bdecdce7b7bb9d050175056a091f08201d76d54e3406eb2697" exitCode=0 Oct 11 10:53:57.141855 master-0 kubenswrapper[4790]: I1011 10:53:57.141832 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-af51-account-create-tz8f4" event={"ID":"4838cae2-31c3-4b4d-a914-e95b0b6308be","Type":"ContainerDied","Data":"677c51ee7ba248bdecdce7b7bb9d050175056a091f08201d76d54e3406eb2697"} Oct 11 10:53:57.142163 master-0 kubenswrapper[4790]: I1011 10:53:57.141888 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-af51-account-create-tz8f4" event={"ID":"4838cae2-31c3-4b4d-a914-e95b0b6308be","Type":"ContainerStarted","Data":"0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9"} Oct 11 10:53:57.482169 master-1 kubenswrapper[4771]: I1011 
10:53:57.482118 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mtzk7-config-292gb"] Oct 11 10:53:57.485526 master-1 kubenswrapper[4771]: W1011 10:53:57.485459 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a75aab2_123f_491f_af07_939ade33aadc.slice/crio-c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc WatchSource:0}: Error finding container c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc: Status 404 returned error can't find the container with id c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc Oct 11 10:53:57.540603 master-2 kubenswrapper[4776]: I1011 10:53:57.540324 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631264 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631446 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 
master-2 kubenswrapper[4776]: I1011 10:53:57.631492 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") "
Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631517 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") "
Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631574 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") "
Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631597 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") "
Oct 11 10:53:57.633698 master-2 kubenswrapper[4776]: I1011 10:53:57.633620 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:57.634176 master-2 kubenswrapper[4776]: I1011 10:53:57.634135 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:53:57.636781 master-2 kubenswrapper[4776]: I1011 10:53:57.636467 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr" (OuterVolumeSpecName: "kube-api-access-jjdjr") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "kube-api-access-jjdjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:57.640645 master-1 kubenswrapper[4771]: I1011 10:53:57.640564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-292gb" event={"ID":"5a75aab2-123f-491f-af07-939ade33aadc","Type":"ContainerStarted","Data":"c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc"}
Oct 11 10:53:57.641019 master-2 kubenswrapper[4776]: I1011 10:53:57.640985 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:53:57.653092 master-2 kubenswrapper[4776]: I1011 10:53:57.652142 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts" (OuterVolumeSpecName: "scripts") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:57.654752 master-2 kubenswrapper[4776]: I1011 10:53:57.654721 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:53:57.658320 master-2 kubenswrapper[4776]: I1011 10:53:57.658267 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733564 4776 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733625 4776 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733634 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733644 4776 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733653 4776 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733663 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733691 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:57.922317 master-2 kubenswrapper[4776]: I1011 10:53:57.922255 4776 generic.go:334] "Generic (PLEG): container finished" podID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerID="4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e" exitCode=0
Oct 11 10:53:57.922577 master-2 kubenswrapper[4776]: I1011 10:53:57.922315 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerDied","Data":"4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e"}
Oct 11 10:53:57.922577 master-2 kubenswrapper[4776]: I1011 10:53:57.922374 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerStarted","Data":"c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0"}
Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924299 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerDied","Data":"de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13"}
Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924326 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx"
Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924344 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13"
Oct 11 10:53:58.599451 master-0 kubenswrapper[4790]: I1011 10:53:58.599388 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a7a-account-create-9c44k"
Oct 11 10:53:58.654201 master-1 kubenswrapper[4771]: I1011 10:53:58.654107 4771 generic.go:334] "Generic (PLEG): container finished" podID="5a75aab2-123f-491f-af07-939ade33aadc" containerID="b9f9706961ea78a9f4e52f8e9ebb80aedc250ae90f55ef49b6cd39d2d53a0f62" exitCode=0
Oct 11 10:53:58.654201 master-1 kubenswrapper[4771]: I1011 10:53:58.654190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-292gb" event={"ID":"5a75aab2-123f-491f-af07-939ade33aadc","Type":"ContainerDied","Data":"b9f9706961ea78a9f4e52f8e9ebb80aedc250ae90f55ef49b6cd39d2d53a0f62"}
Oct 11 10:53:58.663817 master-0 kubenswrapper[4790]: I1011 10:53:58.661247 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-af51-account-create-tz8f4"
Oct 11 10:53:58.691700 master-0 kubenswrapper[4790]: I1011 10:53:58.691613 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmddg\" (UniqueName: \"kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg\") pod \"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5\" (UID: \"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5\") "
Oct 11 10:53:58.695402 master-0 kubenswrapper[4790]: I1011 10:53:58.695353 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg" (OuterVolumeSpecName: "kube-api-access-dmddg") pod "c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" (UID: "c21954bc-9fb3-4d4e-8085-b2fcf628e0a5"). InnerVolumeSpecName "kube-api-access-dmddg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:58.793569 master-0 kubenswrapper[4790]: I1011 10:53:58.793409 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldn48\" (UniqueName: \"kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48\") pod \"4838cae2-31c3-4b4d-a914-e95b0b6308be\" (UID: \"4838cae2-31c3-4b4d-a914-e95b0b6308be\") "
Oct 11 10:53:58.794078 master-0 kubenswrapper[4790]: I1011 10:53:58.794039 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmddg\" (UniqueName: \"kubernetes.io/projected/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5-kube-api-access-dmddg\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:58.796856 master-0 kubenswrapper[4790]: I1011 10:53:58.796788 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48" (OuterVolumeSpecName: "kube-api-access-ldn48") pod "4838cae2-31c3-4b4d-a914-e95b0b6308be" (UID: "4838cae2-31c3-4b4d-a914-e95b0b6308be"). InnerVolumeSpecName "kube-api-access-ldn48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:58.897053 master-0 kubenswrapper[4790]: I1011 10:53:58.896983 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldn48\" (UniqueName: \"kubernetes.io/projected/4838cae2-31c3-4b4d-a914-e95b0b6308be-kube-api-access-ldn48\") on node \"master-0\" DevicePath \"\""
Oct 11 10:53:59.158417 master-0 kubenswrapper[4790]: I1011 10:53:59.158177 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a7a-account-create-9c44k" event={"ID":"c21954bc-9fb3-4d4e-8085-b2fcf628e0a5","Type":"ContainerDied","Data":"e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45"}
Oct 11 10:53:59.158417 master-0 kubenswrapper[4790]: I1011 10:53:59.158241 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a7a-account-create-9c44k"
Oct 11 10:53:59.158417 master-0 kubenswrapper[4790]: I1011 10:53:59.158251 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7aabdd1f9e81707ceac24468611fbf243bb7fbd9a16bb4d480bb8af844a4e45"
Oct 11 10:53:59.159822 master-0 kubenswrapper[4790]: I1011 10:53:59.159801 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-af51-account-create-tz8f4" event={"ID":"4838cae2-31c3-4b4d-a914-e95b0b6308be","Type":"ContainerDied","Data":"0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9"}
Oct 11 10:53:59.159941 master-0 kubenswrapper[4790]: I1011 10:53:59.159824 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb2113a61f088d79d33b08877923a158ef88b748a4b601ff9e7693ef5d257f9"
Oct 11 10:53:59.159941 master-0 kubenswrapper[4790]: I1011 10:53:59.159900 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-af51-account-create-tz8f4"
Oct 11 10:53:59.658319 master-2 kubenswrapper[4776]: I1011 10:53:59.658281 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck"
Oct 11 10:53:59.665954 master-2 kubenswrapper[4776]: I1011 10:53:59.665919 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.665970 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666010 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666022 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run" (OuterVolumeSpecName: "var-run") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666048 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666094 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666260 master-2 kubenswrapper[4776]: I1011 10:53:59.666135 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666260 master-2 kubenswrapper[4776]: I1011 10:53:59.666200 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") "
Oct 11 10:53:59.666325 master-2 kubenswrapper[4776]: I1011 10:53:59.666273 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:53:59.666518 master-2 kubenswrapper[4776]: I1011 10:53:59.666497 4776 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.666518 master-2 kubenswrapper[4776]: I1011 10:53:59.666515 4776 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.666596 master-2 kubenswrapper[4776]: I1011 10:53:59.666525 4776 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.666870 master-2 kubenswrapper[4776]: I1011 10:53:59.666839 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:59.667202 master-2 kubenswrapper[4776]: I1011 10:53:59.667161 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts" (OuterVolumeSpecName: "scripts") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:53:59.670872 master-2 kubenswrapper[4776]: I1011 10:53:59.670843 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp" (OuterVolumeSpecName: "kube-api-access-kpnvp") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "kube-api-access-kpnvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:53:59.709194 master-0 kubenswrapper[4790]: I1011 10:53:59.709014 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770392 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770443 4776 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770459 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") on node \"master-2\" DevicePath \"\""
Oct 11 10:53:59.943841 master-2 kubenswrapper[4776]: I1011 10:53:59.943801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerDied","Data":"c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0"}
Oct 11 10:53:59.943841 master-2 kubenswrapper[4776]: I1011 10:53:59.943839 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0"
Oct 11 10:53:59.944126 master-2 kubenswrapper[4776]: I1011 10:53:59.943891 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck"
Oct 11 10:54:00.031928 master-0 kubenswrapper[4790]: I1011 10:54:00.031849 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:54:00.037758 master-0 kubenswrapper[4790]: I1011 10:54:00.037227 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27-etc-swift\") pod \"swift-storage-0\" (UID: \"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27\") " pod="openstack/swift-storage-0"
Oct 11 10:54:00.122249 master-1 kubenswrapper[4771]: I1011 10:54:00.122102 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-292gb"
Oct 11 10:54:00.169200 master-0 kubenswrapper[4790]: I1011 10:54:00.169105 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Oct 11 10:54:00.241942 master-1 kubenswrapper[4771]: I1011 10:54:00.241845 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.241983 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.242033 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.242072 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.242122 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run" (OuterVolumeSpecName: "var-run") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.242156 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242314 master-1 kubenswrapper[4771]: I1011 10:54:00.242257 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg8cm\" (UniqueName: \"kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm\") pod \"5a75aab2-123f-491f-af07-939ade33aadc\" (UID: \"5a75aab2-123f-491f-af07-939ade33aadc\") "
Oct 11 10:54:00.242973 master-1 kubenswrapper[4771]: I1011 10:54:00.242140 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:54:00.242973 master-1 kubenswrapper[4771]: I1011 10:54:00.242210 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 10:54:00.242973 master-1 kubenswrapper[4771]: I1011 10:54:00.242932 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:00.243265 master-1 kubenswrapper[4771]: I1011 10:54:00.243151 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-additional-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.243265 master-1 kubenswrapper[4771]: I1011 10:54:00.243169 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-log-ovn\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.243265 master-1 kubenswrapper[4771]: I1011 10:54:00.243179 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.243265 master-1 kubenswrapper[4771]: I1011 10:54:00.243189 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a75aab2-123f-491f-af07-939ade33aadc-var-run-ovn\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.244259 master-1 kubenswrapper[4771]: I1011 10:54:00.244153 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts" (OuterVolumeSpecName: "scripts") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:00.247624 master-1 kubenswrapper[4771]: I1011 10:54:00.247559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm" (OuterVolumeSpecName: "kube-api-access-jg8cm") pod "5a75aab2-123f-491f-af07-939ade33aadc" (UID: "5a75aab2-123f-491f-af07-939ade33aadc"). InnerVolumeSpecName "kube-api-access-jg8cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:00.346033 master-1 kubenswrapper[4771]: I1011 10:54:00.345896 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg8cm\" (UniqueName: \"kubernetes.io/projected/5a75aab2-123f-491f-af07-939ade33aadc-kube-api-access-jg8cm\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.346033 master-1 kubenswrapper[4771]: I1011 10:54:00.346009 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a75aab2-123f-491f-af07-939ade33aadc-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 10:54:00.677582 master-1 kubenswrapper[4771]: I1011 10:54:00.677452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mtzk7-config-292gb" event={"ID":"5a75aab2-123f-491f-af07-939ade33aadc","Type":"ContainerDied","Data":"c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc"}
Oct 11 10:54:00.677582 master-1 kubenswrapper[4771]: I1011 10:54:00.677557 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mtzk7-config-292gb"
Oct 11 10:54:00.678095 master-1 kubenswrapper[4771]: I1011 10:54:00.677567 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4bf6c763f65cec074c6888894519b5a728d8d1eb1da7613bf076459d93a73cc"
Oct 11 10:54:00.708650 master-0 kubenswrapper[4790]: I1011 10:54:00.708578 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Oct 11 10:54:00.710743 master-0 kubenswrapper[4790]: W1011 10:54:00.710620 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9fc8ec6_f4bb_4b20_a262_f416bb5d2e27.slice/crio-e82dbb6268c1705ebb0fa6f09840b38a2e47b597bdabb61f3383dd3a977d3c51 WatchSource:0}: Error finding container e82dbb6268c1705ebb0fa6f09840b38a2e47b597bdabb61f3383dd3a977d3c51: Status 404 returned error can't find the container with id e82dbb6268c1705ebb0fa6f09840b38a2e47b597bdabb61f3383dd3a977d3c51
Oct 11 10:54:00.781682 master-2 kubenswrapper[4776]: I1011 10:54:00.781585 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"]
Oct 11 10:54:00.788606 master-2 kubenswrapper[4776]: I1011 10:54:00.788520 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"]
Oct 11 10:54:01.181341 master-0 kubenswrapper[4790]: I1011 10:54:01.181259 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"e82dbb6268c1705ebb0fa6f09840b38a2e47b597bdabb61f3383dd3a977d3c51"}
Oct 11 10:54:01.258727 master-1 kubenswrapper[4771]: I1011 10:54:01.258629 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mtzk7-config-292gb"]
Oct 11 10:54:01.269535 master-1 kubenswrapper[4771]: I1011 10:54:01.269459 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mtzk7-config-292gb"]
Oct 11 10:54:02.066629 master-2 kubenswrapper[4776]: I1011 10:54:02.066570 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" path="/var/lib/kubelet/pods/bedd9a1a-d96f-49da-93c4-971885dafbfa/volumes"
Oct 11 10:54:02.452569 master-1 kubenswrapper[4771]: I1011 10:54:02.452466 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a75aab2-123f-491f-af07-939ade33aadc" path="/var/lib/kubelet/pods/5a75aab2-123f-491f-af07-939ade33aadc/volumes"
Oct 11 10:54:02.735949 master-2 kubenswrapper[4776]: I1011 10:54:02.735881 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Oct 11 10:54:02.899989 master-1 kubenswrapper[4771]: I1011 10:54:02.898962 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:02.903191 master-1 kubenswrapper[4771]: I1011 10:54:02.903103 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:02.950084 master-1 kubenswrapper[4771]: I1011 10:54:02.950016 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt"
Oct 11 10:54:03.048184 master-1 kubenswrapper[4771]: I1011 10:54:03.048023 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"]
Oct 11 10:54:03.049211 master-1 kubenswrapper[4771]: I1011 10:54:03.049144 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="dnsmasq-dns" containerID="cri-o://e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db" gracePeriod=10
Oct 11 10:54:03.201315 master-0 kubenswrapper[4790]: I1011 10:54:03.201243 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"6d0b7d52579ae74fda5ef88219a149ef056d5d599e0ed232bb25bf15c7464b8c"}
Oct 11 10:54:03.654927 master-1 kubenswrapper[4771]: I1011 10:54:03.654749 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674"
Oct 11 10:54:03.712534 master-1 kubenswrapper[4771]: I1011 10:54:03.711473 4771 generic.go:334] "Generic (PLEG): container finished" podID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerID="e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db" exitCode=0
Oct 11 10:54:03.712534 master-1 kubenswrapper[4771]: I1011 10:54:03.711561 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674"
Oct 11 10:54:03.712534 master-1 kubenswrapper[4771]: I1011 10:54:03.711608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" event={"ID":"49a589aa-7d75-4aba-aca3-9fffa3d86378","Type":"ContainerDied","Data":"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"}
Oct 11 10:54:03.712534 master-1 kubenswrapper[4771]: I1011 10:54:03.711737 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b64bc6b99-wp674" event={"ID":"49a589aa-7d75-4aba-aca3-9fffa3d86378","Type":"ContainerDied","Data":"e04ccf4542c4af977ce52340f83783b293bb99776af706005aa7ec0a114852af"}
Oct 11 10:54:03.712534 master-1 kubenswrapper[4771]: I1011 10:54:03.711781 4771 scope.go:117] "RemoveContainer" containerID="e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"
Oct 11 10:54:03.714501 master-1 kubenswrapper[4771]: I1011 10:54:03.714464 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:03.745490 master-1 kubenswrapper[4771]: I1011 10:54:03.745428 4771 scope.go:117] "RemoveContainer" containerID="9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f"
Oct 11 10:54:03.788182 master-1 kubenswrapper[4771]: I1011 10:54:03.788102 4771 scope.go:117] "RemoveContainer" containerID="e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"
Oct 11 10:54:03.788892 master-1 kubenswrapper[4771]: E1011 10:54:03.788843 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db\": container with ID starting with e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db not found: ID does not exist" containerID="e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"
Oct 11 10:54:03.789061 master-1 kubenswrapper[4771]: I1011 10:54:03.788889 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db"} err="failed to get container status \"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db\": rpc error: code = NotFound desc = could not find container \"e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db\": container with ID starting with e888654814b1c6bb30b9a08fe67a78675ceee7885666f9f7dfa7244292fa27db not found: ID does not exist"
Oct 11 10:54:03.789061 master-1 kubenswrapper[4771]: I1011 10:54:03.788923 4771 scope.go:117] "RemoveContainer" containerID="9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f"
Oct 11 10:54:03.789821 master-1 kubenswrapper[4771]: E1011 10:54:03.789733 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f\": container with ID starting with 9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f not found: ID does not exist" containerID="9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f"
Oct 11 10:54:03.789971 master-1 kubenswrapper[4771]: I1011 10:54:03.789818 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f"} err="failed to get container status \"9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f\": rpc error: code = NotFound desc = could not find container \"9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f\": container with ID starting with 9c117e659394937d3c04e2bfcad7f394e379e5635acae77fe7dccd35b074376f not found: ID does not exist"
Oct 11 10:54:03.835218 master-1 kubenswrapper[4771]: I1011 10:54:03.835148 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") "
Oct 11 10:54:03.835218 master-1 kubenswrapper[4771]: I1011 10:54:03.835195 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") "
Oct 11 10:54:03.835218 master-1 kubenswrapper[4771]: I1011 10:54:03.835235 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgbhb\" (UniqueName: \"kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") "
Oct 11 10:54:03.835655 master-1 kubenswrapper[4771]: I1011 10:54:03.835284 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") "
Oct 11 10:54:03.835655 master-1 kubenswrapper[4771]: I1011 10:54:03.835370 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") "
Oct 11 10:54:03.840427 master-1 kubenswrapper[4771]: I1011 10:54:03.840271 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb" (OuterVolumeSpecName: "kube-api-access-lgbhb") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378"). InnerVolumeSpecName "kube-api-access-lgbhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:03.869021 master-1 kubenswrapper[4771]: I1011 10:54:03.868906 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:03.883186 master-1 kubenswrapper[4771]: I1011 10:54:03.883099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:03.902275 master-1 kubenswrapper[4771]: E1011 10:54:03.901718 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config podName:49a589aa-7d75-4aba-aca3-9fffa3d86378 nodeName:}" failed. No retries permitted until 2025-10-11 10:54:04.401683652 +0000 UTC m=+1676.375910113 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378") : error deleting /var/lib/kubelet/pods/49a589aa-7d75-4aba-aca3-9fffa3d86378/volume-subpaths: remove /var/lib/kubelet/pods/49a589aa-7d75-4aba-aca3-9fffa3d86378/volume-subpaths: no such file or directory Oct 11 10:54:03.902275 master-1 kubenswrapper[4771]: I1011 10:54:03.902112 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:03.937609 master-1 kubenswrapper[4771]: I1011 10:54:03.937552 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:03.937609 master-1 kubenswrapper[4771]: I1011 10:54:03.937593 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgbhb\" (UniqueName: \"kubernetes.io/projected/49a589aa-7d75-4aba-aca3-9fffa3d86378-kube-api-access-lgbhb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:03.937609 master-1 kubenswrapper[4771]: I1011 10:54:03.937605 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:03.937609 master-1 kubenswrapper[4771]: I1011 10:54:03.937614 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:04.219528 master-0 kubenswrapper[4790]: I1011 10:54:04.219335 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"37820c5383e5d4e38231b402469255b405cc0fdd5561fb4624d2034da9cbb9d0"} Oct 11 10:54:04.219528 master-0 kubenswrapper[4790]: I1011 10:54:04.219407 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"6590447afdfc62a9ba2ca9fa5c523b088ed4df3cc12cef99a7ede955e8ce36c3"} Oct 11 10:54:04.219528 master-0 kubenswrapper[4790]: I1011 10:54:04.219421 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"0695998edd7fc240eb37cac4d5324b8c178273057c6c8002869e01e96d4b9981"} Oct 11 10:54:04.450236 master-1 kubenswrapper[4771]: I1011 10:54:04.450176 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") pod \"49a589aa-7d75-4aba-aca3-9fffa3d86378\" (UID: \"49a589aa-7d75-4aba-aca3-9fffa3d86378\") " Oct 11 10:54:04.450982 master-1 kubenswrapper[4771]: I1011 10:54:04.450942 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config" (OuterVolumeSpecName: "config") pod "49a589aa-7d75-4aba-aca3-9fffa3d86378" (UID: "49a589aa-7d75-4aba-aca3-9fffa3d86378"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:04.455437 master-1 kubenswrapper[4771]: I1011 10:54:04.455404 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 11 10:54:04.552200 master-1 kubenswrapper[4771]: I1011 10:54:04.552109 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a589aa-7d75-4aba-aca3-9fffa3d86378-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:04.675453 master-1 kubenswrapper[4771]: I1011 10:54:04.675400 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"] Oct 11 10:54:04.684663 master-1 kubenswrapper[4771]: I1011 10:54:04.684602 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b64bc6b99-wp674"] Oct 11 10:54:05.231306 master-0 kubenswrapper[4790]: I1011 10:54:05.231250 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"867964b4fb417e2b33378e4504b98f42f2d04e7627aa3f2866d64cb2ad2c84f3"} Oct 11 10:54:05.514005 master-0 kubenswrapper[4790]: I1011 10:54:05.513946 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-1" Oct 11 10:54:06.250670 master-0 kubenswrapper[4790]: I1011 10:54:06.250504 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"5313bf3db3332a64e56381450b110677c0c55749837d6ead93e1209b714da374"} Oct 11 10:54:06.250670 master-0 kubenswrapper[4790]: I1011 10:54:06.250573 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"00a1fe54fd54b62b900d96589f4dbc1f0c193ade025c3d940ed3f5b16684e35e"} Oct 11 10:54:06.250670 master-0 kubenswrapper[4790]: I1011 10:54:06.250588 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"83ad5d1bee1a61b4fdd539a237e5a303a9d8452b0460832b0320f9e19bb21759"} Oct 11 10:54:06.456638 master-1 kubenswrapper[4771]: I1011 10:54:06.456541 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" path="/var/lib/kubelet/pods/49a589aa-7d75-4aba-aca3-9fffa3d86378/volumes" Oct 11 10:54:06.525419 master-2 kubenswrapper[4776]: I1011 10:54:06.525319 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:54:06.973050 master-1 kubenswrapper[4771]: I1011 10:54:06.972917 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 11 10:54:06.973295 master-1 kubenswrapper[4771]: I1011 10:54:06.973212 4771 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="prometheus" containerID="cri-o://dceeb58fe69a42771858429ecb1c63834e43aa17881864b8125d37688c790df5" gracePeriod=600 Oct 11 10:54:06.973405 master-1 kubenswrapper[4771]: I1011 10:54:06.973322 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="thanos-sidecar" containerID="cri-o://92f430c78bc4b24a68001349b3a9ac48c77542314f208344451dc9fc116683d7" gracePeriod=600 Oct 11 10:54:06.973492 master-1 kubenswrapper[4771]: I1011 10:54:06.973322 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="config-reloader" containerID="cri-o://71476de09aca12c350c9bd4fa53bcc64aae8fab0a0998dc69e0421858e6b79c4" gracePeriod=600 Oct 11 10:54:07.009648 master-1 kubenswrapper[4771]: I1011 10:54:07.009590 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 11 10:54:07.018305 master-2 kubenswrapper[4776]: I1011 10:54:07.017543 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerStarted","Data":"f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826"} Oct 11 10:54:07.049751 master-2 kubenswrapper[4776]: I1011 10:54:07.049540 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-848z9" podStartSLOduration=1.875301651 podStartE2EDuration="13.049518599s" podCreationTimestamp="2025-10-11 10:53:54 +0000 UTC" firstStartedPulling="2025-10-11 10:53:55.023609614 +0000 UTC m=+1669.808036323" lastFinishedPulling="2025-10-11 10:54:06.197826562 +0000 UTC 
m=+1680.982253271" observedRunningTime="2025-10-11 10:54:07.042833588 +0000 UTC m=+1681.827260297" watchObservedRunningTime="2025-10-11 10:54:07.049518599 +0000 UTC m=+1681.833945308" Oct 11 10:54:07.290583 master-0 kubenswrapper[4790]: I1011 10:54:07.290519 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"bafaa58ca2ae0811bca6369faf1f3eeffaa7f6659d4cd55d3a5f75b343aa6af7"} Oct 11 10:54:07.769719 master-1 kubenswrapper[4771]: I1011 10:54:07.768863 4771 generic.go:334] "Generic (PLEG): container finished" podID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerID="92f430c78bc4b24a68001349b3a9ac48c77542314f208344451dc9fc116683d7" exitCode=0 Oct 11 10:54:07.769719 master-1 kubenswrapper[4771]: I1011 10:54:07.768911 4771 generic.go:334] "Generic (PLEG): container finished" podID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerID="71476de09aca12c350c9bd4fa53bcc64aae8fab0a0998dc69e0421858e6b79c4" exitCode=0 Oct 11 10:54:07.769719 master-1 kubenswrapper[4771]: I1011 10:54:07.768921 4771 generic.go:334] "Generic (PLEG): container finished" podID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerID="dceeb58fe69a42771858429ecb1c63834e43aa17881864b8125d37688c790df5" exitCode=0 Oct 11 10:54:07.769719 master-1 kubenswrapper[4771]: I1011 10:54:07.768950 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerDied","Data":"92f430c78bc4b24a68001349b3a9ac48c77542314f208344451dc9fc116683d7"} Oct 11 10:54:07.769719 master-1 kubenswrapper[4771]: I1011 10:54:07.768984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerDied","Data":"71476de09aca12c350c9bd4fa53bcc64aae8fab0a0998dc69e0421858e6b79c4"} Oct 11 10:54:07.769719 master-1 
kubenswrapper[4771]: I1011 10:54:07.768998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerDied","Data":"dceeb58fe69a42771858429ecb1c63834e43aa17881864b8125d37688c790df5"} Oct 11 10:54:08.046279 master-1 kubenswrapper[4771]: I1011 10:54:08.046239 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 11 10:54:08.131056 master-1 kubenswrapper[4771]: I1011 10:54:08.130991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131305 master-1 kubenswrapper[4771]: I1011 10:54:08.131076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131365 master-1 kubenswrapper[4771]: I1011 10:54:08.131312 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131497 master-1 kubenswrapper[4771]: I1011 10:54:08.131464 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq956\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131543 master-1 
kubenswrapper[4771]: I1011 10:54:08.131507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131613 master-1 kubenswrapper[4771]: I1011 10:54:08.131585 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131693 master-1 kubenswrapper[4771]: I1011 10:54:08.131665 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.131746 master-1 kubenswrapper[4771]: I1011 10:54:08.131724 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file\") pod \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\" (UID: \"11c30d1f-16d5-4106-bfae-e6c2d2f64f13\") " Oct 11 10:54:08.134217 master-1 kubenswrapper[4771]: I1011 10:54:08.134159 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:08.134775 master-1 kubenswrapper[4771]: I1011 10:54:08.134736 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:08.138842 master-1 kubenswrapper[4771]: I1011 10:54:08.138745 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out" (OuterVolumeSpecName: "config-out") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:08.139264 master-1 kubenswrapper[4771]: I1011 10:54:08.139202 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:08.139472 master-1 kubenswrapper[4771]: I1011 10:54:08.139433 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config" (OuterVolumeSpecName: "config") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:08.156020 master-1 kubenswrapper[4771]: I1011 10:54:08.155989 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config" (OuterVolumeSpecName: "web-config") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:08.157007 master-1 kubenswrapper[4771]: I1011 10:54:08.156938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956" (OuterVolumeSpecName: "kube-api-access-lq956") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "kube-api-access-lq956". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:08.167338 master-1 kubenswrapper[4771]: I1011 10:54:08.167027 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "11c30d1f-16d5-4106-bfae-e6c2d2f64f13" (UID: "11c30d1f-16d5-4106-bfae-e6c2d2f64f13"). InnerVolumeSpecName "pvc-d985c5d6-9363-4eef-9640-f61388292365". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234036 4771 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-web-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234076 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-prometheus-metric-storage-rulefiles-0\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234090 4771 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-thanos-prometheus-http-client-file\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234103 4771 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-tls-assets\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234114 4771 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config-out\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234155 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") on node \"master-1\" " Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234167 4771 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lq956\" (UniqueName: \"kubernetes.io/projected/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-kube-api-access-lq956\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.234465 master-1 kubenswrapper[4771]: I1011 10:54:08.234176 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11c30d1f-16d5-4106-bfae-e6c2d2f64f13-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.248292 master-1 kubenswrapper[4771]: I1011 10:54:08.248248 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 11 10:54:08.248530 master-1 kubenswrapper[4771]: I1011 10:54:08.248428 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d985c5d6-9363-4eef-9640-f61388292365" (UniqueName: "kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2") on node "master-1" Oct 11 10:54:08.307350 master-0 kubenswrapper[4790]: I1011 10:54:08.307268 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"b10f815cdec53ca320a8e1670becf4898c676fe9c38c1a504586e32f90b9087b"} Oct 11 10:54:08.307350 master-0 kubenswrapper[4790]: I1011 10:54:08.307325 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"e584c55d849dfecfe9f65ef5697b782a271d717a25007b147cd6f193a202d356"} Oct 11 10:54:08.307350 master-0 kubenswrapper[4790]: I1011 10:54:08.307340 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"d39148494859b731e1f949702c63e3c8957a139d80a7700e1139a4b645942301"} Oct 11 10:54:08.336866 master-1 kubenswrapper[4771]: I1011 10:54:08.336753 4771 reconciler_common.go:293] "Volume detached for 
volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:08.749249 master-0 kubenswrapper[4790]: I1011 10:54:08.749184 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-7vsxp"] Oct 11 10:54:08.753224 master-0 kubenswrapper[4790]: E1011 10:54:08.753140 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4838cae2-31c3-4b4d-a914-e95b0b6308be" containerName="mariadb-account-create" Oct 11 10:54:08.753224 master-0 kubenswrapper[4790]: I1011 10:54:08.753205 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="4838cae2-31c3-4b4d-a914-e95b0b6308be" containerName="mariadb-account-create" Oct 11 10:54:08.753478 master-0 kubenswrapper[4790]: E1011 10:54:08.753243 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" containerName="mariadb-account-create" Oct 11 10:54:08.753478 master-0 kubenswrapper[4790]: I1011 10:54:08.753255 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" containerName="mariadb-account-create" Oct 11 10:54:08.753570 master-0 kubenswrapper[4790]: I1011 10:54:08.753480 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" containerName="mariadb-account-create" Oct 11 10:54:08.753570 master-0 kubenswrapper[4790]: I1011 10:54:08.753504 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="4838cae2-31c3-4b4d-a914-e95b0b6308be" containerName="mariadb-account-create" Oct 11 10:54:08.758015 master-0 kubenswrapper[4790]: I1011 10:54:08.755278 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-7vsxp" Oct 11 10:54:08.777683 master-1 kubenswrapper[4771]: I1011 10:54:08.777547 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11c30d1f-16d5-4106-bfae-e6c2d2f64f13","Type":"ContainerDied","Data":"b089e8e4b13a2f28ac29e4b38aaf2c96910827eb74888a884dedec830020d9fe"} Oct 11 10:54:08.777683 master-1 kubenswrapper[4771]: I1011 10:54:08.777617 4771 scope.go:117] "RemoveContainer" containerID="92f430c78bc4b24a68001349b3a9ac48c77542314f208344451dc9fc116683d7" Oct 11 10:54:08.778617 master-1 kubenswrapper[4771]: I1011 10:54:08.778582 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 11 10:54:08.790164 master-0 kubenswrapper[4790]: I1011 10:54:08.789634 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7vsxp"] Oct 11 10:54:08.799890 master-1 kubenswrapper[4771]: I1011 10:54:08.799820 4771 scope.go:117] "RemoveContainer" containerID="71476de09aca12c350c9bd4fa53bcc64aae8fab0a0998dc69e0421858e6b79c4" Oct 11 10:54:08.813952 master-1 kubenswrapper[4771]: I1011 10:54:08.812728 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 11 10:54:08.819445 master-1 kubenswrapper[4771]: I1011 10:54:08.819412 4771 scope.go:117] "RemoveContainer" containerID="dceeb58fe69a42771858429ecb1c63834e43aa17881864b8125d37688c790df5" Oct 11 10:54:08.822136 master-1 kubenswrapper[4771]: I1011 10:54:08.822107 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 11 10:54:08.850618 master-1 kubenswrapper[4771]: I1011 10:54:08.850461 4771 scope.go:117] "RemoveContainer" containerID="1023c854292d2503211da52aaf16aa7e2199948c97ebed99bad537459ca3e33b" Oct 11 10:54:08.853684 master-1 kubenswrapper[4771]: I1011 10:54:08.853642 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"]
Oct 11 10:54:08.853989 master-1 kubenswrapper[4771]: E1011 10:54:08.853962 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="dnsmasq-dns"
Oct 11 10:54:08.853989 master-1 kubenswrapper[4771]: I1011 10:54:08.853986 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="dnsmasq-dns"
Oct 11 10:54:08.853989 master-1 kubenswrapper[4771]: E1011 10:54:08.854002 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a75aab2-123f-491f-af07-939ade33aadc" containerName="ovn-config"
Oct 11 10:54:08.853989 master-1 kubenswrapper[4771]: I1011 10:54:08.854012 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a75aab2-123f-491f-af07-939ade33aadc" containerName="ovn-config"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: E1011 10:54:08.854026 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="init"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: I1011 10:54:08.854037 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="init"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: E1011 10:54:08.854046 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="thanos-sidecar"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: I1011 10:54:08.854054 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="thanos-sidecar"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: E1011 10:54:08.854080 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="init-config-reloader"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: I1011 10:54:08.854089 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="init-config-reloader"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: E1011 10:54:08.854108 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="prometheus"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: I1011 10:54:08.854116 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="prometheus"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: E1011 10:54:08.854127 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="config-reloader"
Oct 11 10:54:08.854238 master-1 kubenswrapper[4771]: I1011 10:54:08.854137 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="config-reloader"
Oct 11 10:54:08.855028 master-1 kubenswrapper[4771]: I1011 10:54:08.854278 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="thanos-sidecar"
Oct 11 10:54:08.855028 master-1 kubenswrapper[4771]: I1011 10:54:08.854294 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a589aa-7d75-4aba-aca3-9fffa3d86378" containerName="dnsmasq-dns"
Oct 11 10:54:08.855028 master-1 kubenswrapper[4771]: I1011 10:54:08.854308 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a75aab2-123f-491f-af07-939ade33aadc" containerName="ovn-config"
Oct 11 10:54:08.855028 master-1 kubenswrapper[4771]: I1011 10:54:08.854320 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="prometheus"
Oct 11 10:54:08.855028 master-1 kubenswrapper[4771]: I1011 10:54:08.854330 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="config-reloader"
Oct 11 10:54:08.856001 master-1 kubenswrapper[4771]: I1011 10:54:08.855972 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.861913 master-1 kubenswrapper[4771]: I1011 10:54:08.861400 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Oct 11 10:54:08.861913 master-1 kubenswrapper[4771]: I1011 10:54:08.861647 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Oct 11 10:54:08.862124 master-1 kubenswrapper[4771]: I1011 10:54:08.861699 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Oct 11 10:54:08.865520 master-1 kubenswrapper[4771]: I1011 10:54:08.863365 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Oct 11 10:54:08.865520 master-1 kubenswrapper[4771]: I1011 10:54:08.863890 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Oct 11 10:54:08.869476 master-0 kubenswrapper[4790]: I1011 10:54:08.869343 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tfdt\" (UniqueName: \"kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt\") pod \"heat-db-create-7vsxp\" (UID: \"d58f3b14-e8da-4046-afb1-c376a65ef16e\") " pod="openstack/heat-db-create-7vsxp"
Oct 11 10:54:08.876447 master-1 kubenswrapper[4771]: I1011 10:54:08.876261 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Oct 11 10:54:08.883309 master-1 kubenswrapper[4771]: I1011 10:54:08.883130 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 11 10:54:08.906914 master-0 kubenswrapper[4790]: I1011 10:54:08.906823 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-l7rcp"]
Oct 11 10:54:08.908233 master-0 kubenswrapper[4790]: I1011 10:54:08.908202 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l7rcp"
Oct 11 10:54:08.931429 master-0 kubenswrapper[4790]: I1011 10:54:08.931292 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l7rcp"]
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.948853 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.948912 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnmc9\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-kube-api-access-jnmc9\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.948944 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.948962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.948987 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949060 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/abb599b9-db44-492a-bc73-6ff5a2c212d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949092 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/abb599b9-db44-492a-bc73-6ff5a2c212d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949115 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.951380 master-1 kubenswrapper[4771]: I1011 10:54:08.949143 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:08.971668 master-0 kubenswrapper[4790]: I1011 10:54:08.971581 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tfdt\" (UniqueName: \"kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt\") pod \"heat-db-create-7vsxp\" (UID: \"d58f3b14-e8da-4046-afb1-c376a65ef16e\") " pod="openstack/heat-db-create-7vsxp"
Oct 11 10:54:08.994375 master-0 kubenswrapper[4790]: I1011 10:54:08.994298 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tfdt\" (UniqueName: \"kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt\") pod \"heat-db-create-7vsxp\" (UID: \"d58f3b14-e8da-4046-afb1-c376a65ef16e\") " pod="openstack/heat-db-create-7vsxp"
Oct 11 10:54:09.040952 master-0 kubenswrapper[4790]: I1011 10:54:09.040657 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jmmst"]
Oct 11 10:54:09.041867 master-0 kubenswrapper[4790]: I1011 10:54:09.041825 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jmmst"
Oct 11 10:54:09.062534 master-0 kubenswrapper[4790]: I1011 10:54:09.062460 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jmmst"]
Oct 11 10:54:09.062550 master-1 kubenswrapper[4771]: I1011 10:54:09.062441 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062728 master-1 kubenswrapper[4771]: I1011 10:54:09.062580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnmc9\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-kube-api-access-jnmc9\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062728 master-1 kubenswrapper[4771]: I1011 10:54:09.062630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062728 master-1 kubenswrapper[4771]: I1011 10:54:09.062664 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062728 master-1 kubenswrapper[4771]: I1011 10:54:09.062691 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062867 master-1 kubenswrapper[4771]: I1011 10:54:09.062733 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062867 master-1 kubenswrapper[4771]: I1011 10:54:09.062763 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062867 master-1 kubenswrapper[4771]: I1011 10:54:09.062791 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/abb599b9-db44-492a-bc73-6ff5a2c212d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062867 master-1 kubenswrapper[4771]: I1011 10:54:09.062828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/abb599b9-db44-492a-bc73-6ff5a2c212d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.062867 master-1 kubenswrapper[4771]: I1011 10:54:09.062857 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.063016 master-1 kubenswrapper[4771]: I1011 10:54:09.062886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.068123 master-1 kubenswrapper[4771]: I1011 10:54:09.068086 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.068123 master-1 kubenswrapper[4771]: I1011 10:54:09.068122 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.068406 master-1 kubenswrapper[4771]: I1011 10:54:09.068371 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.069171 master-1 kubenswrapper[4771]: I1011 10:54:09.068803 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/abb599b9-db44-492a-bc73-6ff5a2c212d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.069535 master-1 kubenswrapper[4771]: I1011 10:54:09.069504 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.073016 master-1 kubenswrapper[4771]: I1011 10:54:09.072053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/abb599b9-db44-492a-bc73-6ff5a2c212d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.073016 master-1 kubenswrapper[4771]: I1011 10:54:09.072965 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.073682 master-1 kubenswrapper[4771]: I1011 10:54:09.073629 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.073886 master-1 kubenswrapper[4771]: I1011 10:54:09.073843 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/abb599b9-db44-492a-bc73-6ff5a2c212d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.074382 master-1 kubenswrapper[4771]: I1011 10:54:09.074288 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:54:09.074382 master-1 kubenswrapper[4771]: I1011 10:54:09.074342 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/37699e2b75858a97c8af891d5e1a76727de9abb22a62dc041bfd38b0b8d8c160/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.081732 master-0 kubenswrapper[4790]: I1011 10:54:09.080871 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7vsxp"
Oct 11 10:54:09.087732 master-0 kubenswrapper[4790]: I1011 10:54:09.083186 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx4m\" (UniqueName: \"kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m\") pod \"cinder-db-create-l7rcp\" (UID: \"dfe51cce-787f-4883-8b8f-f1ed50caa3d3\") " pod="openstack/cinder-db-create-l7rcp"
Oct 11 10:54:09.102342 master-1 kubenswrapper[4771]: I1011 10:54:09.102247 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnmc9\" (UniqueName: \"kubernetes.io/projected/abb599b9-db44-492a-bc73-6ff5a2c212d5-kube-api-access-jnmc9\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:09.147312 master-2 kubenswrapper[4776]: I1011 10:54:09.147200 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8dnfj"]
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: E1011 10:54:09.147542 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance"
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147555 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance"
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: E1011 10:54:09.147572 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config"
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147578 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config"
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147734 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config"
Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147750 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance"
Oct 11 10:54:09.148343 master-2 kubenswrapper[4776]: I1011 10:54:09.148322 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.151396 master-2 kubenswrapper[4776]: I1011 10:54:09.151323 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 11 10:54:09.151650 master-2 kubenswrapper[4776]: I1011 10:54:09.151385 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 11 10:54:09.152337 master-2 kubenswrapper[4776]: I1011 10:54:09.152311 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 11 10:54:09.171534 master-2 kubenswrapper[4776]: I1011 10:54:09.171462 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8dnfj"]
Oct 11 10:54:09.185858 master-0 kubenswrapper[4790]: I1011 10:54:09.185778 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp54h\" (UniqueName: \"kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h\") pod \"neutron-db-create-jmmst\" (UID: \"d40b588a-5009-41c8-b8b0-b417de6693ac\") " pod="openstack/neutron-db-create-jmmst"
Oct 11 10:54:09.186125 master-0 kubenswrapper[4790]: I1011 10:54:09.185905 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mx4m\" (UniqueName: \"kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m\") pod \"cinder-db-create-l7rcp\" (UID: \"dfe51cce-787f-4883-8b8f-f1ed50caa3d3\") " pod="openstack/cinder-db-create-l7rcp"
Oct 11 10:54:09.213988 master-0 kubenswrapper[4790]: I1011 10:54:09.213747 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mx4m\" (UniqueName: \"kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m\") pod \"cinder-db-create-l7rcp\" (UID: \"dfe51cce-787f-4883-8b8f-f1ed50caa3d3\") " pod="openstack/cinder-db-create-l7rcp"
Oct 11 10:54:09.235262 master-0 kubenswrapper[4790]: I1011 10:54:09.235185 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l7rcp"
Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.250776 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.250868 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.251191 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.292472 master-0 kubenswrapper[4790]: I1011 10:54:09.292284 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp54h\" (UniqueName: \"kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h\") pod \"neutron-db-create-jmmst\" (UID: \"d40b588a-5009-41c8-b8b0-b417de6693ac\") " pod="openstack/neutron-db-create-jmmst"
Oct 11 10:54:09.316364 master-0 kubenswrapper[4790]: I1011 10:54:09.316280 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp54h\" (UniqueName: \"kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h\") pod \"neutron-db-create-jmmst\" (UID: \"d40b588a-5009-41c8-b8b0-b417de6693ac\") " pod="openstack/neutron-db-create-jmmst"
Oct 11 10:54:09.353600 master-2 kubenswrapper[4776]: I1011 10:54:09.353309 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.353889 master-2 kubenswrapper[4776]: I1011 10:54:09.353634 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.353889 master-2 kubenswrapper[4776]: I1011 10:54:09.353763 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.358461 master-2 kubenswrapper[4776]: I1011 10:54:09.358386 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.362853 master-2 kubenswrapper[4776]: I1011 10:54:09.362795 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.364912 master-0 kubenswrapper[4790]: I1011 10:54:09.364630 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jmmst"
Oct 11 10:54:09.374918 master-2 kubenswrapper[4776]: I1011 10:54:09.374866 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.476345 master-2 kubenswrapper[4776]: I1011 10:54:09.476255 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:09.890936 master-2 kubenswrapper[4776]: I1011 10:54:09.890875 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8dnfj"]
Oct 11 10:54:09.891409 master-2 kubenswrapper[4776]: W1011 10:54:09.891384 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode873bed5_1a50_4fb0_81b1_2225f4893b28.slice/crio-0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb WatchSource:0}: Error finding container 0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb: Status 404 returned error can't find the container with id 0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb
Oct 11 10:54:10.053218 master-2 kubenswrapper[4776]: I1011 10:54:10.053156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerStarted","Data":"0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb"}
Oct 11 10:54:10.371337 master-0 kubenswrapper[4790]: I1011 10:54:10.371252 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"88f16cf90fed3886e443e53b5fb532d3ed1d15ab715a3fc898671a81b5f8041f"}
Oct 11 10:54:10.451725 master-1 kubenswrapper[4771]: I1011 10:54:10.451533 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" path="/var/lib/kubelet/pods/11c30d1f-16d5-4106-bfae-e6c2d2f64f13/volumes"
Oct 11 10:54:10.536869 master-0 kubenswrapper[4790]: I1011 10:54:10.536814 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jmmst"]
Oct 11 10:54:10.542139 master-0 kubenswrapper[4790]: W1011 10:54:10.541429 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40b588a_5009_41c8_b8b0_b417de6693ac.slice/crio-21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997 WatchSource:0}: Error finding container 21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997: Status 404 returned error can't find the container with id 21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997
Oct 11 10:54:10.585879 master-1 kubenswrapper[4771]: I1011 10:54:10.585751 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d985c5d6-9363-4eef-9640-f61388292365\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0990b59f-0f58-467f-8736-1b158b7725d2\") pod \"prometheus-metric-storage-0\" (UID: \"abb599b9-db44-492a-bc73-6ff5a2c212d5\") " pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:10.631267 master-0 kubenswrapper[4790]: I1011 10:54:10.631174 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l7rcp"]
Oct 11 10:54:10.645538 master-0 kubenswrapper[4790]: W1011 10:54:10.645260 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfe51cce_787f_4883_8b8f_f1ed50caa3d3.slice/crio-905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf WatchSource:0}: Error finding container 905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf: Status 404 returned error can't find the container with id 905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf
Oct 11 10:54:10.703571 master-1 kubenswrapper[4771]: I1011 10:54:10.703446 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:10.713923 master-0 kubenswrapper[4790]: I1011 10:54:10.713784 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7vsxp"]
Oct 11 10:54:10.748586 master-0 kubenswrapper[4790]: W1011 10:54:10.748521 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd58f3b14_e8da_4046_afb1_c376a65ef16e.slice/crio-bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a WatchSource:0}: Error finding container bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a: Status 404 returned error can't find the container with id bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a
Oct 11 10:54:10.899839 master-1 kubenswrapper[4771]: I1011 10:54:10.899050 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="11c30d1f-16d5-4106-bfae-e6c2d2f64f13" containerName="prometheus" probeResult="failure" output="Get \"http://10.129.0.112:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:54:11.381458 master-0 kubenswrapper[4790]: I1011 10:54:11.381368 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7vsxp" event={"ID":"d58f3b14-e8da-4046-afb1-c376a65ef16e","Type":"ContainerStarted","Data":"6c3235d5d6bf2dc75b5b651ccfac846d26bd4957ea4e687fad602fa916728d6b"}
Oct 11 10:54:11.381458 master-0 kubenswrapper[4790]: I1011 10:54:11.381430 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7vsxp" event={"ID":"d58f3b14-e8da-4046-afb1-c376a65ef16e","Type":"ContainerStarted","Data":"bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a"}
Oct 11 10:54:11.384550 master-0 kubenswrapper[4790]: I1011 10:54:11.384468 4790 generic.go:334] "Generic (PLEG): container finished" podID="d40b588a-5009-41c8-b8b0-b417de6693ac" containerID="07ec7b09db6fb5294fadc7bd8337f6b789e9b95a2303619665336b8735fa4bfe" exitCode=0
Oct 11 10:54:11.384550 master-0 kubenswrapper[4790]: I1011 10:54:11.384510 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jmmst" event={"ID":"d40b588a-5009-41c8-b8b0-b417de6693ac","Type":"ContainerDied","Data":"07ec7b09db6fb5294fadc7bd8337f6b789e9b95a2303619665336b8735fa4bfe"}
Oct 11 10:54:11.384550 master-0 kubenswrapper[4790]: I1011 10:54:11.384543 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jmmst" event={"ID":"d40b588a-5009-41c8-b8b0-b417de6693ac","Type":"ContainerStarted","Data":"21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997"}
Oct 11 10:54:11.387077 master-0 kubenswrapper[4790]: I1011 10:54:11.387019 4790 generic.go:334] "Generic (PLEG): container finished" podID="dfe51cce-787f-4883-8b8f-f1ed50caa3d3" containerID="4a77a0e25a1bbd76eb350e88d6052fb5f4963ac556fb275beeaf9d30c06320df" exitCode=0
Oct 11 10:54:11.387361 master-0 kubenswrapper[4790]: I1011 10:54:11.387104 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l7rcp" event={"ID":"dfe51cce-787f-4883-8b8f-f1ed50caa3d3","Type":"ContainerDied","Data":"4a77a0e25a1bbd76eb350e88d6052fb5f4963ac556fb275beeaf9d30c06320df"}
Oct 11 10:54:11.387361 master-0 kubenswrapper[4790]: I1011 10:54:11.387133 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l7rcp" event={"ID":"dfe51cce-787f-4883-8b8f-f1ed50caa3d3","Type":"ContainerStarted","Data":"905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf"}
Oct 11 10:54:11.405810 master-0 kubenswrapper[4790]: I1011 10:54:11.398893 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"ad43c9b41681c6e6739e54f5322e411f22ca601d6079dba8191993837f1dc376"}
Oct 11 10:54:11.405810 master-0 kubenswrapper[4790]: I1011 10:54:11.398978 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9fc8ec6-f4bb-4b20-a262-f416bb5d2e27","Type":"ContainerStarted","Data":"b29805ffd3e058a9848dcd6fafa8543718c5c95eef835aa813c84a10a47e1889"}
Oct 11 10:54:11.445987 master-1 kubenswrapper[4771]: I1011 10:54:11.441529 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 11 10:54:11.648552 master-0 kubenswrapper[4790]: I1011 10:54:11.648449 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-7vsxp" podStartSLOduration=3.648420251 podStartE2EDuration="3.648420251s" podCreationTimestamp="2025-10-11 10:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:11.622022649 +0000 UTC m=+928.176482941" watchObservedRunningTime="2025-10-11 10:54:11.648420251 +0000 UTC m=+928.202880543"
Oct 11 10:54:11.809122 master-1 kubenswrapper[4771]: I1011 10:54:11.809076 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerStarted","Data":"95d32491bd30dbc01dd3b8c79622bf69717fc1d9bfc98c189bf88a06df6c01e7"} Oct 11 10:54:11.886108 master-0 kubenswrapper[4790]: I1011 10:54:11.886008 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.555880263 podStartE2EDuration="30.885982186s" podCreationTimestamp="2025-10-11 10:53:41 +0000 UTC" firstStartedPulling="2025-10-11 10:54:00.714254151 +0000 UTC m=+917.268714473" lastFinishedPulling="2025-10-11 10:54:10.044356104 +0000 UTC m=+926.598816396" observedRunningTime="2025-10-11 10:54:11.881590833 +0000 UTC m=+928.436051165" watchObservedRunningTime="2025-10-11 10:54:11.885982186 +0000 UTC m=+928.440442478" Oct 11 10:54:12.409129 master-0 kubenswrapper[4790]: I1011 10:54:12.409042 4790 generic.go:334] "Generic (PLEG): container finished" podID="d58f3b14-e8da-4046-afb1-c376a65ef16e" containerID="6c3235d5d6bf2dc75b5b651ccfac846d26bd4957ea4e687fad602fa916728d6b" exitCode=0 Oct 11 10:54:12.410023 master-0 kubenswrapper[4790]: I1011 10:54:12.409130 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7vsxp" event={"ID":"d58f3b14-e8da-4046-afb1-c376a65ef16e","Type":"ContainerDied","Data":"6c3235d5d6bf2dc75b5b651ccfac846d26bd4957ea4e687fad602fa916728d6b"} Oct 11 10:54:12.787456 master-0 kubenswrapper[4790]: I1011 10:54:12.787408 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-l7rcp" Oct 11 10:54:12.872611 master-0 kubenswrapper[4790]: I1011 10:54:12.872508 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mx4m\" (UniqueName: \"kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m\") pod \"dfe51cce-787f-4883-8b8f-f1ed50caa3d3\" (UID: \"dfe51cce-787f-4883-8b8f-f1ed50caa3d3\") " Oct 11 10:54:12.875664 master-0 kubenswrapper[4790]: I1011 10:54:12.875591 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m" (OuterVolumeSpecName: "kube-api-access-7mx4m") pod "dfe51cce-787f-4883-8b8f-f1ed50caa3d3" (UID: "dfe51cce-787f-4883-8b8f-f1ed50caa3d3"). InnerVolumeSpecName "kube-api-access-7mx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:12.928982 master-0 kubenswrapper[4790]: I1011 10:54:12.928922 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jmmst" Oct 11 10:54:12.974648 master-0 kubenswrapper[4790]: I1011 10:54:12.974510 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mx4m\" (UniqueName: \"kubernetes.io/projected/dfe51cce-787f-4883-8b8f-f1ed50caa3d3-kube-api-access-7mx4m\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:13.075915 master-0 kubenswrapper[4790]: I1011 10:54:13.075819 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp54h\" (UniqueName: \"kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h\") pod \"d40b588a-5009-41c8-b8b0-b417de6693ac\" (UID: \"d40b588a-5009-41c8-b8b0-b417de6693ac\") " Oct 11 10:54:13.079057 master-0 kubenswrapper[4790]: I1011 10:54:13.079009 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h" (OuterVolumeSpecName: "kube-api-access-wp54h") pod "d40b588a-5009-41c8-b8b0-b417de6693ac" (UID: "d40b588a-5009-41c8-b8b0-b417de6693ac"). InnerVolumeSpecName "kube-api-access-wp54h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:13.178583 master-0 kubenswrapper[4790]: I1011 10:54:13.178468 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp54h\" (UniqueName: \"kubernetes.io/projected/d40b588a-5009-41c8-b8b0-b417de6693ac-kube-api-access-wp54h\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:13.421466 master-0 kubenswrapper[4790]: I1011 10:54:13.421346 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jmmst" event={"ID":"d40b588a-5009-41c8-b8b0-b417de6693ac","Type":"ContainerDied","Data":"21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997"} Oct 11 10:54:13.421466 master-0 kubenswrapper[4790]: I1011 10:54:13.421434 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ccb6e31874630690c4dcc47df17c20cab9612171ef0f3a7877de641d394997" Oct 11 10:54:13.421466 master-0 kubenswrapper[4790]: I1011 10:54:13.421433 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jmmst" Oct 11 10:54:13.423884 master-0 kubenswrapper[4790]: I1011 10:54:13.423797 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l7rcp" event={"ID":"dfe51cce-787f-4883-8b8f-f1ed50caa3d3","Type":"ContainerDied","Data":"905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf"} Oct 11 10:54:13.423984 master-0 kubenswrapper[4790]: I1011 10:54:13.423886 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-l7rcp" Oct 11 10:54:13.424121 master-0 kubenswrapper[4790]: I1011 10:54:13.423892 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905e6ab440134109d05c0c22661864d57e4a601ca8e1bc57224de6d60efefdcf" Oct 11 10:54:13.566323 master-2 kubenswrapper[4776]: I1011 10:54:13.566243 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:13.567977 master-2 kubenswrapper[4776]: I1011 10:54:13.567951 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572100 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572185 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572220 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572304 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572845 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:54:13.586340 master-2 kubenswrapper[4776]: I1011 10:54:13.584630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632828 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod 
\"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632957 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632985 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633013 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633055 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633082 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndm6h\" 
(UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734930 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735031 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735061 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.736145 master-2 kubenswrapper[4776]: I1011 10:54:13.736118 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.736726 master-2 kubenswrapper[4776]: I1011 10:54:13.736702 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.737517 master-2 kubenswrapper[4776]: I1011 10:54:13.737370 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.738198 master-2 kubenswrapper[4776]: I1011 10:54:13.738146 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.739114 master-2 kubenswrapper[4776]: I1011 10:54:13.739052 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.765421 master-2 kubenswrapper[4776]: I1011 10:54:13.765362 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.905685 master-0 kubenswrapper[4790]: I1011 10:54:13.905608 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7vsxp" Oct 11 10:54:13.906307 master-2 kubenswrapper[4776]: I1011 10:54:13.906258 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.999027 master-0 kubenswrapper[4790]: I1011 10:54:13.998846 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tfdt\" (UniqueName: \"kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt\") pod \"d58f3b14-e8da-4046-afb1-c376a65ef16e\" (UID: \"d58f3b14-e8da-4046-afb1-c376a65ef16e\") " Oct 11 10:54:14.002053 master-0 kubenswrapper[4790]: I1011 10:54:14.001986 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt" (OuterVolumeSpecName: "kube-api-access-7tfdt") pod "d58f3b14-e8da-4046-afb1-c376a65ef16e" (UID: "d58f3b14-e8da-4046-afb1-c376a65ef16e"). InnerVolumeSpecName "kube-api-access-7tfdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:14.104737 master-0 kubenswrapper[4790]: I1011 10:54:14.104225 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tfdt\" (UniqueName: \"kubernetes.io/projected/d58f3b14-e8da-4046-afb1-c376a65ef16e-kube-api-access-7tfdt\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:14.432799 master-0 kubenswrapper[4790]: I1011 10:54:14.432682 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7vsxp" event={"ID":"d58f3b14-e8da-4046-afb1-c376a65ef16e","Type":"ContainerDied","Data":"bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a"} Oct 11 10:54:14.432799 master-0 kubenswrapper[4790]: I1011 10:54:14.432795 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7f6808c9467f778d59d0800baa52d59254b1f21ee42f31e32aab7d481b8f7a" Oct 11 10:54:14.433718 master-0 kubenswrapper[4790]: I1011 10:54:14.433671 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-7vsxp" Oct 11 10:54:14.773937 master-2 kubenswrapper[4776]: I1011 10:54:14.773899 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:14.836604 master-1 kubenswrapper[4771]: I1011 10:54:14.836519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerStarted","Data":"5e072db00dc200fff13cfbd416cac5a518950f3f680e875e683c4e6353c36aa5"} Oct 11 10:54:15.091920 master-2 kubenswrapper[4776]: I1011 10:54:15.091859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} Oct 11 10:54:15.091920 master-2 kubenswrapper[4776]: I1011 10:54:15.091923 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"2ec4ba703e7155177399cd4035e4698136a2123a7fc01c195d9d7ff62132b49e"} Oct 11 10:54:15.093109 master-2 kubenswrapper[4776]: I1011 10:54:15.093053 4776 generic.go:334] "Generic (PLEG): container finished" podID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerID="f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826" exitCode=0 Oct 11 10:54:15.093184 master-2 kubenswrapper[4776]: I1011 10:54:15.093133 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerDied","Data":"f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826"} Oct 11 10:54:15.094419 master-2 kubenswrapper[4776]: I1011 10:54:15.094390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" 
event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerStarted","Data":"9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e"} Oct 11 10:54:15.206802 master-2 kubenswrapper[4776]: I1011 10:54:15.206720 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8dnfj" podStartSLOduration=1.7169767500000002 podStartE2EDuration="6.206703141s" podCreationTimestamp="2025-10-11 10:54:09 +0000 UTC" firstStartedPulling="2025-10-11 10:54:09.893648201 +0000 UTC m=+1684.678074900" lastFinishedPulling="2025-10-11 10:54:14.383374582 +0000 UTC m=+1689.167801291" observedRunningTime="2025-10-11 10:54:15.201201662 +0000 UTC m=+1689.985628371" watchObservedRunningTime="2025-10-11 10:54:15.206703141 +0000 UTC m=+1689.991129850" Oct 11 10:54:16.102503 master-2 kubenswrapper[4776]: I1011 10:54:16.102359 4776 generic.go:334] "Generic (PLEG): container finished" podID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b" exitCode=0 Oct 11 10:54:16.102503 master-2 kubenswrapper[4776]: I1011 10:54:16.102419 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} Oct 11 10:54:17.020489 master-2 kubenswrapper[4776]: I1011 10:54:17.020434 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:54:17.102690 master-2 kubenswrapper[4776]: I1011 10:54:17.102605 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102723 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102824 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102874 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.107520 master-2 kubenswrapper[4776]: I1011 10:54:17.107467 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.107778 master-2 kubenswrapper[4776]: I1011 10:54:17.107739 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz" (OuterVolumeSpecName: "kube-api-access-jhdjz") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "kube-api-access-jhdjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:17.114729 master-2 kubenswrapper[4776]: I1011 10:54:17.114626 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"} Oct 11 10:54:17.114958 master-2 kubenswrapper[4776]: I1011 10:54:17.114932 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerDied","Data":"0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88"} Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116663 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88" Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116742 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:54:17.123553 master-2 kubenswrapper[4776]: I1011 10:54:17.123468 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.143764 master-2 kubenswrapper[4776]: I1011 10:54:17.143617 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data" (OuterVolumeSpecName: "config-data") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.144433 master-2 kubenswrapper[4776]: I1011 10:54:17.144331 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" podStartSLOduration=4.14431242 podStartE2EDuration="4.14431242s" podCreationTimestamp="2025-10-11 10:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:17.142127921 +0000 UTC m=+1691.926554630" watchObservedRunningTime="2025-10-11 10:54:17.14431242 +0000 UTC m=+1691.928739149" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204551 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204592 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdjz\" (UniqueName: 
\"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204604 4776 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204612 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:17.704596 master-2 kubenswrapper[4776]: I1011 10:54:17.704536 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"]
Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: I1011 10:54:17.739006 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"]
Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: E1011 10:54:17.739319 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync"
Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: I1011 10:54:17.739331 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync"
Oct 11 10:54:17.739795 master-2 kubenswrapper[4776]: I1011 10:54:17.739558 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync"
Oct 11 10:54:17.740755 master-2 kubenswrapper[4776]: I1011 10:54:17.740712 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.763945 master-2 kubenswrapper[4776]: I1011 10:54:17.763916 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"]
Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826010 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826074 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826155 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826685 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826760 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.928883 master-2 kubenswrapper[4776]: I1011 10:54:17.928817 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.928883 master-2 kubenswrapper[4776]: I1011 10:54:17.928881 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.928921 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929025 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929108 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.929917 master-2 kubenswrapper[4776]: I1011 10:54:17.929889 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.930060 master-2 kubenswrapper[4776]: I1011 10:54:17.930019 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.930108 master-2 kubenswrapper[4776]: I1011 10:54:17.930030 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.930225 master-2 kubenswrapper[4776]: I1011 10:54:17.930188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.930263 master-2 kubenswrapper[4776]: I1011 10:54:17.930234 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:17.965085 master-2 kubenswrapper[4776]: I1011 10:54:17.965039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:18.064694 master-2 kubenswrapper[4776]: I1011 10:54:18.064587 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:18.546629 master-2 kubenswrapper[4776]: I1011 10:54:18.546579 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"]
Oct 11 10:54:18.555038 master-2 kubenswrapper[4776]: W1011 10:54:18.549933 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7da38dfb_a995_4843_a05a_351e5dc557ae.slice/crio-585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86 WatchSource:0}: Error finding container 585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86: Status 404 returned error can't find the container with id 585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86
Oct 11 10:54:19.134418 master-2 kubenswrapper[4776]: I1011 10:54:19.134309 4776 generic.go:334] "Generic (PLEG): container finished" podID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" exitCode=0
Oct 11 10:54:19.134418 master-2 kubenswrapper[4776]: I1011 10:54:19.134382 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe"}
Oct 11 10:54:19.135018 master-2 kubenswrapper[4776]: I1011 10:54:19.134418 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerStarted","Data":"585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86"}
Oct 11 10:54:19.136302 master-2 kubenswrapper[4776]: I1011 10:54:19.136264 4776 generic.go:334] "Generic (PLEG): container finished" podID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerID="9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e" exitCode=0
Oct 11 10:54:19.136355 master-2 kubenswrapper[4776]: I1011 10:54:19.136330 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerDied","Data":"9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e"}
Oct 11 10:54:19.136474 master-2 kubenswrapper[4776]: I1011 10:54:19.136441 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns" containerID="cri-o://04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" gracePeriod=10
Oct 11 10:54:19.967106 master-2 kubenswrapper[4776]: I1011 10:54:19.967065 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd"
Oct 11 10:54:20.063527 master-2 kubenswrapper[4776]: I1011 10:54:20.063452 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063541 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063577 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063643 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.063857 master-2 kubenswrapper[4776]: I1011 10:54:20.063777 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.063857 master-2 kubenswrapper[4776]: I1011 10:54:20.063808 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") "
Oct 11 10:54:20.075526 master-2 kubenswrapper[4776]: I1011 10:54:20.075446 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h" (OuterVolumeSpecName: "kube-api-access-ndm6h") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "kube-api-access-ndm6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:20.101808 master-2 kubenswrapper[4776]: I1011 10:54:20.101709 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config" (OuterVolumeSpecName: "config") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:20.102210 master-2 kubenswrapper[4776]: I1011 10:54:20.102160 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:20.105555 master-2 kubenswrapper[4776]: I1011 10:54:20.105518 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:20.108082 master-2 kubenswrapper[4776]: I1011 10:54:20.108027 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:20.108628 master-2 kubenswrapper[4776]: I1011 10:54:20.108595 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:54:20.144185 master-2 kubenswrapper[4776]: I1011 10:54:20.144017 4776 generic.go:334] "Generic (PLEG): container finished" podID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" exitCode=0
Oct 11 10:54:20.144185 master-2 kubenswrapper[4776]: I1011 10:54:20.144118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"}
Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144238 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd"
Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144328 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"2ec4ba703e7155177399cd4035e4698136a2123a7fc01c195d9d7ff62132b49e"}
Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144353 4776 scope.go:117] "RemoveContainer" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"
Oct 11 10:54:20.147122 master-2 kubenswrapper[4776]: I1011 10:54:20.147076 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerStarted","Data":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"}
Oct 11 10:54:20.147185 master-2 kubenswrapper[4776]: I1011 10:54:20.147154 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs"
Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165253 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165290 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165299 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165311 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.165635 master-2 kubenswrapper[4776]: I1011 10:54:20.165325 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.165635 master-2 kubenswrapper[4776]: I1011 10:54:20.165339 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:20.186783 master-2 kubenswrapper[4776]: I1011 10:54:20.186596 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" podStartSLOduration=3.186579808 podStartE2EDuration="3.186579808s" podCreationTimestamp="2025-10-11 10:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:20.181792468 +0000 UTC m=+1694.966219187" watchObservedRunningTime="2025-10-11 10:54:20.186579808 +0000 UTC m=+1694.971006517"
Oct 11 10:54:20.214969 master-2 kubenswrapper[4776]: I1011 10:54:20.213509 4776 scope.go:117] "RemoveContainer" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.236045 4776 scope.go:117] "RemoveContainer" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: E1011 10:54:20.237003 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": container with ID starting with 04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced not found: ID does not exist" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237030 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"} err="failed to get container status \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": rpc error: code = NotFound desc = could not find container \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": container with ID starting with 04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced not found: ID does not exist"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237057 4776 scope.go:117] "RemoveContainer" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: E1011 10:54:20.237330 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": container with ID starting with 10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b not found: ID does not exist" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237368 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} err="failed to get container status \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": rpc error: code = NotFound desc = could not find container \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": container with ID starting with 10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b not found: ID does not exist"
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.239048 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"]
Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.245882 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"]
Oct 11 10:54:20.961016 master-2 kubenswrapper[4776]: I1011 10:54:20.960824 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:21.084313 master-2 kubenswrapper[4776]: I1011 10:54:21.084229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") "
Oct 11 10:54:21.087088 master-2 kubenswrapper[4776]: I1011 10:54:21.087049 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") "
Oct 11 10:54:21.087451 master-2 kubenswrapper[4776]: I1011 10:54:21.087422 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") "
Oct 11 10:54:21.090025 master-2 kubenswrapper[4776]: I1011 10:54:21.089981 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk" (OuterVolumeSpecName: "kube-api-access-4nqtk") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "kube-api-access-4nqtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:21.108732 master-2 kubenswrapper[4776]: I1011 10:54:21.108503 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:21.129705 master-2 kubenswrapper[4776]: I1011 10:54:21.129636 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data" (OuterVolumeSpecName: "config-data") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:21.156309 master-2 kubenswrapper[4776]: I1011 10:54:21.156260 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj"
Oct 11 10:54:21.157017 master-2 kubenswrapper[4776]: I1011 10:54:21.156974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerDied","Data":"0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb"}
Oct 11 10:54:21.157017 master-2 kubenswrapper[4776]: I1011 10:54:21.157014 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb"
Oct 11 10:54:21.192878 master-2 kubenswrapper[4776]: I1011 10:54:21.192826 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:21.192878 master-2 kubenswrapper[4776]: I1011 10:54:21.192881 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:21.193224 master-2 kubenswrapper[4776]: I1011 10:54:21.192895 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:21.855042 master-2 kubenswrapper[4776]: I1011 10:54:21.854907 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"]
Oct 11 10:54:21.922814 master-2 kubenswrapper[4776]: I1011 10:54:21.922730 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jdggk"]
Oct 11 10:54:21.923051 master-2 kubenswrapper[4776]: E1011 10:54:21.923024 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync"
Oct 11 10:54:21.923051 master-2 kubenswrapper[4776]: I1011 10:54:21.923040 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync"
Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: E1011 10:54:21.923055 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns"
Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: I1011 10:54:21.923062 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns"
Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: E1011 10:54:21.923074 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="init"
Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: I1011 10:54:21.923079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="init"
Oct 11 10:54:21.923323 master-2 kubenswrapper[4776]: I1011 10:54:21.923254 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns"
Oct 11 10:54:21.923323 master-2 kubenswrapper[4776]: I1011 10:54:21.923266 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync"
Oct 11 10:54:21.923876 master-2 kubenswrapper[4776]: I1011 10:54:21.923848 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:21.927147 master-2 kubenswrapper[4776]: I1011 10:54:21.927118 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 11 10:54:21.927317 master-2 kubenswrapper[4776]: I1011 10:54:21.927299 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 11 10:54:21.927954 master-2 kubenswrapper[4776]: I1011 10:54:21.927927 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 11 10:54:21.991142 master-2 kubenswrapper[4776]: I1011 10:54:21.991086 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"]
Oct 11 10:54:21.992590 master-2 kubenswrapper[4776]: I1011 10:54:21.992560 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.006634 master-2 kubenswrapper[4776]: I1011 10:54:22.006580 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdggk"]
Oct 11 10:54:22.007227 master-2 kubenswrapper[4776]: I1011 10:54:22.007190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007294 master-2 kubenswrapper[4776]: I1011 10:54:22.007240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.007294 master-2 kubenswrapper[4776]: I1011 10:54:22.007263 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007381 master-2 kubenswrapper[4776]: I1011 10:54:22.007296 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007470 master-2 kubenswrapper[4776]: I1011 10:54:22.007427 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007521 master-2 kubenswrapper[4776]: I1011 10:54:22.007477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.007521 master-2 kubenswrapper[4776]: I1011 10:54:22.007513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.007597 master-2 kubenswrapper[4776]: I1011 10:54:22.007569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007660 master-2 kubenswrapper[4776]: I1011 10:54:22.007638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.007770 master-2 kubenswrapper[4776]: I1011 10:54:22.007748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.007826 master-2 kubenswrapper[4776]: I1011 10:54:22.007804 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.007921 master-2 kubenswrapper[4776]: I1011 10:54:22.007903 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.017035 master-2 kubenswrapper[4776]: I1011 10:54:22.016977 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"]
Oct 11 10:54:22.067978 master-2 kubenswrapper[4776]: I1011 10:54:22.067904 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" path="/var/lib/kubelet/pods/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb/volumes"
Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109552 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109581 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109603 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109686 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d"
Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109798 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110472 master-2 kubenswrapper[4776]: I1011 10:54:22.110379 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.111036 master-2 kubenswrapper[4776]: I1011 10:54:22.110973 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.111767 master-2 kubenswrapper[4776]: I1011 10:54:22.111740 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.112220 master-2 kubenswrapper[4776]: I1011 10:54:22.112143 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.113220 master-2 kubenswrapper[4776]: I1011 10:54:22.113182 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.113347 master-2 kubenswrapper[4776]: I1011 10:54:22.113309 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.113517 master-2 kubenswrapper[4776]: I1011 10:54:22.113486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.113648 master-2 kubenswrapper[4776]: I1011 10:54:22.113614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.114068 master-2 kubenswrapper[4776]: I1011 10:54:22.114017 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.114721 master-2 kubenswrapper[4776]: I1011 10:54:22.114690 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.169847 master-2 kubenswrapper[4776]: I1011 10:54:22.169753 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" containerID="cri-o://bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" gracePeriod=10 Oct 11 10:54:22.236532 master-2 kubenswrapper[4776]: I1011 10:54:22.236389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod 
\"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.240594 master-2 kubenswrapper[4776]: I1011 10:54:22.237273 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.240594 master-2 kubenswrapper[4776]: I1011 10:54:22.237978 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.316921 master-2 kubenswrapper[4776]: I1011 10:54:22.316227 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.410926 master-0 kubenswrapper[4790]: I1011 10:54:22.410864 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-gvzlv"] Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: E1011 10:54:22.411195 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe51cce-787f-4883-8b8f-f1ed50caa3d3" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411210 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe51cce-787f-4883-8b8f-f1ed50caa3d3" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: E1011 10:54:22.411224 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40b588a-5009-41c8-b8b0-b417de6693ac" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411231 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40b588a-5009-41c8-b8b0-b417de6693ac" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 
kubenswrapper[4790]: E1011 10:54:22.411250 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58f3b14-e8da-4046-afb1-c376a65ef16e" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411257 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58f3b14-e8da-4046-afb1-c376a65ef16e" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411404 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe51cce-787f-4883-8b8f-f1ed50caa3d3" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411426 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40b588a-5009-41c8-b8b0-b417de6693ac" containerName="mariadb-database-create" Oct 11 10:54:22.411678 master-0 kubenswrapper[4790]: I1011 10:54:22.411435 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58f3b14-e8da-4046-afb1-c376a65ef16e" containerName="mariadb-database-create" Oct 11 10:54:22.412083 master-0 kubenswrapper[4790]: I1011 10:54:22.412052 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:22.455798 master-0 kubenswrapper[4790]: I1011 10:54:22.453093 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-gvzlv"] Oct 11 10:54:22.472714 master-1 kubenswrapper[4771]: I1011 10:54:22.472345 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:22.474935 master-1 kubenswrapper[4771]: I1011 10:54:22.474895 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:22.477792 master-1 kubenswrapper[4771]: I1011 10:54:22.477404 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:54:22.477792 master-1 kubenswrapper[4771]: I1011 10:54:22.477693 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:54:22.504343 master-1 kubenswrapper[4771]: I1011 10:54:22.500383 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:22.538823 master-1 kubenswrapper[4771]: I1011 10:54:22.538673 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.538823 master-1 kubenswrapper[4771]: I1011 10:54:22.538744 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.538823 master-1 kubenswrapper[4771]: I1011 10:54:22.538786 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.539147 master-1 kubenswrapper[4771]: I1011 10:54:22.538859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.539147 master-1 kubenswrapper[4771]: I1011 10:54:22.538887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76sj2\" (UniqueName: \"kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.539147 master-1 kubenswrapper[4771]: I1011 10:54:22.539121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.539147 master-1 kubenswrapper[4771]: I1011 10:54:22.539144 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.556262 master-0 kubenswrapper[4790]: I1011 10:54:22.556210 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hcp\" (UniqueName: \"kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp\") pod \"ironic-db-create-gvzlv\" (UID: \"2137512f-c759-4935-944d-48248c99c2ec\") " pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:22.559699 master-2 kubenswrapper[4776]: I1011 10:54:22.557372 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.580872 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.582135 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.585649 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.585785 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 10:54:22.602807 master-2 kubenswrapper[4776]: I1011 10:54:22.602761 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:22.606156 master-1 kubenswrapper[4771]: I1011 10:54:22.606093 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:54:22.607714 master-1 kubenswrapper[4771]: I1011 10:54:22.607608 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.612544 master-1 kubenswrapper[4771]: I1011 10:54:22.612348 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:54:22.623146 master-1 kubenswrapper[4771]: I1011 10:54:22.623072 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:54:22.634203 master-2 kubenswrapper[4776]: I1011 10:54:22.634079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634452 master-2 kubenswrapper[4776]: I1011 10:54:22.634433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634569 master-2 kubenswrapper[4776]: I1011 10:54:22.634553 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634736 master-2 kubenswrapper[4776]: I1011 10:54:22.634718 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " 
pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634856 master-2 kubenswrapper[4776]: I1011 10:54:22.634839 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.646767 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.646830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76sj2\" (UniqueName: \"kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.646910 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.646950 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 
10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.646977 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647010 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647061 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647117 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.647490 master-1 
kubenswrapper[4771]: I1011 10:54:22.647141 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647210 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcx9\" (UniqueName: \"kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.647490 master-1 kubenswrapper[4771]: I1011 10:54:22.647239 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.652783 master-1 kubenswrapper[4771]: I1011 10:54:22.652695 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.653223 master-1 kubenswrapper[4771]: I1011 10:54:22.653015 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.653387 master-1 kubenswrapper[4771]: I1011 10:54:22.653328 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.654547 master-1 kubenswrapper[4771]: I1011 10:54:22.654334 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.655730 master-1 kubenswrapper[4771]: I1011 10:54:22.655661 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.655819 master-1 kubenswrapper[4771]: I1011 10:54:22.655721 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.657868 master-0 kubenswrapper[4790]: I1011 10:54:22.657803 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hcp\" (UniqueName: \"kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp\") pod \"ironic-db-create-gvzlv\" 
(UID: \"2137512f-c759-4935-944d-48248c99c2ec\") " pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:22.675076 master-1 kubenswrapper[4771]: I1011 10:54:22.674983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76sj2\" (UniqueName: \"kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2\") pod \"ceilometer-0\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " pod="openstack/ceilometer-0" Oct 11 10:54:22.687678 master-0 kubenswrapper[4790]: I1011 10:54:22.687516 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hcp\" (UniqueName: \"kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp\") pod \"ironic-db-create-gvzlv\" (UID: \"2137512f-c759-4935-944d-48248c99c2ec\") " pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:22.737054 master-2 kubenswrapper[4776]: I1011 10:54:22.736983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.737054 master-2 kubenswrapper[4776]: I1011 10:54:22.737033 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737147 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " 
pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737172 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737197 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.738601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.741357 master-2 kubenswrapper[4776]: I1011 10:54:22.741308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.741382 master-0 kubenswrapper[4790]: I1011 10:54:22.741324 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:22.741988 master-2 kubenswrapper[4776]: I1011 10:54:22.741584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.745359 master-2 kubenswrapper[4776]: I1011 10:54:22.745337 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.749225 master-1 kubenswrapper[4771]: I1011 10:54:22.749160 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.749570 master-1 kubenswrapper[4771]: I1011 10:54:22.749252 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.749570 master-1 kubenswrapper[4771]: I1011 10:54:22.749295 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " 
pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.749570 master-1 kubenswrapper[4771]: I1011 10:54:22.749322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.749570 master-1 kubenswrapper[4771]: I1011 10:54:22.749372 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtcx9\" (UniqueName: \"kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.749570 master-1 kubenswrapper[4771]: I1011 10:54:22.749393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.750741 master-1 kubenswrapper[4771]: I1011 10:54:22.750328 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.750741 master-1 kubenswrapper[4771]: I1011 10:54:22.750520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " 
pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.750741 master-1 kubenswrapper[4771]: I1011 10:54:22.750683 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.752332 master-1 kubenswrapper[4771]: I1011 10:54:22.752292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.753131 master-1 kubenswrapper[4771]: I1011 10:54:22.753094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.766136 master-2 kubenswrapper[4776]: I1011 10:54:22.766102 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.772820 master-1 kubenswrapper[4771]: I1011 10:54:22.772750 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtcx9\" (UniqueName: \"kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9\") pod \"dnsmasq-dns-595686b98f-blmgp\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " 
pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.789374 master-1 kubenswrapper[4771]: I1011 10:54:22.789315 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:22.907248 master-2 kubenswrapper[4776]: I1011 10:54:22.907196 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.930464 master-1 kubenswrapper[4771]: I1011 10:54:22.930410 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:22.934473 master-2 kubenswrapper[4776]: I1011 10:54:22.934431 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:23.101867 master-2 kubenswrapper[4776]: I1011 10:54:23.101572 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142441 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142514 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod 
\"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142695 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142741 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142771 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.162110 master-2 kubenswrapper[4776]: I1011 10:54:23.152049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr" (OuterVolumeSpecName: "kube-api-access-8llfr") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "kube-api-access-8llfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:23.179336 master-0 kubenswrapper[4790]: I1011 10:54:23.179289 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-gvzlv"] Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.184018 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188016 4776 generic.go:334] "Generic (PLEG): container finished" podID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" exitCode=0 Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"} Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188180 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86"} Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188200 4776 scope.go:117] "RemoveContainer" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188378 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:23.197487 master-2 kubenswrapper[4776]: I1011 10:54:23.196373 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:23.199320 master-2 kubenswrapper[4776]: I1011 10:54:23.199272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerStarted","Data":"1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260"} Oct 11 10:54:23.199379 master-2 kubenswrapper[4776]: I1011 10:54:23.199323 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerStarted","Data":"475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0"} Oct 11 10:54:23.213095 master-2 kubenswrapper[4776]: I1011 10:54:23.213027 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.214837 master-2 kubenswrapper[4776]: I1011 10:54:23.214796 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config" (OuterVolumeSpecName: "config") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.223286 master-2 kubenswrapper[4776]: I1011 10:54:23.223009 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.237150 master-2 kubenswrapper[4776]: I1011 10:54:23.237061 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246041 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246074 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246085 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246095 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246104 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246113 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.267882 master-2 kubenswrapper[4776]: I1011 10:54:23.267806 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jdggk" podStartSLOduration=2.267785561 podStartE2EDuration="2.267785561s" podCreationTimestamp="2025-10-11 10:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:23.266109975 +0000 UTC m=+1698.050536694" watchObservedRunningTime="2025-10-11 10:54:23.267785561 +0000 UTC m=+1698.052212270" Oct 11 10:54:23.271416 master-1 kubenswrapper[4771]: I1011 10:54:23.269557 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:23.279126 master-1 kubenswrapper[4771]: W1011 10:54:23.279075 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6508f17e_afc7_44dd_89b4_2efa8a124b12.slice/crio-7c53a7f6f217d02a6155b61317bdcecef01a919dcbe718b5d7a4a4096ceec2ae WatchSource:0}: Error finding container 7c53a7f6f217d02a6155b61317bdcecef01a919dcbe718b5d7a4a4096ceec2ae: Status 404 returned error can't find the container with id 7c53a7f6f217d02a6155b61317bdcecef01a919dcbe718b5d7a4a4096ceec2ae Oct 11 
10:54:23.352500 master-2 kubenswrapper[4776]: I1011 10:54:23.352444 4776 scope.go:117] "RemoveContainer" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.366035 master-1 kubenswrapper[4771]: I1011 10:54:23.365928 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:54:23.373775 master-1 kubenswrapper[4771]: W1011 10:54:23.373540 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda50b2fec_a3b6_4245_9080_5987b411b581.slice/crio-df3edeb105ed637b9d7fa0933dc5cae9f70ee8feff1cdbfb3585b9bc6889a72c WatchSource:0}: Error finding container df3edeb105ed637b9d7fa0933dc5cae9f70ee8feff1cdbfb3585b9bc6889a72c: Status 404 returned error can't find the container with id df3edeb105ed637b9d7fa0933dc5cae9f70ee8feff1cdbfb3585b9bc6889a72c Oct 11 10:54:23.390663 master-2 kubenswrapper[4776]: I1011 10:54:23.390621 4776 scope.go:117] "RemoveContainer" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.390962 master-2 kubenswrapper[4776]: E1011 10:54:23.390928 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": container with ID starting with bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7 not found: ID does not exist" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.391035 master-2 kubenswrapper[4776]: I1011 10:54:23.390978 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"} err="failed to get container status \"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": rpc error: code = NotFound desc = could not find container 
\"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": container with ID starting with bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7 not found: ID does not exist" Oct 11 10:54:23.391035 master-2 kubenswrapper[4776]: I1011 10:54:23.391001 4776 scope.go:117] "RemoveContainer" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.391250 master-2 kubenswrapper[4776]: E1011 10:54:23.391218 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": container with ID starting with 456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe not found: ID does not exist" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.391250 master-2 kubenswrapper[4776]: I1011 10:54:23.391238 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe"} err="failed to get container status \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": rpc error: code = NotFound desc = could not find container \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": container with ID starting with 456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe not found: ID does not exist" Oct 11 10:54:23.454050 master-2 kubenswrapper[4776]: I1011 10:54:23.451189 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:23.522408 master-0 kubenswrapper[4790]: I1011 10:54:23.522305 4790 generic.go:334] "Generic (PLEG): container finished" podID="2137512f-c759-4935-944d-48248c99c2ec" containerID="fa9c7f461b0e315bcd532cda39de483b7f3baaed2714bed160ee9f75fc0f43db" exitCode=0 Oct 11 10:54:23.522408 master-0 kubenswrapper[4790]: I1011 10:54:23.522391 4790 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-gvzlv" event={"ID":"2137512f-c759-4935-944d-48248c99c2ec","Type":"ContainerDied","Data":"fa9c7f461b0e315bcd532cda39de483b7f3baaed2714bed160ee9f75fc0f43db"} Oct 11 10:54:23.522408 master-0 kubenswrapper[4790]: I1011 10:54:23.522427 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-gvzlv" event={"ID":"2137512f-c759-4935-944d-48248c99c2ec","Type":"ContainerStarted","Data":"2c4922c1ad065c1a799c822a668072dcac70cccb7bbd31842edf52a4efb72f91"} Oct 11 10:54:23.618436 master-2 kubenswrapper[4776]: I1011 10:54:23.617369 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:23.626136 master-2 kubenswrapper[4776]: I1011 10:54:23.626091 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:23.916855 master-1 kubenswrapper[4771]: I1011 10:54:23.916778 4771 generic.go:334] "Generic (PLEG): container finished" podID="abb599b9-db44-492a-bc73-6ff5a2c212d5" containerID="5e072db00dc200fff13cfbd416cac5a518950f3f680e875e683c4e6353c36aa5" exitCode=0 Oct 11 10:54:23.917606 master-1 kubenswrapper[4771]: I1011 10:54:23.916902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerDied","Data":"5e072db00dc200fff13cfbd416cac5a518950f3f680e875e683c4e6353c36aa5"} Oct 11 10:54:23.918315 master-1 kubenswrapper[4771]: I1011 10:54:23.918264 4771 generic.go:334] "Generic (PLEG): container finished" podID="a50b2fec-a3b6-4245-9080-5987b411b581" containerID="314a76b2857d795a4f3ebe7e8b09e8abca5d105e5ba862e3833d60a9a90b7cc3" exitCode=0 Oct 11 10:54:23.918430 master-1 kubenswrapper[4771]: I1011 10:54:23.918313 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-blmgp" 
event={"ID":"a50b2fec-a3b6-4245-9080-5987b411b581","Type":"ContainerDied","Data":"314a76b2857d795a4f3ebe7e8b09e8abca5d105e5ba862e3833d60a9a90b7cc3"} Oct 11 10:54:23.918430 master-1 kubenswrapper[4771]: I1011 10:54:23.918381 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-blmgp" event={"ID":"a50b2fec-a3b6-4245-9080-5987b411b581","Type":"ContainerStarted","Data":"df3edeb105ed637b9d7fa0933dc5cae9f70ee8feff1cdbfb3585b9bc6889a72c"} Oct 11 10:54:23.924112 master-1 kubenswrapper[4771]: I1011 10:54:23.923761 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerStarted","Data":"7c53a7f6f217d02a6155b61317bdcecef01a919dcbe718b5d7a4a4096ceec2ae"} Oct 11 10:54:24.073494 master-2 kubenswrapper[4776]: I1011 10:54:24.072453 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" path="/var/lib/kubelet/pods/7da38dfb-a995-4843-a05a-351e5dc557ae/volumes" Oct 11 10:54:24.210489 master-2 kubenswrapper[4776]: I1011 10:54:24.210452 4776 generic.go:334] "Generic (PLEG): container finished" podID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerID="f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c" exitCode=0 Oct 11 10:54:24.211610 master-2 kubenswrapper[4776]: I1011 10:54:24.210553 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerDied","Data":"f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c"} Oct 11 10:54:24.211660 master-2 kubenswrapper[4776]: I1011 10:54:24.211627 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerStarted","Data":"9932a4fcd14aa933b55724fba2f6169f3c81fe174fbb3b8a282c132a474c40e2"} Oct 11 10:54:24.217574 master-2 
kubenswrapper[4776]: I1011 10:54:24.216996 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerStarted","Data":"e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf"} Oct 11 10:54:24.327989 master-2 kubenswrapper[4776]: I1011 10:54:24.321992 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:24.330819 master-2 kubenswrapper[4776]: E1011 10:54:24.330788 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: I1011 10:54:24.330837 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: E1011 10:54:24.330855 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="init" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: I1011 10:54:24.330861 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="init" Oct 11 10:54:24.331882 master-2 kubenswrapper[4776]: I1011 10:54:24.331851 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.333344 master-2 kubenswrapper[4776]: I1011 10:54:24.332832 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.335795 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.335979 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.336109 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:54:24.337564 master-2 kubenswrapper[4776]: I1011 10:54:24.337545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:24.466839 master-2 kubenswrapper[4776]: I1011 10:54:24.466768 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.466839 master-2 kubenswrapper[4776]: I1011 10:54:24.466835 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod 
\"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466913 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466984 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467031 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467149 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467190 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.569528 master-2 kubenswrapper[4776]: I1011 10:54:24.569486 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570001 master-2 kubenswrapper[4776]: I1011 10:54:24.569978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570140 master-2 kubenswrapper[4776]: I1011 10:54:24.570124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570281 master-2 kubenswrapper[4776]: I1011 10:54:24.570265 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " 
pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570447 master-2 kubenswrapper[4776]: I1011 10:54:24.570428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570562 master-2 kubenswrapper[4776]: I1011 10:54:24.570546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570696 master-2 kubenswrapper[4776]: I1011 10:54:24.570662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570886 master-2 kubenswrapper[4776]: I1011 10:54:24.570839 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.571056 master-2 kubenswrapper[4776]: I1011 10:54:24.570840 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: 
\"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.572067 master-2 kubenswrapper[4776]: I1011 10:54:24.572036 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602504 master-2 kubenswrapper[4776]: I1011 10:54:24.602440 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602802 master-2 kubenswrapper[4776]: I1011 10:54:24.602546 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602921 master-2 kubenswrapper[4776]: I1011 10:54:24.602833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.608724 master-2 kubenswrapper[4776]: I1011 10:54:24.604071 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: 
\"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.608724 master-2 kubenswrapper[4776]: I1011 10:54:24.607211 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.610073 master-2 kubenswrapper[4776]: I1011 10:54:24.610055 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:24.610232 master-2 kubenswrapper[4776]: I1011 10:54:24.610211 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.933718 master-1 kubenswrapper[4771]: I1011 10:54:24.933653 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerStarted","Data":"31dcfe7ad738be71adcd29d247b4f18163e8ed27a14b555fe19eb6dfd4c64dfb"} Oct 11 10:54:24.935546 master-0 kubenswrapper[4790]: I1011 10:54:24.935466 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:24.937444 master-1 kubenswrapper[4771]: I1011 10:54:24.937393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-blmgp" event={"ID":"a50b2fec-a3b6-4245-9080-5987b411b581","Type":"ContainerStarted","Data":"64768fb3aaa57fbf977b42bcf01d911517cd3d56cc20742d472651a90c1c3f06"} Oct 11 10:54:24.937641 master-1 kubenswrapper[4771]: I1011 10:54:24.937602 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:54:24.962474 master-1 kubenswrapper[4771]: I1011 10:54:24.962349 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595686b98f-blmgp" podStartSLOduration=2.962325671 podStartE2EDuration="2.962325671s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:24.961325092 +0000 UTC m=+1696.935551603" watchObservedRunningTime="2025-10-11 10:54:24.962325671 +0000 UTC m=+1696.936552172" Oct 11 10:54:25.000407 master-0 kubenswrapper[4790]: I1011 10:54:25.000347 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2hcp\" (UniqueName: \"kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp\") pod \"2137512f-c759-4935-944d-48248c99c2ec\" (UID: \"2137512f-c759-4935-944d-48248c99c2ec\") " Oct 11 10:54:25.004124 master-0 kubenswrapper[4790]: I1011 10:54:25.004084 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp" (OuterVolumeSpecName: "kube-api-access-p2hcp") pod "2137512f-c759-4935-944d-48248c99c2ec" (UID: "2137512f-c759-4935-944d-48248c99c2ec"). InnerVolumeSpecName "kube-api-access-p2hcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:25.103049 master-0 kubenswrapper[4790]: I1011 10:54:25.102976 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2hcp\" (UniqueName: \"kubernetes.io/projected/2137512f-c759-4935-944d-48248c99c2ec-kube-api-access-p2hcp\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:25.142963 master-2 kubenswrapper[4776]: I1011 10:54:25.140339 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:25.229090 master-2 kubenswrapper[4776]: I1011 10:54:25.229036 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerDied","Data":"9932a4fcd14aa933b55724fba2f6169f3c81fe174fbb3b8a282c132a474c40e2"} Oct 11 10:54:25.229490 master-2 kubenswrapper[4776]: I1011 10:54:25.229101 4776 scope.go:117] "RemoveContainer" containerID="f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c" Oct 11 10:54:25.229490 master-2 kubenswrapper[4776]: I1011 10:54:25.229147 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:25.293428 master-2 kubenswrapper[4776]: I1011 10:54:25.293377 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293466 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293498 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293597 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293791 master-2 kubenswrapper[4776]: I1011 10:54:25.293724 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293791 master-2 kubenswrapper[4776]: I1011 10:54:25.293779 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.297430 master-2 kubenswrapper[4776]: I1011 10:54:25.297382 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx" (OuterVolumeSpecName: "kube-api-access-shtfx") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "kube-api-access-shtfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:25.313944 master-2 kubenswrapper[4776]: I1011 10:54:25.313849 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.316440 master-2 kubenswrapper[4776]: I1011 10:54:25.315376 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.317999 master-1 kubenswrapper[4771]: I1011 10:54:25.317914 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:54:25.319955 master-2 kubenswrapper[4776]: I1011 10:54:25.319883 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config" (OuterVolumeSpecName: "config") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.320290 master-1 kubenswrapper[4771]: I1011 10:54:25.320246 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.325417 master-1 kubenswrapper[4771]: I1011 10:54:25.325368 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:54:25.325643 master-1 kubenswrapper[4771]: I1011 10:54:25.325607 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 11 10:54:25.327033 master-2 kubenswrapper[4776]: E1011 10:54:25.326931 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb podName:e8ac662a-494b-422a-84fd-2e40681d4ae6 nodeName:}" failed. No retries permitted until 2025-10-11 10:54:25.826897741 +0000 UTC m=+1700.611324450 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6") : error deleting /var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volume-subpaths: remove /var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volume-subpaths: no such file or directory Oct 11 10:54:25.327118 master-1 kubenswrapper[4771]: I1011 10:54:25.327066 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:54:25.327412 master-2 kubenswrapper[4776]: I1011 10:54:25.327373 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.339997 master-1 kubenswrapper[4771]: I1011 10:54:25.339922 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396423 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396463 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396476 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396488 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396914 master-2 kubenswrapper[4776]: I1011 10:54:25.396502 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.507631 master-1 kubenswrapper[4771]: I1011 10:54:25.507524 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: 
\"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507631 master-1 kubenswrapper[4771]: I1011 10:54:25.507571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlzm\" (UniqueName: \"kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507694 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507718 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507734 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.507855 master-1 kubenswrapper[4771]: I1011 10:54:25.507790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.545834 master-0 kubenswrapper[4790]: I1011 10:54:25.545663 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-gvzlv" event={"ID":"2137512f-c759-4935-944d-48248c99c2ec","Type":"ContainerDied","Data":"2c4922c1ad065c1a799c822a668072dcac70cccb7bbd31842edf52a4efb72f91"} Oct 11 10:54:25.545834 master-0 kubenswrapper[4790]: I1011 10:54:25.545767 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-gvzlv" Oct 11 10:54:25.545834 master-0 kubenswrapper[4790]: I1011 10:54:25.545764 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4922c1ad065c1a799c822a668072dcac70cccb7bbd31842edf52a4efb72f91" Oct 11 10:54:25.609444 master-1 kubenswrapper[4771]: I1011 10:54:25.609374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609444 master-1 kubenswrapper[4771]: I1011 10:54:25.609430 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jlzm\" (UniqueName: \"kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 10:54:25.609485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 10:54:25.609512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 
10:54:25.609533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 10:54:25.609553 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 10:54:25.609594 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.609772 master-1 kubenswrapper[4771]: I1011 10:54:25.609615 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.610134 master-1 kubenswrapper[4771]: I1011 10:54:25.610101 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" 
Oct 11 10:54:25.610521 master-1 kubenswrapper[4771]: I1011 10:54:25.610475 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.614685 master-1 kubenswrapper[4771]: I1011 10:54:25.614631 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:25.614798 master-1 kubenswrapper[4771]: I1011 10:54:25.614696 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/643ba808821ea6db76a2042d255ba68bbc43444ed3cc7e332598424f5540da0c/globalmount\"" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.615618 master-1 kubenswrapper[4771]: I1011 10:54:25.615570 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.616860 master-1 kubenswrapper[4771]: I1011 10:54:25.616823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 
10:54:25.618410 master-1 kubenswrapper[4771]: I1011 10:54:25.618346 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.638322 master-1 kubenswrapper[4771]: I1011 10:54:25.638242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jlzm\" (UniqueName: \"kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.717511 master-1 kubenswrapper[4771]: I1011 10:54:25.717400 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:25.830093 master-1 kubenswrapper[4771]: I1011 10:54:25.830031 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:25.919283 master-2 kubenswrapper[4776]: I1011 10:54:25.919191 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.919876 master-2 kubenswrapper[4776]: I1011 10:54:25.919687 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb" (OuterVolumeSpecName: 
"ovsdbserver-sb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.920068 master-2 kubenswrapper[4776]: I1011 10:54:25.920007 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:26.178843 master-2 kubenswrapper[4776]: I1011 10:54:26.178708 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:26.273013 master-2 kubenswrapper[4776]: I1011 10:54:26.272957 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:26.277521 master-2 kubenswrapper[4776]: I1011 10:54:26.277470 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:26.327729 master-1 kubenswrapper[4771]: I1011 10:54:26.327669 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:26.329898 master-1 kubenswrapper[4771]: I1011 10:54:26.329850 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.332313 master-1 kubenswrapper[4771]: I1011 10:54:26.332271 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:54:26.333662 master-1 kubenswrapper[4771]: I1011 10:54:26.333633 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:54:26.356565 master-1 kubenswrapper[4771]: I1011 10:54:26.356508 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430201 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hg2\" (UniqueName: \"kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430296 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 
10:54:26.430374 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430490 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430567 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.435384 master-1 kubenswrapper[4771]: I1011 10:54:26.430585 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs\") pod 
\"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.456775 master-2 kubenswrapper[4776]: I1011 10:54:26.453321 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:26.536219 master-1 kubenswrapper[4771]: I1011 10:54:26.536127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536219 master-1 kubenswrapper[4771]: I1011 10:54:26.536230 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536523 master-1 kubenswrapper[4771]: I1011 10:54:26.536306 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536523 master-1 kubenswrapper[4771]: I1011 10:54:26.536443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " 
pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536619 master-1 kubenswrapper[4771]: I1011 10:54:26.536529 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536619 master-1 kubenswrapper[4771]: I1011 10:54:26.536558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536619 master-1 kubenswrapper[4771]: I1011 10:54:26.536595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hg2\" (UniqueName: \"kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.536756 master-1 kubenswrapper[4771]: I1011 10:54:26.536692 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.537525 master-1 kubenswrapper[4771]: I1011 10:54:26.537495 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: 
\"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.540066 master-1 kubenswrapper[4771]: I1011 10:54:26.539964 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.542820 master-1 kubenswrapper[4771]: I1011 10:54:26.542746 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:26.542820 master-1 kubenswrapper[4771]: I1011 10:54:26.542798 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/319ddbbf14dc29e9dbd7eec9a997b70a9a11c6eca7f6496495d34ea4ac3ccad0/globalmount\"" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.545500 master-1 kubenswrapper[4771]: I1011 10:54:26.545412 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.547788 master-1 kubenswrapper[4771]: I1011 10:54:26.547746 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: 
\"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.552939 master-1 kubenswrapper[4771]: I1011 10:54:26.552890 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.560070 master-1 kubenswrapper[4771]: I1011 10:54:26.560010 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hg2\" (UniqueName: \"kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.564262 master-1 kubenswrapper[4771]: I1011 10:54:26.564188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:26.977704 master-1 kubenswrapper[4771]: I1011 10:54:26.977618 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerStarted","Data":"527c6d5702318963faeb1a8a28a7b7506d42eaf7dcf4a74cf375d67c406096ba"} Oct 11 10:54:27.079935 master-1 kubenswrapper[4771]: I1011 10:54:27.079440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: 
\"18861a21-406e-479b-8712-9a62ca2ebf4a\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:27.192630 master-1 kubenswrapper[4771]: I1011 10:54:27.192547 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:54:27.279772 master-2 kubenswrapper[4776]: I1011 10:54:27.279720 4776 generic.go:334] "Generic (PLEG): container finished" podID="316381d6-4304-44b3-a742-50e80da7acd1" containerID="1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260" exitCode=0 Oct 11 10:54:27.281105 master-2 kubenswrapper[4776]: I1011 10:54:27.279776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerDied","Data":"1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260"} Oct 11 10:54:27.796610 master-2 kubenswrapper[4776]: I1011 10:54:27.796456 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:54:27.796891 master-2 kubenswrapper[4776]: E1011 10:54:27.796867 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.796891 master-2 kubenswrapper[4776]: I1011 10:54:27.796885 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.797113 master-2 kubenswrapper[4776]: I1011 10:54:27.797065 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.798124 master-2 kubenswrapper[4776]: I1011 10:54:27.798090 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.801641 master-2 kubenswrapper[4776]: I1011 10:54:27.801597 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:54:27.801977 master-2 kubenswrapper[4776]: I1011 10:54:27.801938 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:54:27.813414 master-2 kubenswrapper[4776]: I1011 10:54:27.813363 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:54:27.970948 master-2 kubenswrapper[4776]: I1011 10:54:27.970899 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971156 master-2 kubenswrapper[4776]: I1011 10:54:27.971114 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971215 master-2 kubenswrapper[4776]: I1011 10:54:27.971172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971262 master-2 
kubenswrapper[4776]: I1011 10:54:27.971239 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971320 master-2 kubenswrapper[4776]: I1011 10:54:27.971290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971433 master-2 kubenswrapper[4776]: I1011 10:54:27.971402 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971490 master-2 kubenswrapper[4776]: I1011 10:54:27.971446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971540 master-2 kubenswrapper[4776]: I1011 10:54:27.971494 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod 
\"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.007301 master-1 kubenswrapper[4771]: I1011 10:54:28.007223 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerStarted","Data":"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027"} Oct 11 10:54:28.013368 master-1 kubenswrapper[4771]: I1011 10:54:28.011504 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"abb599b9-db44-492a-bc73-6ff5a2c212d5","Type":"ContainerStarted","Data":"7285e766541fce2470ed4423abb2e429bac288761eb9a91edaf6349ae518f587"} Oct 11 10:54:28.061605 master-1 kubenswrapper[4771]: I1011 10:54:28.061153 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.061082293 podStartE2EDuration="20.061082293s" podCreationTimestamp="2025-10-11 10:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:28.055579644 +0000 UTC m=+1700.029806095" watchObservedRunningTime="2025-10-11 10:54:28.061082293 +0000 UTC m=+1700.035308734" Oct 11 10:54:28.073968 master-2 kubenswrapper[4776]: I1011 10:54:28.073846 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.073968 master-2 kubenswrapper[4776]: I1011 10:54:28.073961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.073997 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074043 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074117 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074146 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074174 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.075341 master-2 kubenswrapper[4776]: I1011 10:54:28.075307 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.076767 master-2 kubenswrapper[4776]: I1011 10:54:28.076694 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078131 4776 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078159 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/977628254c2695ff17425dccc1fbe376fb7c4f4d8dfcfd87eb3a48ca9779afa1/globalmount\"" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078567 master-2 kubenswrapper[4776]: I1011 10:54:28.078432 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.079029 master-2 kubenswrapper[4776]: I1011 10:54:28.078916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.080331 master-2 kubenswrapper[4776]: I1011 10:54:28.080283 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.081254 master-2 kubenswrapper[4776]: I1011 10:54:28.081212 4776 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" path="/var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volumes" Oct 11 10:54:28.105713 master-2 kubenswrapper[4776]: I1011 10:54:28.105647 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.170125 master-1 kubenswrapper[4771]: I1011 10:54:28.170069 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:54:28.174832 master-1 kubenswrapper[4771]: W1011 10:54:28.174745 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18861a21_406e_479b_8712_9a62ca2ebf4a.slice/crio-51aa777863a3d17bf81dc45f1659ccef0c9c30b6b9bf5305b555b52a6a626104 WatchSource:0}: Error finding container 51aa777863a3d17bf81dc45f1659ccef0c9c30b6b9bf5305b555b52a6a626104: Status 404 returned error can't find the container with id 51aa777863a3d17bf81dc45f1659ccef0c9c30b6b9bf5305b555b52a6a626104 Oct 11 10:54:28.341660 master-2 kubenswrapper[4776]: I1011 10:54:28.341593 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerStarted","Data":"5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d"} Oct 11 10:54:28.343624 master-0 kubenswrapper[4790]: I1011 10:54:28.343567 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:54:28.344340 master-0 kubenswrapper[4790]: E1011 10:54:28.344194 4790 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2137512f-c759-4935-944d-48248c99c2ec" containerName="mariadb-database-create" Oct 11 10:54:28.344340 master-0 kubenswrapper[4790]: I1011 10:54:28.344210 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="2137512f-c759-4935-944d-48248c99c2ec" containerName="mariadb-database-create" Oct 11 10:54:28.344836 master-0 kubenswrapper[4790]: I1011 10:54:28.344529 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="2137512f-c759-4935-944d-48248c99c2ec" containerName="mariadb-database-create" Oct 11 10:54:28.346121 master-0 kubenswrapper[4790]: I1011 10:54:28.346089 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.350731 master-0 kubenswrapper[4790]: I1011 10:54:28.350660 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 11 10:54:28.351099 master-0 kubenswrapper[4790]: I1011 10:54:28.351070 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:54:28.351386 master-0 kubenswrapper[4790]: I1011 10:54:28.351358 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:54:28.355873 master-0 kubenswrapper[4790]: I1011 10:54:28.355819 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:54:28.397879 master-2 kubenswrapper[4776]: I1011 10:54:28.397598 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:28.421844 master-2 kubenswrapper[4776]: I1011 10:54:28.421768 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5pz76" podStartSLOduration=2.020701722 podStartE2EDuration="6.421743903s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 
10:54:23.462597707 +0000 UTC m=+1698.247024416" lastFinishedPulling="2025-10-11 10:54:27.863639878 +0000 UTC m=+1702.648066597" observedRunningTime="2025-10-11 10:54:28.405848212 +0000 UTC m=+1703.190274941" watchObservedRunningTime="2025-10-11 10:54:28.421743903 +0000 UTC m=+1703.206170612" Oct 11 10:54:28.464452 master-1 kubenswrapper[4771]: I1011 10:54:28.464387 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.524547 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.524625 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.524653 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" 
Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.524823 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.524892 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.525139 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.525182 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvf6\" (UniqueName: \"kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.525741 master-0 kubenswrapper[4790]: I1011 10:54:28.525210 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.627155 master-0 kubenswrapper[4790]: I1011 10:54:28.627077 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.627155 master-0 kubenswrapper[4790]: I1011 10:54:28.627132 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvf6\" (UniqueName: \"kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.627155 master-0 kubenswrapper[4790]: I1011 10:54:28.627155 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627218 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627272 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627299 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627323 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627461 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627726 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630111 master-0 kubenswrapper[4790]: I1011 10:54:28.627899 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.630576 master-0 kubenswrapper[4790]: I1011 10:54:28.630526 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:28.630666 master-0 kubenswrapper[4790]: I1011 10:54:28.630591 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/95ab3ea1c73b905e55aa0f0a1e574a5056ec96dde23978388ab58fbe89465472/globalmount\"" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.632439 master-0 kubenswrapper[4790]: I1011 10:54:28.632375 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.633067 master-0 kubenswrapper[4790]: I1011 10:54:28.632989 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.633599 master-0 kubenswrapper[4790]: I1011 10:54:28.633562 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.646221 master-0 kubenswrapper[4790]: I1011 10:54:28.646177 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvf6\" (UniqueName: \"kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.647370 master-0 kubenswrapper[4790]: I1011 10:54:28.647327 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:28.766406 master-1 kubenswrapper[4771]: I1011 10:54:28.766223 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:28.790326 master-0 kubenswrapper[4790]: I1011 10:54:28.790162 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-9ac8-account-create-r5rxs"] Oct 11 10:54:28.791407 master-0 kubenswrapper[4790]: I1011 10:54:28.791374 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-9ac8-account-create-r5rxs" Oct 11 10:54:28.795083 master-0 kubenswrapper[4790]: I1011 10:54:28.794992 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 11 10:54:28.799842 master-0 kubenswrapper[4790]: I1011 10:54:28.799773 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9ac8-account-create-r5rxs"] Oct 11 10:54:28.938231 master-0 kubenswrapper[4790]: I1011 10:54:28.937882 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45c75\" (UniqueName: \"kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75\") pod \"heat-9ac8-account-create-r5rxs\" (UID: \"cdd4a60e-f24a-48fe-afcb-c7ccab615f69\") " pod="openstack/heat-9ac8-account-create-r5rxs" Oct 11 10:54:28.973032 master-0 kubenswrapper[4790]: I1011 10:54:28.972955 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b634-account-create-vb2w7"] Oct 11 10:54:28.974291 master-0 kubenswrapper[4790]: I1011 10:54:28.974265 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b634-account-create-vb2w7" Oct 11 10:54:28.977548 master-0 kubenswrapper[4790]: I1011 10:54:28.977512 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 11 10:54:28.985005 master-0 kubenswrapper[4790]: I1011 10:54:28.984960 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b634-account-create-vb2w7"] Oct 11 10:54:29.015743 master-0 kubenswrapper[4790]: I1011 10:54:29.015645 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:54:29.017447 master-0 kubenswrapper[4790]: I1011 10:54:29.016988 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.020955 master-0 kubenswrapper[4790]: I1011 10:54:29.020890 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:54:29.022924 master-0 kubenswrapper[4790]: I1011 10:54:29.022891 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:54:29.023615 master-1 kubenswrapper[4771]: I1011 10:54:29.023369 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerStarted","Data":"51aa777863a3d17bf81dc45f1659ccef0c9c30b6b9bf5305b555b52a6a626104"} Oct 11 10:54:29.027491 master-0 kubenswrapper[4790]: I1011 10:54:29.027429 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:54:29.049056 master-0 kubenswrapper[4790]: I1011 10:54:29.039470 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45c75\" (UniqueName: \"kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75\") pod \"heat-9ac8-account-create-r5rxs\" (UID: \"cdd4a60e-f24a-48fe-afcb-c7ccab615f69\") " pod="openstack/heat-9ac8-account-create-r5rxs" Oct 11 10:54:29.061672 master-0 kubenswrapper[4790]: I1011 10:54:29.061604 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45c75\" (UniqueName: \"kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75\") pod \"heat-9ac8-account-create-r5rxs\" (UID: \"cdd4a60e-f24a-48fe-afcb-c7ccab615f69\") " pod="openstack/heat-9ac8-account-create-r5rxs" Oct 11 10:54:29.117732 master-0 kubenswrapper[4790]: I1011 10:54:29.117133 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-9ac8-account-create-r5rxs" Oct 11 10:54:29.140841 master-0 kubenswrapper[4790]: I1011 10:54:29.140803 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.140941 master-0 kubenswrapper[4790]: I1011 10:54:29.140882 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.140941 master-0 kubenswrapper[4790]: I1011 10:54:29.140915 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.141001 master-0 kubenswrapper[4790]: I1011 10:54:29.140939 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.141001 master-0 kubenswrapper[4790]: I1011 10:54:29.140975 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.141077 master-0 kubenswrapper[4790]: I1011 10:54:29.141017 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczqt\" (UniqueName: \"kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.141077 master-0 kubenswrapper[4790]: I1011 10:54:29.141037 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.141077 master-0 kubenswrapper[4790]: I1011 10:54:29.141063 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpf5\" (UniqueName: \"kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5\") pod \"cinder-b634-account-create-vb2w7\" (UID: \"09ddf95f-6e9c-4f3c-b742-87379c6594b2\") " pod="openstack/cinder-b634-account-create-vb2w7" Oct 11 10:54:29.141077 master-0 kubenswrapper[4790]: I1011 10:54:29.141274 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.171858 
master-0 kubenswrapper[4790]: I1011 10:54:29.171671 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2033-account-create-jh9gc"] Oct 11 10:54:29.173189 master-0 kubenswrapper[4790]: I1011 10:54:29.173169 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2033-account-create-jh9gc" Oct 11 10:54:29.175780 master-0 kubenswrapper[4790]: I1011 10:54:29.175760 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 11 10:54:29.184241 master-0 kubenswrapper[4790]: I1011 10:54:29.183619 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2033-account-create-jh9gc"] Oct 11 10:54:29.195839 master-2 kubenswrapper[4776]: I1011 10:54:29.194909 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:29.243199 master-0 kubenswrapper[4790]: I1011 10:54:29.243124 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.243534 master-0 kubenswrapper[4790]: I1011 10:54:29.243512 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.243650 master-0 kubenswrapper[4790]: I1011 10:54:29.243630 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.243800 master-0 kubenswrapper[4790]: I1011 10:54:29.243781 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.243935 master-0 kubenswrapper[4790]: I1011 10:54:29.243917 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.244088 master-0 kubenswrapper[4790]: I1011 10:54:29.244070 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vczqt\" (UniqueName: \"kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.244201 master-0 kubenswrapper[4790]: I1011 10:54:29.244182 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.244298 master-0 kubenswrapper[4790]: I1011 10:54:29.244281 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xkpf5\" (UniqueName: \"kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5\") pod \"cinder-b634-account-create-vb2w7\" (UID: \"09ddf95f-6e9c-4f3c-b742-87379c6594b2\") " pod="openstack/cinder-b634-account-create-vb2w7" Oct 11 10:54:29.244383 master-0 kubenswrapper[4790]: I1011 10:54:29.244370 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.245568 master-0 kubenswrapper[4790]: I1011 10:54:29.245549 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.246249 master-0 kubenswrapper[4790]: I1011 10:54:29.246192 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.251222 master-0 kubenswrapper[4790]: I1011 10:54:29.251182 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.252167 master-0 kubenswrapper[4790]: I1011 10:54:29.252146 4790 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:54:29.252310 master-0 kubenswrapper[4790]: I1011 10:54:29.252266 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b0c7c7eacbecbf6beec44181cd1a14327b215e622b505cc0fbc4653c9c57c6ce/globalmount\"" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.252807 master-0 kubenswrapper[4790]: I1011 10:54:29.252399 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.252807 master-0 kubenswrapper[4790]: I1011 10:54:29.252689 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.254742 master-0 kubenswrapper[4790]: I1011 10:54:29.254680 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.271543 master-0 kubenswrapper[4790]: I1011 10:54:29.270535 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vczqt\" (UniqueName: \"kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:29.274669 master-0 kubenswrapper[4790]: I1011 10:54:29.274446 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpf5\" (UniqueName: \"kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5\") pod \"cinder-b634-account-create-vb2w7\" (UID: \"09ddf95f-6e9c-4f3c-b742-87379c6594b2\") " pod="openstack/cinder-b634-account-create-vb2w7" Oct 11 10:54:29.302070 master-2 kubenswrapper[4776]: I1011 10:54:29.302021 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302088 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302109 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302214 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302338 master-2 kubenswrapper[4776]: I1011 10:54:29.302236 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302338 master-2 kubenswrapper[4776]: I1011 10:54:29.302311 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.306050 master-2 kubenswrapper[4776]: I1011 10:54:29.305970 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts" (OuterVolumeSpecName: "scripts") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.306662 master-2 kubenswrapper[4776]: I1011 10:54:29.306581 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4" (OuterVolumeSpecName: "kube-api-access-xhkv4") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "kube-api-access-xhkv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:29.307081 master-2 kubenswrapper[4776]: I1011 10:54:29.307025 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.309264 master-2 kubenswrapper[4776]: I1011 10:54:29.309079 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.311237 master-0 kubenswrapper[4790]: I1011 10:54:29.311038 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b634-account-create-vb2w7" Oct 11 10:54:29.335895 master-2 kubenswrapper[4776]: I1011 10:54:29.335847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.346426 master-0 kubenswrapper[4790]: I1011 10:54:29.346342 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjx4\" (UniqueName: \"kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4\") pod \"neutron-2033-account-create-jh9gc\" (UID: \"08a325c6-b9b6-495b-87dc-d6e12b3f1029\") " pod="openstack/neutron-2033-account-create-jh9gc" Oct 11 10:54:29.368010 master-2 kubenswrapper[4776]: I1011 10:54:29.367949 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data" (OuterVolumeSpecName: "config-data") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.370902 master-2 kubenswrapper[4776]: I1011 10:54:29.370856 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerDied","Data":"475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0"} Oct 11 10:54:29.370993 master-2 kubenswrapper[4776]: I1011 10:54:29.370893 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jdggk"
Oct 11 10:54:29.370993 master-2 kubenswrapper[4776]: I1011 10:54:29.370914 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0"
Oct 11 10:54:29.376853 master-2 kubenswrapper[4776]: I1011 10:54:29.376799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d"}
Oct 11 10:54:29.376853 master-2 kubenswrapper[4776]: I1011 10:54:29.376855 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"020cb51e8f192e46e701d1c522ecf5cc9d035525d4d7b945c86775cc56da8867"}
Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404753 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404788 4776 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404799 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404808 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404817 4776 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.405197 master-2 kubenswrapper[4776]: I1011 10:54:29.404827 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:54:29.448531 master-0 kubenswrapper[4790]: I1011 10:54:29.448458 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjx4\" (UniqueName: \"kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4\") pod \"neutron-2033-account-create-jh9gc\" (UID: \"08a325c6-b9b6-495b-87dc-d6e12b3f1029\") " pod="openstack/neutron-2033-account-create-jh9gc"
Oct 11 10:54:29.451220 master-2 kubenswrapper[4776]: I1011 10:54:29.451161 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jdggk"]
Oct 11 10:54:29.459388 master-2 kubenswrapper[4776]: I1011 10:54:29.459272 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jdggk"]
Oct 11 10:54:29.476391 master-2 kubenswrapper[4776]: I1011 10:54:29.476287 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xptqx"]
Oct 11 10:54:29.476788 master-2 kubenswrapper[4776]: E1011 10:54:29.476671 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap"
Oct 11 10:54:29.476788 master-2 kubenswrapper[4776]: I1011 10:54:29.476705 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap"
Oct 11 10:54:29.477245 master-2 kubenswrapper[4776]: I1011 10:54:29.476899 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap"
Oct 11 10:54:29.479044 master-2 kubenswrapper[4776]: I1011 10:54:29.478996 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.482588 master-2 kubenswrapper[4776]: I1011 10:54:29.482507 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 11 10:54:29.483425 master-2 kubenswrapper[4776]: I1011 10:54:29.483400 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 11 10:54:29.483586 master-2 kubenswrapper[4776]: I1011 10:54:29.483564 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 11 10:54:29.488823 master-0 kubenswrapper[4790]: I1011 10:54:29.488460 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjx4\" (UniqueName: \"kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4\") pod \"neutron-2033-account-create-jh9gc\" (UID: \"08a325c6-b9b6-495b-87dc-d6e12b3f1029\") " pod="openstack/neutron-2033-account-create-jh9gc"
Oct 11 10:54:29.491463 master-2 kubenswrapper[4776]: I1011 10:54:29.491403 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xptqx"]
Oct 11 10:54:29.539033 master-0 kubenswrapper[4790]: I1011 10:54:29.538864 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2033-account-create-jh9gc"
Oct 11 10:54:29.608149 master-0 kubenswrapper[4790]: I1011 10:54:29.608080 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9ac8-account-create-r5rxs"]
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607352 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607565 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.608779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.632444 master-1 kubenswrapper[4771]: I1011 10:54:29.622437 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"]
Oct 11 10:54:29.675590 master-2 kubenswrapper[4776]: I1011 10:54:29.674998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:54:29.709959 master-2 kubenswrapper[4776]: I1011 10:54:29.709902 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.709959 master-2 kubenswrapper[4776]: I1011 10:54:29.709962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710004 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710036 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710061 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.710279 master-2 kubenswrapper[4776]: I1011 10:54:29.710193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.713834 master-2 kubenswrapper[4776]: I1011 10:54:29.713768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.714015 master-2 kubenswrapper[4776]: I1011 10:54:29.713984 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.715585 master-2 kubenswrapper[4776]: I1011 10:54:29.715550 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.715869 master-2 kubenswrapper[4776]: I1011 10:54:29.715781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.717794 master-2 kubenswrapper[4776]: I1011 10:54:29.717754 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.743464 master-2 kubenswrapper[4776]: I1011 10:54:29.743423 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.761757 master-0 kubenswrapper[4790]: I1011 10:54:29.760313 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b634-account-create-vb2w7"]
Oct 11 10:54:29.794841 master-0 kubenswrapper[4790]: W1011 10:54:29.794473 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ddf95f_6e9c_4f3c_b742_87379c6594b2.slice/crio-4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089 WatchSource:0}: Error finding container 4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089: Status 404 returned error can't find the container with id 4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089
Oct 11 10:54:29.802120 master-2 kubenswrapper[4776]: I1011 10:54:29.802062 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx"
Oct 11 10:54:29.918150 master-2 kubenswrapper[4776]: I1011 10:54:29.917770 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:54:30.037849 master-1 kubenswrapper[4771]: I1011 10:54:30.037738 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerStarted","Data":"ee8592262e70401d099ff1b266023cdb236d7c7195e76597576b3cf0944d23f5"}
Oct 11 10:54:30.047677 master-1 kubenswrapper[4771]: I1011 10:54:30.047316 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerStarted","Data":"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb"}
Oct 11 10:54:30.073293 master-0 kubenswrapper[4790]: I1011 10:54:30.073198 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2033-account-create-jh9gc"]
Oct 11 10:54:30.077768 master-2 kubenswrapper[4776]: I1011 10:54:30.077715 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316381d6-4304-44b3-a742-50e80da7acd1" path="/var/lib/kubelet/pods/316381d6-4304-44b3-a742-50e80da7acd1/volumes"
Oct 11 10:54:30.392942 master-2 kubenswrapper[4776]: I1011 10:54:30.392889 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969"}
Oct 11 10:54:30.429362 master-2 kubenswrapper[4776]: I1011 10:54:30.429278 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=8.429258955 podStartE2EDuration="8.429258955s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:30.423229252 +0000 UTC m=+1705.207655971" watchObservedRunningTime="2025-10-11 10:54:30.429258955 +0000 UTC m=+1705.213685664"
Oct 11 10:54:30.451660 master-2 kubenswrapper[4776]: W1011 10:54:30.451615 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30996a86_1b86_4a67_bfea_0e63f7417196.slice/crio-740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186 WatchSource:0}: Error finding container 740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186: Status 404 returned error can't find the container with id 740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186
Oct 11 10:54:30.455457 master-2 kubenswrapper[4776]: I1011 10:54:30.455395 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xptqx"]
Oct 11 10:54:30.593724 master-0 kubenswrapper[4790]: I1011 10:54:30.593646 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:30.608153 master-0 kubenswrapper[4790]: I1011 10:54:30.608090 4790 generic.go:334] "Generic (PLEG): container finished" podID="09ddf95f-6e9c-4f3c-b742-87379c6594b2" containerID="fd9735379426d4418da18546aac8b7806a6015a386e483f957b980d675840314" exitCode=0
Oct 11 10:54:30.608355 master-0 kubenswrapper[4790]: I1011 10:54:30.608166 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b634-account-create-vb2w7" event={"ID":"09ddf95f-6e9c-4f3c-b742-87379c6594b2","Type":"ContainerDied","Data":"fd9735379426d4418da18546aac8b7806a6015a386e483f957b980d675840314"}
Oct 11 10:54:30.608355 master-0 kubenswrapper[4790]: I1011 10:54:30.608256 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b634-account-create-vb2w7" event={"ID":"09ddf95f-6e9c-4f3c-b742-87379c6594b2","Type":"ContainerStarted","Data":"4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089"}
Oct 11 10:54:30.610163 master-0 kubenswrapper[4790]: I1011 10:54:30.610110 4790 generic.go:334] "Generic (PLEG): container finished" podID="08a325c6-b9b6-495b-87dc-d6e12b3f1029" containerID="5ca20afffe15faa31f5c2c1443a96be8fe5b0268275280368238f1f4b32ef4f2" exitCode=0
Oct 11 10:54:30.610221 master-0 kubenswrapper[4790]: I1011 10:54:30.610196 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2033-account-create-jh9gc" event={"ID":"08a325c6-b9b6-495b-87dc-d6e12b3f1029","Type":"ContainerDied","Data":"5ca20afffe15faa31f5c2c1443a96be8fe5b0268275280368238f1f4b32ef4f2"}
Oct 11 10:54:30.610299 master-0 kubenswrapper[4790]: I1011 10:54:30.610284 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2033-account-create-jh9gc" event={"ID":"08a325c6-b9b6-495b-87dc-d6e12b3f1029","Type":"ContainerStarted","Data":"064a003115b13d4cbf140f9fcffdd93b2d44775212a0623054da53071870c839"}
Oct 11 10:54:30.613835 master-0 kubenswrapper[4790]: I1011 10:54:30.613802 4790 generic.go:334] "Generic (PLEG): container finished" podID="cdd4a60e-f24a-48fe-afcb-c7ccab615f69" containerID="050e70bb3b6a03db41f9fcb784b5238c3ff9d94ed85c503d6f9f58f7bd27daa0" exitCode=0
Oct 11 10:54:30.613835 master-0 kubenswrapper[4790]: I1011 10:54:30.613831 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9ac8-account-create-r5rxs" event={"ID":"cdd4a60e-f24a-48fe-afcb-c7ccab615f69","Type":"ContainerDied","Data":"050e70bb3b6a03db41f9fcb784b5238c3ff9d94ed85c503d6f9f58f7bd27daa0"}
Oct 11 10:54:30.614071 master-0 kubenswrapper[4790]: I1011 10:54:30.613849 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9ac8-account-create-r5rxs" event={"ID":"cdd4a60e-f24a-48fe-afcb-c7ccab615f69","Type":"ContainerStarted","Data":"d6ae65f963950836b1a177c858bc761c8e099ea553721fa061ba026946ca1a96"}
Oct 11 10:54:30.672722 master-2 kubenswrapper[4776]: I1011 10:54:30.671789 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"]
Oct 11 10:54:30.705904 master-1 kubenswrapper[4771]: I1011 10:54:30.704315 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Oct 11 10:54:30.818199 master-0 kubenswrapper[4790]: I1011 10:54:30.818133 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:31.397833 master-0 kubenswrapper[4790]: I1011 10:54:31.397734 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:54:31.404854 master-0 kubenswrapper[4790]: W1011 10:54:31.404792 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ebc6e3_ce04_4aac_bb04_ded9662f65e3.slice/crio-ba85d8283f47fbc71a069e2badeb37c9ec3baa560e78e0e496d6945e23ef982f WatchSource:0}: Error finding container ba85d8283f47fbc71a069e2badeb37c9ec3baa560e78e0e496d6945e23ef982f: Status 404 returned error can't find the container with id ba85d8283f47fbc71a069e2badeb37c9ec3baa560e78e0e496d6945e23ef982f
Oct 11 10:54:31.409412 master-2 kubenswrapper[4776]: I1011 10:54:31.409326 4776 generic.go:334] "Generic (PLEG): container finished" podID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerID="5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d" exitCode=0
Oct 11 10:54:31.409981 master-2 kubenswrapper[4776]: I1011 10:54:31.409454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerDied","Data":"5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d"}
Oct 11 10:54:31.413389 master-2 kubenswrapper[4776]: I1011 10:54:31.413323 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28"}
Oct 11 10:54:31.413456 master-2 kubenswrapper[4776]: I1011 10:54:31.413391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"5a48d5bbfd49d56d4f32777007bb97dc3fb7108ea65533008682389d26fd8acc"}
Oct 11 10:54:31.415555 master-2 kubenswrapper[4776]: I1011 10:54:31.415497 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerStarted","Data":"7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc"}
Oct 11 10:54:31.415652 master-2 kubenswrapper[4776]: I1011 10:54:31.415574 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerStarted","Data":"740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186"}
Oct 11 10:54:31.463879 master-2 kubenswrapper[4776]: I1011 10:54:31.463791 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xptqx" podStartSLOduration=2.4637330840000002 podStartE2EDuration="2.463733084s" podCreationTimestamp="2025-10-11 10:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:31.455797149 +0000 UTC m=+1706.240223878" watchObservedRunningTime="2025-10-11 10:54:31.463733084 +0000 UTC m=+1706.248159803"
Oct 11 10:54:31.622082 master-0 kubenswrapper[4790]: I1011 10:54:31.621962 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerStarted","Data":"ba85d8283f47fbc71a069e2badeb37c9ec3baa560e78e0e496d6945e23ef982f"}
Oct 11 10:54:32.103410 master-0 kubenswrapper[4790]: I1011 10:54:32.103351 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b634-account-create-vb2w7"
Oct 11 10:54:32.226764 master-0 kubenswrapper[4790]: I1011 10:54:32.226666 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkpf5\" (UniqueName: \"kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5\") pod \"09ddf95f-6e9c-4f3c-b742-87379c6594b2\" (UID: \"09ddf95f-6e9c-4f3c-b742-87379c6594b2\") "
Oct 11 10:54:32.229473 master-0 kubenswrapper[4790]: I1011 10:54:32.229408 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5" (OuterVolumeSpecName: "kube-api-access-xkpf5") pod "09ddf95f-6e9c-4f3c-b742-87379c6594b2" (UID: "09ddf95f-6e9c-4f3c-b742-87379c6594b2"). InnerVolumeSpecName "kube-api-access-xkpf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:32.276351 master-0 kubenswrapper[4790]: I1011 10:54:32.276297 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9ac8-account-create-r5rxs"
Oct 11 10:54:32.279349 master-0 kubenswrapper[4790]: I1011 10:54:32.279305 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " pod="openstack/glance-b5802-default-internal-api-2"
Oct 11 10:54:32.282844 master-0 kubenswrapper[4790]: I1011 10:54:32.282800 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2033-account-create-jh9gc"
Oct 11 10:54:32.330954 master-0 kubenswrapper[4790]: I1011 10:54:32.330893 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkpf5\" (UniqueName: \"kubernetes.io/projected/09ddf95f-6e9c-4f3c-b742-87379c6594b2-kube-api-access-xkpf5\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:32.333568 master-0 kubenswrapper[4790]: I1011 10:54:32.333480 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2"
Oct 11 10:54:32.425829 master-2 kubenswrapper[4776]: I1011 10:54:32.425785 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf"}
Oct 11 10:54:32.432061 master-0 kubenswrapper[4790]: I1011 10:54:32.431903 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbjx4\" (UniqueName: \"kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4\") pod \"08a325c6-b9b6-495b-87dc-d6e12b3f1029\" (UID: \"08a325c6-b9b6-495b-87dc-d6e12b3f1029\") "
Oct 11 10:54:32.432061 master-0 kubenswrapper[4790]: I1011 10:54:32.432020 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45c75\" (UniqueName: \"kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75\") pod \"cdd4a60e-f24a-48fe-afcb-c7ccab615f69\" (UID: \"cdd4a60e-f24a-48fe-afcb-c7ccab615f69\") "
Oct 11 10:54:32.439084 master-0 kubenswrapper[4790]: I1011 10:54:32.437808 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4" (OuterVolumeSpecName: "kube-api-access-gbjx4") pod "08a325c6-b9b6-495b-87dc-d6e12b3f1029" (UID: "08a325c6-b9b6-495b-87dc-d6e12b3f1029"). InnerVolumeSpecName "kube-api-access-gbjx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:32.440666 master-0 kubenswrapper[4790]: I1011 10:54:32.440155 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75" (OuterVolumeSpecName: "kube-api-access-45c75") pod "cdd4a60e-f24a-48fe-afcb-c7ccab615f69" (UID: "cdd4a60e-f24a-48fe-afcb-c7ccab615f69"). InnerVolumeSpecName "kube-api-access-45c75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:32.459077 master-2 kubenswrapper[4776]: I1011 10:54:32.458990 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-2" podStartSLOduration=7.458968619 podStartE2EDuration="7.458968619s" podCreationTimestamp="2025-10-11 10:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:32.457874749 +0000 UTC m=+1707.242301468" watchObservedRunningTime="2025-10-11 10:54:32.458968619 +0000 UTC m=+1707.243395348"
Oct 11 10:54:32.534227 master-0 kubenswrapper[4790]: I1011 10:54:32.534113 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-e1bf-account-create-9qds8"]
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: E1011 10:54:32.534466 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd4a60e-f24a-48fe-afcb-c7ccab615f69" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534483 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd4a60e-f24a-48fe-afcb-c7ccab615f69" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: E1011 10:54:32.534527 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ddf95f-6e9c-4f3c-b742-87379c6594b2" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534544 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ddf95f-6e9c-4f3c-b742-87379c6594b2" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: E1011 10:54:32.534564 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a325c6-b9b6-495b-87dc-d6e12b3f1029" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534572 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a325c6-b9b6-495b-87dc-d6e12b3f1029" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534691 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a325c6-b9b6-495b-87dc-d6e12b3f1029" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534519 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbjx4\" (UniqueName: \"kubernetes.io/projected/08a325c6-b9b6-495b-87dc-d6e12b3f1029-kube-api-access-gbjx4\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534730 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd4a60e-f24a-48fe-afcb-c7ccab615f69" containerName="mariadb-account-create"
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534739 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45c75\" (UniqueName: \"kubernetes.io/projected/cdd4a60e-f24a-48fe-afcb-c7ccab615f69-kube-api-access-45c75\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:32.534909 master-0 kubenswrapper[4790]: I1011 10:54:32.534744 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ddf95f-6e9c-4f3c-b742-87379c6594b2" containerName="mariadb-account-create"
Oct 11 10:54:32.537560 master-0 kubenswrapper[4790]: I1011 10:54:32.537005 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-e1bf-account-create-9qds8"
Oct 11 10:54:32.542027 master-0 kubenswrapper[4790]: I1011 10:54:32.541957 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Oct 11 10:54:32.542397 master-0 kubenswrapper[4790]: I1011 10:54:32.542362 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-e1bf-account-create-9qds8"]
Oct 11 10:54:32.631555 master-0 kubenswrapper[4790]: I1011 10:54:32.631482 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9ac8-account-create-r5rxs" event={"ID":"cdd4a60e-f24a-48fe-afcb-c7ccab615f69","Type":"ContainerDied","Data":"d6ae65f963950836b1a177c858bc761c8e099ea553721fa061ba026946ca1a96"}
Oct 11 10:54:32.631555 master-0 kubenswrapper[4790]: I1011 10:54:32.631561 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ae65f963950836b1a177c858bc761c8e099ea553721fa061ba026946ca1a96"
Oct 11 10:54:32.632222 master-0 kubenswrapper[4790]: I1011 10:54:32.631588 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9ac8-account-create-r5rxs"
Oct 11 10:54:32.633129 master-0 kubenswrapper[4790]: I1011 10:54:32.633068 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2033-account-create-jh9gc" event={"ID":"08a325c6-b9b6-495b-87dc-d6e12b3f1029","Type":"ContainerDied","Data":"064a003115b13d4cbf140f9fcffdd93b2d44775212a0623054da53071870c839"}
Oct 11 10:54:32.633196 master-0 kubenswrapper[4790]: I1011 10:54:32.633112 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2033-account-create-jh9gc"
Oct 11 10:54:32.633271 master-0 kubenswrapper[4790]: I1011 10:54:32.633129 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064a003115b13d4cbf140f9fcffdd93b2d44775212a0623054da53071870c839"
Oct 11 10:54:32.636120 master-0 kubenswrapper[4790]: I1011 10:54:32.636046 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b634-account-create-vb2w7" event={"ID":"09ddf95f-6e9c-4f3c-b742-87379c6594b2","Type":"ContainerDied","Data":"4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089"}
Oct 11 10:54:32.636186 master-0 kubenswrapper[4790]: I1011 10:54:32.636122 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0da43bdbb6cb8eab5b1d1a5b54d52710e81cb9cc902f6c2bb9187ab577a089"
Oct 11 10:54:32.636186 master-0 kubenswrapper[4790]: I1011 10:54:32.636139 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxmr\" (UniqueName: \"kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr\") pod \"ironic-e1bf-account-create-9qds8\" (UID: \"03b3f6bf-ef4b-41fa-b098-fc5620a92300\") " pod="openstack/ironic-e1bf-account-create-9qds8"
Oct 11 10:54:32.636186 master-0 kubenswrapper[4790]: I1011 10:54:32.636160 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b634-account-create-vb2w7"
Oct 11 10:54:32.773257 master-0 kubenswrapper[4790]: I1011 10:54:32.773076 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxmr\" (UniqueName: \"kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr\") pod \"ironic-e1bf-account-create-9qds8\" (UID: \"03b3f6bf-ef4b-41fa-b098-fc5620a92300\") " pod="openstack/ironic-e1bf-account-create-9qds8"
Oct 11 10:54:32.798508 master-0 kubenswrapper[4790]: I1011 10:54:32.798453 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxmr\" (UniqueName: \"kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr\") pod \"ironic-e1bf-account-create-9qds8\" (UID: \"03b3f6bf-ef4b-41fa-b098-fc5620a92300\") " pod="openstack/ironic-e1bf-account-create-9qds8"
Oct 11 10:54:32.869287 master-0 kubenswrapper[4790]: I1011 10:54:32.869220 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-e1bf-account-create-9qds8"
Oct 11 10:54:32.928292 master-0 kubenswrapper[4790]: I1011 10:54:32.928219 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"]
Oct 11 10:54:32.932878 master-1 kubenswrapper[4771]: I1011 10:54:32.932805 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595686b98f-blmgp"
Oct 11 10:54:32.937260 master-0 kubenswrapper[4790]: W1011 10:54:32.937179 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0a5aa40_0146_4b81_83dd_761d514c557a.slice/crio-a7f963bb2db0e5602164d87267bd8833838260ebb96c568d5c14b7ec412ff1e4 WatchSource:0}: Error finding container a7f963bb2db0e5602164d87267bd8833838260ebb96c568d5c14b7ec412ff1e4: Status 404 returned error can't find the container with id a7f963bb2db0e5602164d87267bd8833838260ebb96c568d5c14b7ec412ff1e4
Oct 11 10:54:33.010134 master-1 kubenswrapper[4771]: I1011 10:54:33.010072 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"]
Oct 11 10:54:33.010856 master-1 kubenswrapper[4771]: I1011 10:54:33.010372 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="dnsmasq-dns" containerID="cri-o://2dcdb27cf0dbce506998b4c8cfe73f7847cd892689b076fcba313e976b8a5349" gracePeriod=10
Oct 11 10:54:33.108960 master-2 kubenswrapper[4776]: I1011 10:54:33.108894 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:54:33.115694 master-2 kubenswrapper[4776]: I1011 10:54:33.115623 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj"
Oct 11 10:54:33.116608 master-2 kubenswrapper[4776]: I1011 10:54:33.116557 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120151 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120352 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120479 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120584 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120743 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 10:54:33.278569 master-2 kubenswrapper[4776]: I1011 10:54:33.278506 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj"
Oct 11 10:54:33.278724 master-2 kubenswrapper[4776]: I1011 10:54:33.278630 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj"
Oct 11 10:54:33.278855 master-2 kubenswrapper[4776]: I1011
10:54:33.278811 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278900 master-2 kubenswrapper[4776]: I1011 10:54:33.278881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278946 master-2 kubenswrapper[4776]: I1011 10:54:33.278912 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.279047 master-2 kubenswrapper[4776]: I1011 10:54:33.279015 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.289918 master-0 kubenswrapper[4790]: I1011 10:54:33.289857 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-e1bf-account-create-9qds8"] Oct 11 10:54:33.309421 master-0 kubenswrapper[4790]: W1011 10:54:33.309354 4790 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03b3f6bf_ef4b_41fa_b098_fc5620a92300.slice/crio-06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671 WatchSource:0}: Error finding container 06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671: Status 404 returned error can't find the container with id 06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671 Oct 11 10:54:33.341380 master-2 kubenswrapper[4776]: I1011 10:54:33.341325 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:33.386966 master-2 kubenswrapper[4776]: I1011 10:54:33.386850 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.386966 master-2 kubenswrapper[4776]: I1011 10:54:33.386929 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387050 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387073 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.387901 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.388333 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.388790 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.391615 master-2 kubenswrapper[4776]: I1011 10:54:33.389238 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.391615 master-2 kubenswrapper[4776]: I1011 10:54:33.389464 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.423962 master-2 kubenswrapper[4776]: I1011 10:54:33.423472 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.433689 master-2 kubenswrapper[4776]: I1011 10:54:33.433630 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerDied","Data":"e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf"} Oct 11 10:54:33.434116 master-2 kubenswrapper[4776]: I1011 10:54:33.433720 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf" Oct 11 10:54:33.434116 master-2 kubenswrapper[4776]: 
I1011 10:54:33.433638 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:33.442144 master-2 kubenswrapper[4776]: I1011 10:54:33.442104 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.490509 master-2 kubenswrapper[4776]: I1011 10:54:33.490417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.490870 master-2 kubenswrapper[4776]: I1011 10:54:33.490593 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.490946 master-2 kubenswrapper[4776]: I1011 10:54:33.490928 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs" (OuterVolumeSpecName: "logs") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:33.492417 master-2 kubenswrapper[4776]: I1011 10:54:33.492239 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.492588 master-2 kubenswrapper[4776]: I1011 10:54:33.492572 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.492787 master-2 kubenswrapper[4776]: I1011 10:54:33.492764 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.493627 master-2 kubenswrapper[4776]: I1011 10:54:33.493608 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.495586 master-2 kubenswrapper[4776]: I1011 10:54:33.495550 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts" (OuterVolumeSpecName: "scripts") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.495699 master-2 kubenswrapper[4776]: I1011 10:54:33.495654 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v" (OuterVolumeSpecName: "kube-api-access-wkk8v") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "kube-api-access-wkk8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:33.526051 master-2 kubenswrapper[4776]: I1011 10:54:33.524931 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.527316 master-2 kubenswrapper[4776]: I1011 10:54:33.527223 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data" (OuterVolumeSpecName: "config-data") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: I1011 10:54:33.573717 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"] Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: E1011 10:54:33.574028 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: I1011 10:54:33.574040 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.577662 master-2 kubenswrapper[4776]: I1011 10:54:33.577610 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.578623 master-2 kubenswrapper[4776]: I1011 10:54:33.578607 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.582716 master-2 kubenswrapper[4776]: I1011 10:54:33.582130 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 11 10:54:33.582716 master-2 kubenswrapper[4776]: I1011 10:54:33.582542 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 11 10:54:33.598515 master-1 kubenswrapper[4771]: I1011 10:54:33.598318 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b597cbbf8-8j29d"] Oct 11 10:54:33.600169 master-1 kubenswrapper[4771]: I1011 10:54:33.600140 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598702 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598815 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598858 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598873 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.599302 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"] Oct 11 10:54:33.603565 master-1 kubenswrapper[4771]: I1011 10:54:33.603521 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 10:54:33.603679 master-1 kubenswrapper[4771]: I1011 10:54:33.603582 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 11 10:54:33.603679 master-1 kubenswrapper[4771]: I1011 10:54:33.603578 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 11 10:54:33.603826 master-1 kubenswrapper[4771]: I1011 10:54:33.603782 4771 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 10:54:33.616505 master-1 kubenswrapper[4771]: I1011 10:54:33.616422 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8j29d"] Oct 11 10:54:33.616685 master-0 kubenswrapper[4790]: I1011 10:54:33.614456 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b597cbbf8-mh4z2"] Oct 11 10:54:33.618895 master-0 kubenswrapper[4790]: I1011 10:54:33.617923 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:33.622511 master-0 kubenswrapper[4790]: I1011 10:54:33.622464 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 11 10:54:33.622511 master-0 kubenswrapper[4790]: I1011 10:54:33.622503 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 11 10:54:33.622860 master-0 kubenswrapper[4790]: I1011 10:54:33.622544 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 10:54:33.622860 master-0 kubenswrapper[4790]: I1011 10:54:33.622738 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 10:54:33.623294 master-0 kubenswrapper[4790]: I1011 10:54:33.622964 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-mh4z2"] Oct 11 10:54:33.647309 master-0 kubenswrapper[4790]: I1011 10:54:33.647224 4790 generic.go:334] "Generic (PLEG): container finished" podID="03b3f6bf-ef4b-41fa-b098-fc5620a92300" containerID="f18d3e7808bc3bd6d8d3dfcffe3def526d4ab16b836ac39e9bb14dfceb0d8247" exitCode=0 Oct 11 10:54:33.647975 master-0 kubenswrapper[4790]: I1011 10:54:33.647308 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-e1bf-account-create-9qds8" 
event={"ID":"03b3f6bf-ef4b-41fa-b098-fc5620a92300","Type":"ContainerDied","Data":"f18d3e7808bc3bd6d8d3dfcffe3def526d4ab16b836ac39e9bb14dfceb0d8247"} Oct 11 10:54:33.647975 master-0 kubenswrapper[4790]: I1011 10:54:33.647419 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-e1bf-account-create-9qds8" event={"ID":"03b3f6bf-ef4b-41fa-b098-fc5620a92300","Type":"ContainerStarted","Data":"06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671"} Oct 11 10:54:33.649007 master-0 kubenswrapper[4790]: I1011 10:54:33.648975 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerStarted","Data":"a7f963bb2db0e5602164d87267bd8833838260ebb96c568d5c14b7ec412ff1e4"} Oct 11 10:54:33.700697 master-2 kubenswrapper[4776]: I1011 10:54:33.700609 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700697 master-2 kubenswrapper[4776]: I1011 10:54:33.700693 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700735 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " 
pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700769 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700785 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700813 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700846 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.723284 master-1 kubenswrapper[4771]: I1011 10:54:33.723211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-combined-ca-bundle\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723314 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-internal-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-public-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723405 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-config-data\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723441 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwr6t\" (UniqueName: \"kubernetes.io/projected/39f6e33d-5313-461d-ac81-59ab693324e8-kube-api-access-vwr6t\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723467 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-scripts\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.723605 master-1 kubenswrapper[4771]: I1011 10:54:33.723568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f6e33d-5313-461d-ac81-59ab693324e8-logs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:54:33.793036 master-0 kubenswrapper[4790]: I1011 10:54:33.792969 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-config-data\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:33.793036 master-0 kubenswrapper[4790]: I1011 10:54:33.793039 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-combined-ca-bundle\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:33.793322 master-0 kubenswrapper[4790]: I1011 10:54:33.793099 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-public-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:33.793322 master-0 kubenswrapper[4790]: I1011 
10:54:33.793120 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7322b229-2c7a-4d99-a73b-f3612dc2670e-logs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.793416 master-0 kubenswrapper[4790]: I1011 10:54:33.793373 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrvk\" (UniqueName: \"kubernetes.io/projected/7322b229-2c7a-4d99-a73b-f3612dc2670e-kube-api-access-7mrvk\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.793495 master-0 kubenswrapper[4790]: I1011 10:54:33.793444 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-internal-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.793668 master-0 kubenswrapper[4790]: I1011 10:54:33.793628 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-scripts\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.802490 master-2 kubenswrapper[4776]: I1011 10:54:33.802441 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803354 master-2 kubenswrapper[4776]: I1011 10:54:33.803338 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803482 master-2 kubenswrapper[4776]: I1011 10:54:33.803468 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803606 master-2 kubenswrapper[4776]: I1011 10:54:33.803594 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803710 master-2 kubenswrapper[4776]: I1011 10:54:33.803697 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803802 master-2 kubenswrapper[4776]: I1011 10:54:33.803790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.803894 master-2 kubenswrapper[4776]: I1011 10:54:33.803882 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.805427 master-2 kubenswrapper[4776]: I1011 10:54:33.805382 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.807735 master-2 kubenswrapper[4776]: I1011 10:54:33.807715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.807859 master-2 kubenswrapper[4776]: I1011 10:54:33.807790 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.807936 master-2 kubenswrapper[4776]: I1011 10:54:33.807883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.808208 master-2 kubenswrapper[4776]: I1011 10:54:33.808192 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.808325 master-2 kubenswrapper[4776]: I1011 10:54:33.808299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.825033 master-1 kubenswrapper[4771]: I1011 10:54:33.824952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwr6t\" (UniqueName: \"kubernetes.io/projected/39f6e33d-5313-461d-ac81-59ab693324e8-kube-api-access-vwr6t\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825033 master-1 kubenswrapper[4771]: I1011 10:54:33.825039 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-scripts\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825355 master-2 kubenswrapper[4776]: I1011 10:54:33.825320 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.825439 master-1 kubenswrapper[4771]: I1011 10:54:33.825105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f6e33d-5313-461d-ac81-59ab693324e8-logs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825439 master-1 kubenswrapper[4771]: I1011 10:54:33.825141 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-combined-ca-bundle\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825439 master-1 kubenswrapper[4771]: I1011 10:54:33.825207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-internal-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825439 master-1 kubenswrapper[4771]: I1011 10:54:33.825234 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-public-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.825439 master-1 kubenswrapper[4771]: I1011 10:54:33.825282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-config-data\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.826546 master-1 kubenswrapper[4771]: I1011 10:54:33.826483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f6e33d-5313-461d-ac81-59ab693324e8-logs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.831330 master-1 kubenswrapper[4771]: I1011 10:54:33.831274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-scripts\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.831694 master-1 kubenswrapper[4771]: I1011 10:54:33.831653 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-config-data\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.831745 master-1 kubenswrapper[4771]: I1011 10:54:33.831686 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-combined-ca-bundle\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.833332 master-1 kubenswrapper[4771]: I1011 10:54:33.833299 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-public-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.834462 master-1 kubenswrapper[4771]: I1011 10:54:33.834428 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f6e33d-5313-461d-ac81-59ab693324e8-internal-tls-certs\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.848907 master-1 kubenswrapper[4771]: I1011 10:54:33.848720 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwr6t\" (UniqueName: \"kubernetes.io/projected/39f6e33d-5313-461d-ac81-59ab693324e8-kube-api-access-vwr6t\") pod \"placement-6b597cbbf8-8j29d\" (UID: \"39f6e33d-5313-461d-ac81-59ab693324e8\") " pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.895318 master-0 kubenswrapper[4790]: I1011 10:54:33.895069 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrvk\" (UniqueName: \"kubernetes.io/projected/7322b229-2c7a-4d99-a73b-f3612dc2670e-kube-api-access-7mrvk\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895318 master-0 kubenswrapper[4790]: I1011 10:54:33.895140 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-internal-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895318 master-0 kubenswrapper[4790]: I1011 10:54:33.895204 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-scripts\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895318 master-0 kubenswrapper[4790]: I1011 10:54:33.895270 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-config-data\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895318 master-0 kubenswrapper[4790]: I1011 10:54:33.895302 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-combined-ca-bundle\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895852 master-0 kubenswrapper[4790]: I1011 10:54:33.895356 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-public-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.895852 master-0 kubenswrapper[4790]: I1011 10:54:33.895381 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7322b229-2c7a-4d99-a73b-f3612dc2670e-logs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.896018 master-0 kubenswrapper[4790]: I1011 10:54:33.895889 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7322b229-2c7a-4d99-a73b-f3612dc2670e-logs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.900185 master-0 kubenswrapper[4790]: I1011 10:54:33.900135 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-scripts\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.900787 master-0 kubenswrapper[4790]: I1011 10:54:33.900754 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-config-data\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.901952 master-0 kubenswrapper[4790]: I1011 10:54:33.901902 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-internal-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.902517 master-0 kubenswrapper[4790]: I1011 10:54:33.902463 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-public-tls-certs\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.903900 master-0 kubenswrapper[4790]: I1011 10:54:33.903825 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7322b229-2c7a-4d99-a73b-f3612dc2670e-combined-ca-bundle\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.920889 master-0 kubenswrapper[4790]: I1011 10:54:33.920833 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrvk\" (UniqueName: \"kubernetes.io/projected/7322b229-2c7a-4d99-a73b-f3612dc2670e-kube-api-access-7mrvk\") pod \"placement-6b597cbbf8-mh4z2\" (UID: \"7322b229-2c7a-4d99-a73b-f3612dc2670e\") " pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:33.933503 master-1 kubenswrapper[4771]: I1011 10:54:33.932148 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:33.937475 master-2 kubenswrapper[4776]: I1011 10:54:33.937406 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: W1011 10:54:33.943939 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70447ad9_31f0_4f6a_8c40_19fbe8141ada.slice/crio-deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7 WatchSource:0}: Error finding container deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7: Status 404 returned error can't find the container with id deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7
Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: I1011 10:54:33.943982 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-nz82h"]
Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: I1011 10:54:33.944359 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-8sdfz"
Oct 11 10:54:33.945480 master-2 kubenswrapper[4776]: I1011 10:54:33.945443 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:33.948930 master-2 kubenswrapper[4776]: I1011 10:54:33.948861 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Oct 11 10:54:33.959404 master-2 kubenswrapper[4776]: I1011 10:54:33.956741 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz82h"]
Oct 11 10:54:33.968143 master-0 kubenswrapper[4790]: I1011 10:54:33.968080 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-mh4z2"
Oct 11 10:54:34.008807 master-2 kubenswrapper[4776]: I1011 10:54:34.008540 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.008807 master-2 kubenswrapper[4776]: I1011 10:54:34.008617 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.008944 master-2 kubenswrapper[4776]: I1011 10:54:34.008814 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.086082 master-1 kubenswrapper[4771]: I1011 10:54:34.085956 4771 generic.go:334] "Generic (PLEG): container finished" podID="30700706-219b-47c1-83cd-278584a3f182" containerID="2dcdb27cf0dbce506998b4c8cfe73f7847cd892689b076fcba313e976b8a5349" exitCode=0
Oct 11 10:54:34.086082 master-1 kubenswrapper[4771]: I1011 10:54:34.086038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" event={"ID":"30700706-219b-47c1-83cd-278584a3f182","Type":"ContainerDied","Data":"2dcdb27cf0dbce506998b4c8cfe73f7847cd892689b076fcba313e976b8a5349"}
Oct 11 10:54:34.111739 master-2 kubenswrapper[4776]: I1011 10:54:34.111668 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.111838 master-2 kubenswrapper[4776]: I1011 10:54:34.111812 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.111887 master-2 kubenswrapper[4776]: I1011 10:54:34.111855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.115769 master-2 kubenswrapper[4776]: I1011 10:54:34.115724 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.116068 master-2 kubenswrapper[4776]: I1011 10:54:34.116033 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.134631 master-2 kubenswrapper[4776]: I1011 10:54:34.134598 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.237906 master-2 kubenswrapper[4776]: I1011 10:54:34.237764 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"]
Oct 11 10:54:34.239070 master-2 kubenswrapper[4776]: I1011 10:54:34.239033 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.242024 master-2 kubenswrapper[4776]: I1011 10:54:34.241989 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts"
Oct 11 10:54:34.242264 master-2 kubenswrapper[4776]: I1011 10:54:34.242217 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data"
Oct 11 10:54:34.254310 master-2 kubenswrapper[4776]: I1011 10:54:34.254258 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"]
Oct 11 10:54:34.269928 master-2 kubenswrapper[4776]: I1011 10:54:34.269872 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz82h"
Oct 11 10:54:34.314482 master-2 kubenswrapper[4776]: I1011 10:54:34.314428 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.314656 master-2 kubenswrapper[4776]: I1011 10:54:34.314510 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.314974 master-2 kubenswrapper[4776]: I1011 10:54:34.314936 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.315041 master-2 kubenswrapper[4776]: I1011 10:54:34.315020 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.315151 master-2 kubenswrapper[4776]: I1011 10:54:34.315131 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.315264 master-2 kubenswrapper[4776]: I1011 10:54:34.315244 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.414163 master-0 kubenswrapper[4790]: I1011 10:54:34.414098 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-mh4z2"]
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416571 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416637 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416779 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416801 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.418614 master-2 kubenswrapper[4776]: I1011 10:54:34.416812 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.420309 master-2 kubenswrapper[4776]: I1011 10:54:34.420264 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.421522 master-2 kubenswrapper[4776]: I1011 10:54:34.421483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.421740 master-2 kubenswrapper[4776]: I1011 10:54:34.421222 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.422965 master-2 kubenswrapper[4776]: I1011 10:54:34.422919 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.440867 master-2 kubenswrapper[4776]: I1011 10:54:34.440818 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.448770 master-2 kubenswrapper[4776]: I1011 10:54:34.448712 4776 generic.go:334] "Generic (PLEG): container finished" podID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerID="304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9" exitCode=0
Oct 11 10:54:34.448862 master-2 kubenswrapper[4776]: I1011 10:54:34.448789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9"}
Oct 11 10:54:34.448900 master-2 kubenswrapper[4776]: I1011 10:54:34.448879 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerStarted","Data":"deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7"}
Oct 11 10:54:34.454624 master-2 kubenswrapper[4776]: I1011 10:54:34.454581 4776 generic.go:334] "Generic (PLEG): container finished" podID="30996a86-1b86-4a67-bfea-0e63f7417196" containerID="7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc" exitCode=0
Oct 11 10:54:34.454725 master-2 kubenswrapper[4776]: I1011 10:54:34.454644 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerDied","Data":"7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc"}
Oct 11 10:54:34.525153 master-2 kubenswrapper[4776]: I1011 10:54:34.525023 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2dgxj"]
Oct 11 10:54:34.526360 master-2 kubenswrapper[4776]: I1011 10:54:34.526323 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.529457 master-2 kubenswrapper[4776]: I1011 10:54:34.529351 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 11 10:54:34.530653 master-2 kubenswrapper[4776]: I1011 10:54:34.529564 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 11 10:54:34.538813 master-2 kubenswrapper[4776]: I1011 10:54:34.538778 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2dgxj"]
Oct 11 10:54:34.557578 master-2 kubenswrapper[4776]: I1011 10:54:34.557515 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r"
Oct 11 10:54:34.581783 master-2 kubenswrapper[4776]: I1011 10:54:34.581744 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"]
Oct 11 10:54:34.587109 master-2 kubenswrapper[4776]: W1011 10:54:34.587058 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2cc5d2d_290c_487f_a0f9_3095b17e1fcb.slice/crio-ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097 WatchSource:0}: Error finding container ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097: Status 404 returned error can't find the container with id ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097
Oct 11 10:54:34.621776 master-2 kubenswrapper[4776]: I1011 10:54:34.621667 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.622165 master-2 kubenswrapper[4776]: I1011 10:54:34.621820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.622165 master-2 kubenswrapper[4776]: I1011 10:54:34.621856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.658839 master-0 kubenswrapper[4790]: I1011 10:54:34.658694 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-mh4z2" event={"ID":"7322b229-2c7a-4d99-a73b-f3612dc2670e","Type":"ContainerStarted","Data":"6768bfbd8eb1927a0a3fde9b51e4a592023d348c1f0cfddf53347324b56df409"}
Oct 11 10:54:34.692409 master-2 kubenswrapper[4776]: I1011 10:54:34.692352 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz82h"]
Oct 11 10:54:34.698535 master-2 kubenswrapper[4776]: W1011 10:54:34.698208 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod005f2579_b848_40fd_b3f3_2d3383344047.slice/crio-cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960 WatchSource:0}: Error finding container cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960: Status 404 returned error can't find the container with id cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960
Oct 11 10:54:34.724922 master-2 kubenswrapper[4776]: I1011 10:54:34.724857 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.725142 master-2 kubenswrapper[4776]: I1011 10:54:34.724961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.725142 master-2 kubenswrapper[4776]: I1011 10:54:34.724993 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.732297 master-2 kubenswrapper[4776]: I1011 10:54:34.732172 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.732909 master-2 kubenswrapper[4776]: I1011 10:54:34.732878 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj"
Oct 11 10:54:34.749478 master-2 kubenswrapper[4776]: I1011 10:54:34.749428 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod
\"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.881499 master-2 kubenswrapper[4776]: I1011 10:54:34.881447 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:35.034425 master-2 kubenswrapper[4776]: W1011 10:54:35.033352 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f2a1bf_160f_40ad_bc2c_a7286a90b988.slice/crio-fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca WatchSource:0}: Error finding container fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca: Status 404 returned error can't find the container with id fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca Oct 11 10:54:35.051879 master-2 kubenswrapper[4776]: I1011 10:54:35.051835 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 10:54:35.151321 master-0 kubenswrapper[4790]: I1011 10:54:35.151267 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-e1bf-account-create-9qds8" Oct 11 10:54:35.331941 master-2 kubenswrapper[4776]: I1011 10:54:35.331550 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 10:54:35.332123 master-2 kubenswrapper[4776]: W1011 10:54:35.331991 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90a5c6e_6cd2_4396_b38c_dc0e03da9d38.slice/crio-f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9 WatchSource:0}: Error finding container f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9: Status 404 returned error can't find the container with id f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9 Oct 11 10:54:35.336885 master-0 kubenswrapper[4790]: I1011 10:54:35.336826 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqxmr\" (UniqueName: \"kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr\") pod \"03b3f6bf-ef4b-41fa-b098-fc5620a92300\" (UID: \"03b3f6bf-ef4b-41fa-b098-fc5620a92300\") " Oct 11 10:54:35.340218 master-0 kubenswrapper[4790]: I1011 10:54:35.340132 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr" (OuterVolumeSpecName: "kube-api-access-lqxmr") pod "03b3f6bf-ef4b-41fa-b098-fc5620a92300" (UID: "03b3f6bf-ef4b-41fa-b098-fc5620a92300"). InnerVolumeSpecName "kube-api-access-lqxmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:35.439149 master-0 kubenswrapper[4790]: I1011 10:54:35.439100 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqxmr\" (UniqueName: \"kubernetes.io/projected/03b3f6bf-ef4b-41fa-b098-fc5620a92300-kube-api-access-lqxmr\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:35.467931 master-2 kubenswrapper[4776]: I1011 10:54:35.467880 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerStarted","Data":"f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9"} Oct 11 10:54:35.470325 master-2 kubenswrapper[4776]: I1011 10:54:35.470304 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerStarted","Data":"459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d"} Oct 11 10:54:35.470472 master-2 kubenswrapper[4776]: I1011 10:54:35.470446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:35.474191 master-2 kubenswrapper[4776]: I1011 10:54:35.474126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"cd4fe9616d55b89de28b5a72a4c471009475f011558061464d4070a257189b82"} Oct 11 10:54:35.474191 master-2 kubenswrapper[4776]: I1011 10:54:35.474176 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"1336b4e886e0824650469ba3f4a22a5540aa9fdd626ba77cdbde215582498998"} Oct 11 10:54:35.474287 master-2 kubenswrapper[4776]: I1011 10:54:35.474194 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097"} Oct 11 10:54:35.474287 master-2 kubenswrapper[4776]: I1011 10:54:35.474231 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:35.474350 master-2 kubenswrapper[4776]: I1011 10:54:35.474319 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:35.475681 master-2 kubenswrapper[4776]: I1011 10:54:35.475623 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerStarted","Data":"fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca"} Oct 11 10:54:35.476830 master-2 kubenswrapper[4776]: I1011 10:54:35.476799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerStarted","Data":"cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960"} Oct 11 10:54:35.502528 master-2 kubenswrapper[4776]: I1011 10:54:35.501953 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podStartSLOduration=2.501935446 podStartE2EDuration="2.501935446s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:35.498436651 +0000 UTC m=+1710.282863360" watchObservedRunningTime="2025-10-11 10:54:35.501935446 +0000 UTC m=+1710.286362155" Oct 11 10:54:35.543197 master-2 kubenswrapper[4776]: I1011 10:54:35.543118 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b597cbbf8-8sdfz" 
podStartSLOduration=2.543099221 podStartE2EDuration="2.543099221s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:35.535602278 +0000 UTC m=+1710.320028987" watchObservedRunningTime="2025-10-11 10:54:35.543099221 +0000 UTC m=+1710.327525930" Oct 11 10:54:35.668487 master-0 kubenswrapper[4790]: I1011 10:54:35.668336 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-e1bf-account-create-9qds8" event={"ID":"03b3f6bf-ef4b-41fa-b098-fc5620a92300","Type":"ContainerDied","Data":"06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671"} Oct 11 10:54:35.668487 master-0 kubenswrapper[4790]: I1011 10:54:35.668391 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d9bd3df217f4f9548cd17d34579c4352b757d706e48b231fa744d232a50671" Oct 11 10:54:35.668487 master-0 kubenswrapper[4790]: I1011 10:54:35.668466 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-e1bf-account-create-9qds8" Oct 11 10:54:36.407713 master-2 kubenswrapper[4776]: I1011 10:54:36.407648 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:36.454956 master-2 kubenswrapper[4776]: I1011 10:54:36.454894 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.455198 master-2 kubenswrapper[4776]: I1011 10:54:36.454963 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.489486 master-2 kubenswrapper[4776]: I1011 10:54:36.489433 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.493574 master-2 kubenswrapper[4776]: I1011 10:54:36.493544 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.498855 master-2 kubenswrapper[4776]: I1011 10:54:36.498812 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.498942 master-2 kubenswrapper[4776]: I1011 10:54:36.498926 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499010 master-2 kubenswrapper[4776]: I1011 10:54:36.498993 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499046 master-2 kubenswrapper[4776]: I1011 10:54:36.499024 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499089 master-2 kubenswrapper[4776]: I1011 10:54:36.499053 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.517762 master-2 kubenswrapper[4776]: I1011 10:54:36.502838 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts" (OuterVolumeSpecName: "scripts") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.518319 master-2 kubenswrapper[4776]: I1011 10:54:36.518271 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.529982 master-2 kubenswrapper[4776]: I1011 10:54:36.529830 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9" (OuterVolumeSpecName: "kube-api-access-kb4d9") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "kube-api-access-kb4d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:36.532153 master-2 kubenswrapper[4776]: I1011 10:54:36.532101 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerDied","Data":"740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186"} Oct 11 10:54:36.532153 master-2 kubenswrapper[4776]: I1011 10:54:36.532151 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186" Oct 11 10:54:36.532270 master-2 kubenswrapper[4776]: I1011 10:54:36.532234 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:36.539946 master-2 kubenswrapper[4776]: I1011 10:54:36.539824 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.553323 master-2 kubenswrapper[4776]: I1011 10:54:36.550749 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerStarted","Data":"b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a"} Oct 11 10:54:36.553323 master-2 kubenswrapper[4776]: I1011 10:54:36.551600 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.578542 master-2 kubenswrapper[4776]: I1011 10:54:36.578486 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data" (OuterVolumeSpecName: "config-data") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.585089 master-2 kubenswrapper[4776]: I1011 10:54:36.584948 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.599168 master-2 kubenswrapper[4776]: I1011 10:54:36.598252 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2dgxj" podStartSLOduration=2.598202427 podStartE2EDuration="2.598202427s" podCreationTimestamp="2025-10-11 10:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:36.591466315 +0000 UTC m=+1711.375893024" watchObservedRunningTime="2025-10-11 10:54:36.598202427 +0000 UTC m=+1711.382629136" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604044 4776 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604087 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604101 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604113 4776 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604124 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.697937 master-2 kubenswrapper[4776]: I1011 10:54:36.697879 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.709796 master-2 kubenswrapper[4776]: I1011 10:54:36.709750 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.746242 master-1 kubenswrapper[4771]: I1011 10:54:36.745980 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-848fcbb4df-cn592"] Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.747611 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: E1011 10:54:36.747955 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.747968 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.748131 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.749205 master-1 kubenswrapper[4771]: I1011 10:54:36.749141 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.749322 master-2 kubenswrapper[4776]: I1011 10:54:36.749190 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.755748 master-2 kubenswrapper[4776]: I1011 10:54:36.754217 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 11 10:54:36.756484 master-1 kubenswrapper[4771]: I1011 10:54:36.755141 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 11 10:54:36.756484 master-1 kubenswrapper[4771]: I1011 10:54:36.755416 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 11 10:54:36.756484 master-1 kubenswrapper[4771]: I1011 10:54:36.755555 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 10:54:36.757958 master-1 kubenswrapper[4771]: I1011 10:54:36.757929 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 10:54:36.758176 master-1 kubenswrapper[4771]: I1011 10:54:36.758154 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 10:54:36.759308 master-1 kubenswrapper[4771]: I1011 10:54:36.759259 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-cn592"] Oct 11 10:54:36.759626 master-2 kubenswrapper[4776]: I1011 10:54:36.759604 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 11 10:54:36.762930 master-0 kubenswrapper[4790]: I1011 10:54:36.761783 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-848fcbb4df-dr4lc"] Oct 11 10:54:36.762930 master-0 kubenswrapper[4790]: E1011 10:54:36.762117 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b3f6bf-ef4b-41fa-b098-fc5620a92300" containerName="mariadb-account-create" Oct 11 10:54:36.762930 master-0 kubenswrapper[4790]: I1011 10:54:36.762132 4790 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="03b3f6bf-ef4b-41fa-b098-fc5620a92300" containerName="mariadb-account-create" Oct 11 10:54:36.762930 master-0 kubenswrapper[4790]: I1011 10:54:36.762284 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b3f6bf-ef4b-41fa-b098-fc5620a92300" containerName="mariadb-account-create" Oct 11 10:54:36.763781 master-0 kubenswrapper[4790]: I1011 10:54:36.763000 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.765691 master-0 kubenswrapper[4790]: I1011 10:54:36.765648 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 11 10:54:36.766836 master-0 kubenswrapper[4790]: I1011 10:54:36.766776 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-public-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.766897 master-0 kubenswrapper[4790]: I1011 10:54:36.766839 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmmz\" (UniqueName: \"kubernetes.io/projected/bc0250a9-8454-4716-8e79-36166266decb-kube-api-access-kcmmz\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.766897 master-0 kubenswrapper[4790]: I1011 10:54:36.766873 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-combined-ca-bundle\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.766970 master-0 
kubenswrapper[4790]: I1011 10:54:36.766930 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-internal-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.766970 master-0 kubenswrapper[4790]: I1011 10:54:36.766964 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-fernet-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.767034 master-0 kubenswrapper[4790]: I1011 10:54:36.766998 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-scripts\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.767139 master-0 kubenswrapper[4790]: I1011 10:54:36.767090 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-config-data\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.767139 master-0 kubenswrapper[4790]: I1011 10:54:36.767132 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-credential-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " 
pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.768338 master-0 kubenswrapper[4790]: I1011 10:54:36.767951 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 10:54:36.770121 master-0 kubenswrapper[4790]: I1011 10:54:36.769606 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 11 10:54:36.770121 master-0 kubenswrapper[4790]: I1011 10:54:36.769844 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 10:54:36.770121 master-0 kubenswrapper[4790]: I1011 10:54:36.769965 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 10:54:36.780438 master-0 kubenswrapper[4790]: I1011 10:54:36.780370 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-dr4lc"] Oct 11 10:54:36.784214 master-2 kubenswrapper[4776]: I1011 10:54:36.783598 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:36.799255 master-1 kubenswrapper[4771]: I1011 10:54:36.799134 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-combined-ca-bundle\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.800503 master-1 kubenswrapper[4771]: I1011 10:54:36.800476 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-credential-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.800645 master-1 kubenswrapper[4771]: I1011 10:54:36.800631 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-config-data\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.800754 master-1 kubenswrapper[4771]: I1011 10:54:36.800741 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-scripts\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.800940 master-1 kubenswrapper[4771]: I1011 10:54:36.800923 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qxc\" (UniqueName: \"kubernetes.io/projected/0c78b078-6372-4692-8a56-d9aee58bffb8-kube-api-access-n6qxc\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.801453 master-1 kubenswrapper[4771]: I1011 10:54:36.801372 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-public-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.801453 master-1 kubenswrapper[4771]: I1011 10:54:36.801437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-internal-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 
10:54:36.801832 master-1 kubenswrapper[4771]: I1011 10:54:36.801802 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-fernet-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.868689 master-0 kubenswrapper[4790]: I1011 10:54:36.868597 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmmz\" (UniqueName: \"kubernetes.io/projected/bc0250a9-8454-4716-8e79-36166266decb-kube-api-access-kcmmz\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.868689 master-0 kubenswrapper[4790]: I1011 10:54:36.868662 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-public-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.868689 master-0 kubenswrapper[4790]: I1011 10:54:36.868695 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-combined-ca-bundle\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.869164 master-0 kubenswrapper[4790]: I1011 10:54:36.868745 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-internal-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " 
pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.869452 master-0 kubenswrapper[4790]: I1011 10:54:36.868774 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-fernet-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.869527 master-0 kubenswrapper[4790]: I1011 10:54:36.869469 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-scripts\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.869527 master-0 kubenswrapper[4790]: I1011 10:54:36.869518 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-config-data\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.869643 master-0 kubenswrapper[4790]: I1011 10:54:36.869546 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-credential-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.873308 master-0 kubenswrapper[4790]: I1011 10:54:36.873264 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-public-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.873782 
master-0 kubenswrapper[4790]: I1011 10:54:36.873746 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-scripts\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.873871 master-0 kubenswrapper[4790]: I1011 10:54:36.873739 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-config-data\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.874395 master-0 kubenswrapper[4790]: I1011 10:54:36.874351 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-internal-tls-certs\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.874395 master-0 kubenswrapper[4790]: I1011 10:54:36.874386 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-fernet-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.875626 master-0 kubenswrapper[4790]: I1011 10:54:36.874902 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-combined-ca-bundle\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.879269 master-0 kubenswrapper[4790]: I1011 10:54:36.879074 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0250a9-8454-4716-8e79-36166266decb-credential-keys\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.893325 master-0 kubenswrapper[4790]: I1011 10:54:36.893227 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmmz\" (UniqueName: \"kubernetes.io/projected/bc0250a9-8454-4716-8e79-36166266decb-kube-api-access-kcmmz\") pod \"keystone-848fcbb4df-dr4lc\" (UID: \"bc0250a9-8454-4716-8e79-36166266decb\") " pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:36.904630 master-1 kubenswrapper[4771]: I1011 10:54:36.904560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-combined-ca-bundle\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905044 master-1 kubenswrapper[4771]: I1011 10:54:36.905024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-credential-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905146 master-1 kubenswrapper[4771]: I1011 10:54:36.905132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-config-data\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905301 master-1 kubenswrapper[4771]: I1011 10:54:36.905284 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-scripts\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905425 master-1 kubenswrapper[4771]: I1011 10:54:36.905409 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6qxc\" (UniqueName: \"kubernetes.io/projected/0c78b078-6372-4692-8a56-d9aee58bffb8-kube-api-access-n6qxc\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905596 master-1 kubenswrapper[4771]: I1011 10:54:36.905580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-public-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905697 master-1 kubenswrapper[4771]: I1011 10:54:36.905683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-internal-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.905865 master-1 kubenswrapper[4771]: I1011 10:54:36.905847 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-fernet-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.909205 master-1 kubenswrapper[4771]: I1011 10:54:36.908962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-combined-ca-bundle\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.909205 master-1 kubenswrapper[4771]: I1011 10:54:36.909153 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-scripts\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.909892 master-1 kubenswrapper[4771]: I1011 10:54:36.909847 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-credential-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.910945 master-1 kubenswrapper[4771]: I1011 10:54:36.910783 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-internal-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.911069 master-1 kubenswrapper[4771]: I1011 10:54:36.911000 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-public-tls-certs\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.912755 master-1 kubenswrapper[4771]: I1011 10:54:36.912682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-fernet-keys\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.914034 master-2 kubenswrapper[4776]: I1011 10:54:36.913966 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914098 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914210 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914354 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914415 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914622 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.923620 master-1 kubenswrapper[4771]: I1011 10:54:36.914851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78b078-6372-4692-8a56-d9aee58bffb8-config-data\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:36.928250 master-1 kubenswrapper[4771]: I1011 10:54:36.928203 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6qxc\" (UniqueName: \"kubernetes.io/projected/0c78b078-6372-4692-8a56-d9aee58bffb8-kube-api-access-n6qxc\") pod \"keystone-848fcbb4df-cn592\" (UID: \"0c78b078-6372-4692-8a56-d9aee58bffb8\") " pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:37.016376 master-2 kubenswrapper[4776]: I1011 10:54:37.016333 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.016376 master-2 kubenswrapper[4776]: I1011 10:54:37.016381 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016449 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016499 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016555 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016591 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016658 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.021638 master-2 kubenswrapper[4776]: I1011 10:54:37.021590 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.021638 master-2 kubenswrapper[4776]: I1011 10:54:37.021625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.022174 master-2 kubenswrapper[4776]: I1011 10:54:37.022126 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.023196 master-2 kubenswrapper[4776]: I1011 10:54:37.022398 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.024362 master-2 kubenswrapper[4776]: I1011 10:54:37.024320 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.024556 master-2 kubenswrapper[4776]: I1011 10:54:37.024522 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.027469 master-2 kubenswrapper[4776]: I1011 10:54:37.027299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.039262 master-2 kubenswrapper[4776]: I1011 10:54:37.038799 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.066882 master-2 kubenswrapper[4776]: I1011 10:54:37.066829 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.072458 master-1 kubenswrapper[4771]: I1011 10:54:37.072322 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-cn592" Oct 11 10:54:37.082156 master-0 kubenswrapper[4790]: I1011 10:54:37.082072 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:37.510912 master-2 kubenswrapper[4776]: W1011 10:54:37.505151 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec8aba5_db0a_4c5f_a876_c513af95f945.slice/crio-f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543 WatchSource:0}: Error finding container f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543: Status 404 returned error can't find the container with id f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543 Oct 11 10:54:37.510912 master-2 kubenswrapper[4776]: I1011 10:54:37.505558 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:37.566618 master-2 kubenswrapper[4776]: I1011 10:54:37.566518 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-54n8l" event={"ID":"2ec8aba5-db0a-4c5f-a876-c513af95f945","Type":"ContainerStarted","Data":"f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543"} Oct 11 10:54:37.566618 master-2 kubenswrapper[4776]: I1011 10:54:37.566594 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:37.706601 master-2 kubenswrapper[4776]: I1011 10:54:37.706553 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:37.709956 master-2 kubenswrapper[4776]: I1011 10:54:37.709572 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.714702 master-2 kubenswrapper[4776]: I1011 10:54:37.714143 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Oct 11 10:54:37.714702 master-2 kubenswrapper[4776]: I1011 10:54:37.714358 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:54:37.718706 master-2 kubenswrapper[4776]: I1011 10:54:37.718630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:37.834772 master-2 kubenswrapper[4776]: I1011 10:54:37.834720 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.834963 master-2 kubenswrapper[4776]: I1011 10:54:37.834791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.834963 master-2 kubenswrapper[4776]: I1011 10:54:37.834856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835048 master-2 kubenswrapper[4776]: I1011 10:54:37.834978 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835048 master-2 kubenswrapper[4776]: I1011 10:54:37.835012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835481 master-2 kubenswrapper[4776]: I1011 10:54:37.835318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936653 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936771 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.937711 master-2 kubenswrapper[4776]: I1011 10:54:37.937668 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941080 master-2 kubenswrapper[4776]: I1011 10:54:37.941058 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod 
\"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941180 master-2 kubenswrapper[4776]: I1011 10:54:37.941062 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941642 master-2 kubenswrapper[4776]: I1011 10:54:37.941612 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941764 master-2 kubenswrapper[4776]: I1011 10:54:37.941727 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.961568 master-2 kubenswrapper[4776]: I1011 10:54:37.961391 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:38.031414 master-2 kubenswrapper[4776]: I1011 10:54:38.031351 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:38.486313 master-2 kubenswrapper[4776]: I1011 10:54:38.481639 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:38.505088 master-2 kubenswrapper[4776]: W1011 10:54:38.504922 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1c4d38_1f25_4465_9976_43be28a3b282.slice/crio-08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789 WatchSource:0}: Error finding container 08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789: Status 404 returned error can't find the container with id 08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789 Oct 11 10:54:38.576076 master-2 kubenswrapper[4776]: I1011 10:54:38.576029 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerStarted","Data":"08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789"} Oct 11 10:54:38.578614 master-2 kubenswrapper[4776]: I1011 10:54:38.578369 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:38.579952 master-2 kubenswrapper[4776]: I1011 10:54:38.579477 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-54n8l" event={"ID":"2ec8aba5-db0a-4c5f-a876-c513af95f945","Type":"ContainerStarted","Data":"2f2b5056a73fd316db0802a3f82120d51f77b82d3025ad8fc6ca4a8cf2a50912"} Oct 11 10:54:38.579952 master-2 kubenswrapper[4776]: I1011 10:54:38.579568 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:38.797636 master-2 kubenswrapper[4776]: I1011 10:54:38.797479 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:38.803618 master-2 
kubenswrapper[4776]: I1011 10:54:38.803559 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:38.836690 master-2 kubenswrapper[4776]: I1011 10:54:38.836585 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-848fcbb4df-54n8l" podStartSLOduration=2.836563102 podStartE2EDuration="2.836563102s" podCreationTimestamp="2025-10-11 10:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:38.636042342 +0000 UTC m=+1713.420469061" watchObservedRunningTime="2025-10-11 10:54:38.836563102 +0000 UTC m=+1713.620989811" Oct 11 10:54:39.922991 master-2 kubenswrapper[4776]: I1011 10:54:39.919069 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:39.922991 master-2 kubenswrapper[4776]: I1011 10:54:39.919131 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:39.949325 master-2 kubenswrapper[4776]: I1011 10:54:39.949256 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:39.966406 master-2 kubenswrapper[4776]: I1011 10:54:39.966346 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:40.599959 master-2 kubenswrapper[4776]: I1011 10:54:40.599837 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:40.600111 master-2 kubenswrapper[4776]: I1011 10:54:40.600057 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:40.704488 master-1 
kubenswrapper[4771]: I1011 10:54:40.704372 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 11 10:54:40.710992 master-1 kubenswrapper[4771]: I1011 10:54:40.710918 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 11 10:54:40.987136 master-1 kubenswrapper[4771]: I1011 10:54:40.984506 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" Oct 11 10:54:41.004098 master-1 kubenswrapper[4771]: I1011 10:54:41.004054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb\") pod \"30700706-219b-47c1-83cd-278584a3f182\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " Oct 11 10:54:41.004270 master-1 kubenswrapper[4771]: I1011 10:54:41.004165 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc\") pod \"30700706-219b-47c1-83cd-278584a3f182\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " Oct 11 10:54:41.004342 master-1 kubenswrapper[4771]: I1011 10:54:41.004322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb\") pod \"30700706-219b-47c1-83cd-278584a3f182\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " Oct 11 10:54:41.004410 master-1 kubenswrapper[4771]: I1011 10:54:41.004344 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m5q5\" (UniqueName: \"kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5\") pod \"30700706-219b-47c1-83cd-278584a3f182\" (UID: 
\"30700706-219b-47c1-83cd-278584a3f182\") " Oct 11 10:54:41.004410 master-1 kubenswrapper[4771]: I1011 10:54:41.004397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config\") pod \"30700706-219b-47c1-83cd-278584a3f182\" (UID: \"30700706-219b-47c1-83cd-278584a3f182\") " Oct 11 10:54:41.033295 master-1 kubenswrapper[4771]: I1011 10:54:41.033218 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5" (OuterVolumeSpecName: "kube-api-access-2m5q5") pod "30700706-219b-47c1-83cd-278584a3f182" (UID: "30700706-219b-47c1-83cd-278584a3f182"). InnerVolumeSpecName "kube-api-access-2m5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:41.056205 master-1 kubenswrapper[4771]: I1011 10:54:41.056132 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config" (OuterVolumeSpecName: "config") pod "30700706-219b-47c1-83cd-278584a3f182" (UID: "30700706-219b-47c1-83cd-278584a3f182"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:41.061325 master-1 kubenswrapper[4771]: I1011 10:54:41.061185 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30700706-219b-47c1-83cd-278584a3f182" (UID: "30700706-219b-47c1-83cd-278584a3f182"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:41.081629 master-1 kubenswrapper[4771]: I1011 10:54:41.081563 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30700706-219b-47c1-83cd-278584a3f182" (UID: "30700706-219b-47c1-83cd-278584a3f182"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:41.087381 master-1 kubenswrapper[4771]: I1011 10:54:41.086827 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30700706-219b-47c1-83cd-278584a3f182" (UID: "30700706-219b-47c1-83cd-278584a3f182"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:41.106451 master-1 kubenswrapper[4771]: I1011 10:54:41.106322 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:41.106451 master-1 kubenswrapper[4771]: I1011 10:54:41.106382 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:41.106451 master-1 kubenswrapper[4771]: I1011 10:54:41.106398 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:41.106451 master-1 kubenswrapper[4771]: I1011 10:54:41.106409 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/30700706-219b-47c1-83cd-278584a3f182-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:41.106451 master-1 kubenswrapper[4771]: I1011 10:54:41.106424 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m5q5\" (UniqueName: \"kubernetes.io/projected/30700706-219b-47c1-83cd-278584a3f182-kube-api-access-2m5q5\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:41.143441 master-1 kubenswrapper[4771]: I1011 10:54:41.143386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" event={"ID":"30700706-219b-47c1-83cd-278584a3f182","Type":"ContainerDied","Data":"cc8ace77e3138b8e3e45d04fc9090a4ecc543f54342ec2f309cdbc89855a76b5"} Oct 11 10:54:41.143441 master-1 kubenswrapper[4771]: I1011 10:54:41.143443 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" Oct 11 10:54:41.143679 master-1 kubenswrapper[4771]: I1011 10:54:41.143486 4771 scope.go:117] "RemoveContainer" containerID="2dcdb27cf0dbce506998b4c8cfe73f7847cd892689b076fcba313e976b8a5349" Oct 11 10:54:41.147768 master-1 kubenswrapper[4771]: I1011 10:54:41.147665 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerStarted","Data":"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd"} Oct 11 10:54:41.153077 master-1 kubenswrapper[4771]: I1011 10:54:41.153033 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 11 10:54:41.191974 master-1 kubenswrapper[4771]: I1011 10:54:41.191938 4771 scope.go:117] "RemoveContainer" containerID="06ae06abec101801ffcb11de5d066d694be4874cbd2110b56c80026a91417fc8" Oct 11 10:54:41.232944 master-1 kubenswrapper[4771]: I1011 10:54:41.232881 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"] Oct 11 10:54:41.237114 master-1 kubenswrapper[4771]: I1011 10:54:41.237056 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-vjhdt"] Oct 11 10:54:41.337765 master-1 kubenswrapper[4771]: I1011 10:54:41.337705 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-cn592"] Oct 11 10:54:41.340549 master-1 kubenswrapper[4771]: W1011 10:54:41.340481 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c78b078_6372_4692_8a56_d9aee58bffb8.slice/crio-349a33e83700cb100d3c7dfe2d13003eb2f27916ff1d46e4fac294cad84acdcc WatchSource:0}: Error finding container 349a33e83700cb100d3c7dfe2d13003eb2f27916ff1d46e4fac294cad84acdcc: Status 404 returned error can't find the container with id 349a33e83700cb100d3c7dfe2d13003eb2f27916ff1d46e4fac294cad84acdcc Oct 11 10:54:41.442604 master-1 kubenswrapper[4771]: W1011 10:54:41.442540 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39f6e33d_5313_461d_ac81_59ab693324e8.slice/crio-c0b23cbb30962b0597a0653b7895f57534e0520c3a6e69c040624aa8e0977731 WatchSource:0}: Error finding container c0b23cbb30962b0597a0653b7895f57534e0520c3a6e69c040624aa8e0977731: Status 404 returned error can't find the container with id c0b23cbb30962b0597a0653b7895f57534e0520c3a6e69c040624aa8e0977731 Oct 11 10:54:41.443258 master-1 kubenswrapper[4771]: I1011 10:54:41.442862 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8j29d"] Oct 11 10:54:42.177725 master-1 kubenswrapper[4771]: I1011 10:54:42.177627 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8j29d" 
event={"ID":"39f6e33d-5313-461d-ac81-59ab693324e8","Type":"ContainerStarted","Data":"c0b23cbb30962b0597a0653b7895f57534e0520c3a6e69c040624aa8e0977731"} Oct 11 10:54:42.183374 master-1 kubenswrapper[4771]: I1011 10:54:42.183277 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerStarted","Data":"2d4fd9e07f37d7d0e4c5b7147d47642c209dd291fd8ea33730298efb1acb5aa4"} Oct 11 10:54:42.191933 master-1 kubenswrapper[4771]: I1011 10:54:42.191776 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-cn592" event={"ID":"0c78b078-6372-4692-8a56-d9aee58bffb8","Type":"ContainerStarted","Data":"349a33e83700cb100d3c7dfe2d13003eb2f27916ff1d46e4fac294cad84acdcc"} Oct 11 10:54:42.199974 master-1 kubenswrapper[4771]: I1011 10:54:42.199909 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerStarted","Data":"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"} Oct 11 10:54:42.448616 master-1 kubenswrapper[4771]: I1011 10:54:42.447437 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30700706-219b-47c1-83cd-278584a3f182" path="/var/lib/kubelet/pods/30700706-219b-47c1-83cd-278584a3f182/volumes" Oct 11 10:54:42.623636 master-2 kubenswrapper[4776]: I1011 10:54:42.623504 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:42.623636 master-2 kubenswrapper[4776]: I1011 10:54:42.623572 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:42.636828 master-2 kubenswrapper[4776]: I1011 10:54:42.636786 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:42.638074 master-2 kubenswrapper[4776]: I1011 10:54:42.638026 4776 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:42.742832 master-0 kubenswrapper[4790]: I1011 10:54:42.742768 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:54:42.948823 master-1 kubenswrapper[4771]: I1011 10:54:42.948722 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c99f4877f-vjhdt" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.129.0.120:5353: i/o timeout" Oct 11 10:54:43.215948 master-1 kubenswrapper[4771]: I1011 10:54:43.215872 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerStarted","Data":"b479c48e028ed10f47dcf8ff360fd70182a69875e2d6e7028a9c345aed74bb52"} Oct 11 10:54:43.222136 master-1 kubenswrapper[4771]: I1011 10:54:43.222054 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerStarted","Data":"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"} Oct 11 10:54:43.254679 master-1 kubenswrapper[4771]: I1011 10:54:43.254546 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-1" podStartSLOduration=10.033035491 podStartE2EDuration="21.254502634s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 10:54:29.667669302 +0000 UTC m=+1701.641895743" lastFinishedPulling="2025-10-11 10:54:40.889136425 +0000 UTC m=+1712.863362886" observedRunningTime="2025-10-11 10:54:43.243755253 +0000 UTC m=+1715.217981704" watchObservedRunningTime="2025-10-11 10:54:43.254502634 +0000 UTC m=+1715.228729075" Oct 11 10:54:43.291974 master-1 kubenswrapper[4771]: I1011 
10:54:43.291892 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-0" podStartSLOduration=8.598630598 podStartE2EDuration="21.291868104s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 10:54:28.177681193 +0000 UTC m=+1700.151907654" lastFinishedPulling="2025-10-11 10:54:40.870918689 +0000 UTC m=+1712.845145160" observedRunningTime="2025-10-11 10:54:43.289387592 +0000 UTC m=+1715.263614043" watchObservedRunningTime="2025-10-11 10:54:43.291868104 +0000 UTC m=+1715.266094545" Oct 11 10:54:43.443445 master-2 kubenswrapper[4776]: I1011 10:54:43.443370 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:43.652742 master-0 kubenswrapper[4790]: I1011 10:54:43.652672 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"] Oct 11 10:54:43.653542 master-0 kubenswrapper[4790]: I1011 10:54:43.653477 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="dnsmasq-dns" containerID="cri-o://381e97f0118ecd0a55897d40739e39a7fe13db1d358b9b0a4b193c8f784e5a52" gracePeriod=10 Oct 11 10:54:44.777285 master-0 kubenswrapper[4790]: I1011 10:54:44.777199 4790 generic.go:334] "Generic (PLEG): container finished" podID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerID="381e97f0118ecd0a55897d40739e39a7fe13db1d358b9b0a4b193c8f784e5a52" exitCode=0 Oct 11 10:54:44.777285 master-0 kubenswrapper[4790]: I1011 10:54:44.777260 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" event={"ID":"6a4dc537-c4a3-4538-887f-62fe3919d5f0","Type":"ContainerDied","Data":"381e97f0118ecd0a55897d40739e39a7fe13db1d358b9b0a4b193c8f784e5a52"} Oct 11 10:54:45.001691 master-0 kubenswrapper[4790]: I1011 10:54:45.001645 4790 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" Oct 11 10:54:45.054915 master-0 kubenswrapper[4790]: I1011 10:54:45.053804 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb\") pod \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " Oct 11 10:54:45.054915 master-0 kubenswrapper[4790]: I1011 10:54:45.053915 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config\") pod \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " Oct 11 10:54:45.054915 master-0 kubenswrapper[4790]: I1011 10:54:45.054086 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc\") pod \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " Oct 11 10:54:45.054915 master-0 kubenswrapper[4790]: I1011 10:54:45.054110 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjqms\" (UniqueName: \"kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms\") pod \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " Oct 11 10:54:45.054915 master-0 kubenswrapper[4790]: I1011 10:54:45.054191 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb\") pod \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\" (UID: \"6a4dc537-c4a3-4538-887f-62fe3919d5f0\") " Oct 11 10:54:45.067043 master-0 kubenswrapper[4790]: I1011 
10:54:45.065068 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms" (OuterVolumeSpecName: "kube-api-access-kjqms") pod "6a4dc537-c4a3-4538-887f-62fe3919d5f0" (UID: "6a4dc537-c4a3-4538-887f-62fe3919d5f0"). InnerVolumeSpecName "kube-api-access-kjqms". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:45.098810 master-0 kubenswrapper[4790]: I1011 10:54:45.098143 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a4dc537-c4a3-4538-887f-62fe3919d5f0" (UID: "6a4dc537-c4a3-4538-887f-62fe3919d5f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:45.102812 master-0 kubenswrapper[4790]: I1011 10:54:45.102664 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config" (OuterVolumeSpecName: "config") pod "6a4dc537-c4a3-4538-887f-62fe3919d5f0" (UID: "6a4dc537-c4a3-4538-887f-62fe3919d5f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:45.104162 master-0 kubenswrapper[4790]: I1011 10:54:45.104117 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a4dc537-c4a3-4538-887f-62fe3919d5f0" (UID: "6a4dc537-c4a3-4538-887f-62fe3919d5f0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:45.104693 master-0 kubenswrapper[4790]: I1011 10:54:45.104626 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a4dc537-c4a3-4538-887f-62fe3919d5f0" (UID: "6a4dc537-c4a3-4538-887f-62fe3919d5f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:45.156229 master-0 kubenswrapper[4790]: I1011 10:54:45.156139 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:45.156229 master-0 kubenswrapper[4790]: I1011 10:54:45.156178 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:45.156229 master-0 kubenswrapper[4790]: I1011 10:54:45.156188 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:45.156229 master-0 kubenswrapper[4790]: I1011 10:54:45.156197 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a4dc537-c4a3-4538-887f-62fe3919d5f0-dns-svc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:45.156229 master-0 kubenswrapper[4790]: I1011 10:54:45.156206 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjqms\" (UniqueName: \"kubernetes.io/projected/6a4dc537-c4a3-4538-887f-62fe3919d5f0-kube-api-access-kjqms\") on node \"master-0\" DevicePath \"\"" Oct 11 10:54:45.293966 master-0 kubenswrapper[4790]: I1011 10:54:45.293515 4790 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-dr4lc"] Oct 11 10:54:45.301739 master-0 kubenswrapper[4790]: W1011 10:54:45.301666 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc0250a9_8454_4716_8e79_36166266decb.slice/crio-393b77c3ac74fc56e55351c8f8d1ba6f73966040128fe5cd20fe4f7a6e594eb9 WatchSource:0}: Error finding container 393b77c3ac74fc56e55351c8f8d1ba6f73966040128fe5cd20fe4f7a6e594eb9: Status 404 returned error can't find the container with id 393b77c3ac74fc56e55351c8f8d1ba6f73966040128fe5cd20fe4f7a6e594eb9 Oct 11 10:54:45.789099 master-0 kubenswrapper[4790]: I1011 10:54:45.789025 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-dr4lc" event={"ID":"bc0250a9-8454-4716-8e79-36166266decb","Type":"ContainerStarted","Data":"393b77c3ac74fc56e55351c8f8d1ba6f73966040128fe5cd20fe4f7a6e594eb9"} Oct 11 10:54:45.792339 master-0 kubenswrapper[4790]: I1011 10:54:45.792282 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" event={"ID":"6a4dc537-c4a3-4538-887f-62fe3919d5f0","Type":"ContainerDied","Data":"ff20f3c33adc8038a2f30426436b1b65b746807b9fb9fe4c5e10d86eebcbb3ee"} Oct 11 10:54:45.792339 master-0 kubenswrapper[4790]: I1011 10:54:45.792311 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c99f4877f-dv8jt" Oct 11 10:54:45.792439 master-0 kubenswrapper[4790]: I1011 10:54:45.792354 4790 scope.go:117] "RemoveContainer" containerID="381e97f0118ecd0a55897d40739e39a7fe13db1d358b9b0a4b193c8f784e5a52" Oct 11 10:54:45.794938 master-0 kubenswrapper[4790]: I1011 10:54:45.794892 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerStarted","Data":"373e10b5f3797c10bf2f289215e447ba6ede87d322784f4e67d64c872023c089"} Oct 11 10:54:45.796787 master-0 kubenswrapper[4790]: I1011 10:54:45.796629 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-mh4z2" event={"ID":"7322b229-2c7a-4d99-a73b-f3612dc2670e","Type":"ContainerStarted","Data":"d4fb89f8224e853d11b04ea0aba6fb48ea4aa8a4301efbc8913741f90ccd165d"} Oct 11 10:54:45.796787 master-0 kubenswrapper[4790]: I1011 10:54:45.796666 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-mh4z2" event={"ID":"7322b229-2c7a-4d99-a73b-f3612dc2670e","Type":"ContainerStarted","Data":"4975feadce043fc26c4f727068316dfb421e25490cb098227c3d99cafb766226"} Oct 11 10:54:45.797736 master-0 kubenswrapper[4790]: I1011 10:54:45.797683 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:45.797795 master-0 kubenswrapper[4790]: I1011 10:54:45.797745 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:54:45.804823 master-0 kubenswrapper[4790]: I1011 10:54:45.804787 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerStarted","Data":"dc786133c9de5165782bb53f073d056d7ecf1bdd9443e8b6e694874591741d06"} Oct 11 10:54:45.810562 master-0 
kubenswrapper[4790]: I1011 10:54:45.810508 4790 scope.go:117] "RemoveContainer" containerID="42923cd7993a370d966eb589a3c5dfe41bcbc3a27770fa8b1538dbc31e8e9a97" Oct 11 10:54:46.427283 master-0 kubenswrapper[4790]: I1011 10:54:46.427138 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b597cbbf8-mh4z2" podStartSLOduration=3.166377405 podStartE2EDuration="13.427114031s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="2025-10-11 10:54:34.422409521 +0000 UTC m=+950.976869813" lastFinishedPulling="2025-10-11 10:54:44.683146147 +0000 UTC m=+961.237606439" observedRunningTime="2025-10-11 10:54:45.998631256 +0000 UTC m=+962.553091658" watchObservedRunningTime="2025-10-11 10:54:46.427114031 +0000 UTC m=+962.981574323" Oct 11 10:54:46.431270 master-0 kubenswrapper[4790]: I1011 10:54:46.431214 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"] Oct 11 10:54:46.443977 master-0 kubenswrapper[4790]: I1011 10:54:46.443910 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c99f4877f-dv8jt"] Oct 11 10:54:46.817911 master-0 kubenswrapper[4790]: I1011 10:54:46.817841 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerStarted","Data":"c15b47f90fa6f6a74e4ba2cf37c1852384ef4ec1575e5ae98a981dfe2894d2d7"} Oct 11 10:54:46.828327 master-0 kubenswrapper[4790]: I1011 10:54:46.828252 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-1" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-log" containerID="cri-o://dc786133c9de5165782bb53f073d056d7ecf1bdd9443e8b6e694874591741d06" gracePeriod=30 Oct 11 10:54:46.828653 master-0 kubenswrapper[4790]: I1011 10:54:46.828623 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerStarted","Data":"1a8e071be20d3f943b3788a26f067c75ea4b4b613b2fd9732c1580727514e865"}
Oct 11 10:54:46.828777 master-0 kubenswrapper[4790]: I1011 10:54:46.828721 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-1" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-httpd" containerID="cri-o://1a8e071be20d3f943b3788a26f067c75ea4b4b613b2fd9732c1580727514e865" gracePeriod=30
Oct 11 10:54:46.944737 master-0 kubenswrapper[4790]: I1011 10:54:46.944364 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-2" podStartSLOduration=10.150279284 podStartE2EDuration="21.944330916s" podCreationTimestamp="2025-10-11 10:54:25 +0000 UTC" firstStartedPulling="2025-10-11 10:54:32.94340623 +0000 UTC m=+949.497866532" lastFinishedPulling="2025-10-11 10:54:44.737457872 +0000 UTC m=+961.291918164" observedRunningTime="2025-10-11 10:54:46.935623214 +0000 UTC m=+963.490083526" watchObservedRunningTime="2025-10-11 10:54:46.944330916 +0000 UTC m=+963.498791228"
Oct 11 10:54:47.087796 master-0 kubenswrapper[4790]: I1011 10:54:47.087661 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-1" podStartSLOduration=11.717082542 podStartE2EDuration="25.087632917s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 10:54:31.406971668 +0000 UTC m=+947.961431960" lastFinishedPulling="2025-10-11 10:54:44.777522043 +0000 UTC m=+961.331982335" observedRunningTime="2025-10-11 10:54:47.076698915 +0000 UTC m=+963.631159207" watchObservedRunningTime="2025-10-11 10:54:47.087632917 +0000 UTC m=+963.642093229"
Oct 11 10:54:47.193189 master-1 kubenswrapper[4771]: I1011 10:54:47.193096 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.193189 master-1 kubenswrapper[4771]: I1011 10:54:47.193176 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.223779 master-1 kubenswrapper[4771]: I1011 10:54:47.223655 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.248154 master-1 kubenswrapper[4771]: I1011 10:54:47.248096 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.257148 master-1 kubenswrapper[4771]: I1011 10:54:47.257118 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.257148 master-1 kubenswrapper[4771]: I1011 10:54:47.257154 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:47.836115 master-0 kubenswrapper[4790]: I1011 10:54:47.836047 4790 generic.go:334] "Generic (PLEG): container finished" podID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerID="1a8e071be20d3f943b3788a26f067c75ea4b4b613b2fd9732c1580727514e865" exitCode=0
Oct 11 10:54:47.836115 master-0 kubenswrapper[4790]: I1011 10:54:47.836090 4790 generic.go:334] "Generic (PLEG): container finished" podID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerID="dc786133c9de5165782bb53f073d056d7ecf1bdd9443e8b6e694874591741d06" exitCode=143
Oct 11 10:54:47.837005 master-0 kubenswrapper[4790]: I1011 10:54:47.836966 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerDied","Data":"1a8e071be20d3f943b3788a26f067c75ea4b4b613b2fd9732c1580727514e865"}
Oct 11 10:54:47.837132 master-0 kubenswrapper[4790]: I1011 10:54:47.837118 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerDied","Data":"dc786133c9de5165782bb53f073d056d7ecf1bdd9443e8b6e694874591741d06"}
Oct 11 10:54:48.303868 master-0 kubenswrapper[4790]: I1011 10:54:48.302374 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" path="/var/lib/kubelet/pods/6a4dc537-c4a3-4538-887f-62fe3919d5f0/volumes"
Oct 11 10:54:48.767220 master-1 kubenswrapper[4771]: I1011 10:54:48.767096 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:48.767220 master-1 kubenswrapper[4771]: I1011 10:54:48.767176 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:48.803765 master-1 kubenswrapper[4771]: I1011 10:54:48.803663 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:48.832304 master-1 kubenswrapper[4771]: I1011 10:54:48.832190 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:49.291029 master-1 kubenswrapper[4771]: I1011 10:54:49.290962 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:49.291631 master-1 kubenswrapper[4771]: I1011 10:54:49.291554 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:49.536339 master-1 kubenswrapper[4771]: I1011 10:54:49.536252 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:49.536696 master-1 kubenswrapper[4771]: I1011 10:54:49.536558 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 10:54:49.541658 master-1 kubenswrapper[4771]: I1011 10:54:49.541599 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:54:50.064872 master-0 kubenswrapper[4790]: I1011 10:54:50.064785 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:50.186293 master-0 kubenswrapper[4790]: I1011 10:54:50.186202 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186331 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186385 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186428 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186521 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186545 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvf6\" (UniqueName: \"kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186586 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.186784 master-0 kubenswrapper[4790]: I1011 10:54:50.186782 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\" (UID: \"06ebc6e3-ce04-4aac-bb04-ded9662f65e3\") "
Oct 11 10:54:50.187532 master-0 kubenswrapper[4790]: I1011 10:54:50.187442 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs" (OuterVolumeSpecName: "logs") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:54:50.187652 master-0 kubenswrapper[4790]: I1011 10:54:50.187496 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:54:50.191018 master-0 kubenswrapper[4790]: I1011 10:54:50.190945 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6" (OuterVolumeSpecName: "kube-api-access-2wvf6") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "kube-api-access-2wvf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:54:50.193456 master-0 kubenswrapper[4790]: I1011 10:54:50.193404 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts" (OuterVolumeSpecName: "scripts") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:50.211596 master-0 kubenswrapper[4790]: I1011 10:54:50.211523 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7" (OuterVolumeSpecName: "glance") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "pvc-c7212717-18be-4287-9071-f6f818672815". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 11 10:54:50.214966 master-0 kubenswrapper[4790]: I1011 10:54:50.214932 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:50.224671 master-0 kubenswrapper[4790]: I1011 10:54:50.224565 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data" (OuterVolumeSpecName: "config-data") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:50.241784 master-0 kubenswrapper[4790]: I1011 10:54:50.241683 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06ebc6e3-ce04-4aac-bb04-ded9662f65e3" (UID: "06ebc6e3-ce04-4aac-bb04-ded9662f65e3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:54:50.288268 master-0 kubenswrapper[4790]: I1011 10:54:50.288191 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288268 master-0 kubenswrapper[4790]: I1011 10:54:50.288238 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288268 master-0 kubenswrapper[4790]: I1011 10:54:50.288249 4790 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288268 master-0 kubenswrapper[4790]: I1011 10:54:50.288258 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-logs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288268 master-0 kubenswrapper[4790]: I1011 10:54:50.288268 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvf6\" (UniqueName: \"kubernetes.io/projected/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-kube-api-access-2wvf6\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288686 master-0 kubenswrapper[4790]: I1011 10:54:50.288303 4790 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-httpd-run\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.288686 master-0 kubenswrapper[4790]: I1011 10:54:50.288344 4790 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") on node \"master-0\" "
Oct 11 10:54:50.288686 master-0 kubenswrapper[4790]: I1011 10:54:50.288354 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebc6e3-ce04-4aac-bb04-ded9662f65e3-scripts\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.291304 master-1 kubenswrapper[4771]: I1011 10:54:50.291177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerStarted","Data":"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624"}
Oct 11 10:54:50.292225 master-1 kubenswrapper[4771]: I1011 10:54:50.291416 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-central-agent" containerID="cri-o://5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027" gracePeriod=30
Oct 11 10:54:50.292225 master-1 kubenswrapper[4771]: I1011 10:54:50.291490 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-notification-agent" containerID="cri-o://9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb" gracePeriod=30
Oct 11 10:54:50.292225 master-1 kubenswrapper[4771]: I1011 10:54:50.291503 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="sg-core" containerID="cri-o://b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd" gracePeriod=30
Oct 11 10:54:50.292225 master-1 kubenswrapper[4771]: I1011 10:54:50.291426 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 11 10:54:50.292225 master-1 kubenswrapper[4771]: I1011 10:54:50.291739 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="proxy-httpd" containerID="cri-o://1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624" gracePeriod=30
Oct 11 10:54:50.297191 master-1 kubenswrapper[4771]: I1011 10:54:50.297115 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-cn592" event={"ID":"0c78b078-6372-4692-8a56-d9aee58bffb8","Type":"ContainerStarted","Data":"0a7ad84194d539ff6b8c5be28c4477a9fa419f7f96949c0f702b34e73d791b55"}
Oct 11 10:54:50.297465 master-1 kubenswrapper[4771]: I1011 10:54:50.297387 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-848fcbb4df-cn592"
Oct 11 10:54:50.299794 master-1 kubenswrapper[4771]: I1011 10:54:50.299740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8j29d" event={"ID":"39f6e33d-5313-461d-ac81-59ab693324e8","Type":"ContainerStarted","Data":"b5b0b9229993058400a57de708bc63e8c4ccaf9117579a7fc4d78a9e6a15bb9c"}
Oct 11 10:54:50.299899 master-1 kubenswrapper[4771]: I1011 10:54:50.299791 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8j29d" event={"ID":"39f6e33d-5313-461d-ac81-59ab693324e8","Type":"ContainerStarted","Data":"dbc539ad66fd3494364c09c43de44cc8278a2d6c3cb528609ab332a556e7d475"}
Oct 11 10:54:50.304208 master-0 kubenswrapper[4790]: I1011 10:54:50.304156 4790 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 11 10:54:50.304427 master-0 kubenswrapper[4790]: I1011 10:54:50.304399 4790 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c7212717-18be-4287-9071-f6f818672815" (UniqueName: "kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7") on node "master-0"
Oct 11 10:54:50.389971 master-0 kubenswrapper[4790]: I1011 10:54:50.389879 4790 reconciler_common.go:293] "Volume detached for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") on node \"master-0\" DevicePath \"\""
Oct 11 10:54:50.499534 master-1 kubenswrapper[4771]: I1011 10:54:50.499420 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.620208204 podStartE2EDuration="28.499392856s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 10:54:23.281867057 +0000 UTC m=+1695.256093498" lastFinishedPulling="2025-10-11 10:54:49.161051709 +0000 UTC m=+1721.135278150" observedRunningTime="2025-10-11 10:54:50.488203453 +0000 UTC m=+1722.462429934" watchObservedRunningTime="2025-10-11 10:54:50.499392856 +0000 UTC m=+1722.473619337"
Oct 11 10:54:50.522080 master-1 kubenswrapper[4771]: I1011 10:54:50.521941 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b597cbbf8-8j29d" podStartSLOduration=9.841981992000001 podStartE2EDuration="17.521900337s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="2025-10-11 10:54:41.449069657 +0000 UTC m=+1713.423296098" lastFinishedPulling="2025-10-11 10:54:49.128988002 +0000 UTC m=+1721.103214443" observedRunningTime="2025-10-11 10:54:50.517052737 +0000 UTC m=+1722.491279198" watchObservedRunningTime="2025-10-11 10:54:50.521900337 +0000 UTC m=+1722.496126788"
Oct 11 10:54:50.630333 master-1 kubenswrapper[4771]: I1011 10:54:50.630181 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-848fcbb4df-cn592" podStartSLOduration=6.842980011 podStartE2EDuration="14.630153325s" podCreationTimestamp="2025-10-11 10:54:36 +0000 UTC" firstStartedPulling="2025-10-11 10:54:41.345029431 +0000 UTC m=+1713.319255872" lastFinishedPulling="2025-10-11 10:54:49.132202745 +0000 UTC m=+1721.106429186" observedRunningTime="2025-10-11 10:54:50.622586056 +0000 UTC m=+1722.596812517" watchObservedRunningTime="2025-10-11 10:54:50.630153325 +0000 UTC m=+1722.604379806"
Oct 11 10:54:50.868000 master-0 kubenswrapper[4790]: I1011 10:54:50.867909 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"06ebc6e3-ce04-4aac-bb04-ded9662f65e3","Type":"ContainerDied","Data":"ba85d8283f47fbc71a069e2badeb37c9ec3baa560e78e0e496d6945e23ef982f"}
Oct 11 10:54:50.868000 master-0 kubenswrapper[4790]: I1011 10:54:50.868000 4790 scope.go:117] "RemoveContainer" containerID="1a8e071be20d3f943b3788a26f067c75ea4b4b613b2fd9732c1580727514e865"
Oct 11 10:54:50.868267 master-0 kubenswrapper[4790]: I1011 10:54:50.868071 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:50.888773 master-0 kubenswrapper[4790]: I1011 10:54:50.888680 4790 scope.go:117] "RemoveContainer" containerID="dc786133c9de5165782bb53f073d056d7ecf1bdd9443e8b6e694874591741d06"
Oct 11 10:54:50.951606 master-0 kubenswrapper[4790]: I1011 10:54:50.951517 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:54:51.007904 master-0 kubenswrapper[4790]: I1011 10:54:51.007693 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:54:51.092151 master-0 kubenswrapper[4790]: I1011 10:54:51.092079 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:54:51.092944 master-0 kubenswrapper[4790]: E1011 10:54:51.092923 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="init"
Oct 11 10:54:51.092993 master-0 kubenswrapper[4790]: I1011 10:54:51.092946 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="init"
Oct 11 10:54:51.092993 master-0 kubenswrapper[4790]: E1011 10:54:51.092960 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="dnsmasq-dns"
Oct 11 10:54:51.092993 master-0 kubenswrapper[4790]: I1011 10:54:51.092970 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="dnsmasq-dns"
Oct 11 10:54:51.093064 master-0 kubenswrapper[4790]: E1011 10:54:51.093007 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-log"
Oct 11 10:54:51.093064 master-0 kubenswrapper[4790]: I1011 10:54:51.093018 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-log"
Oct 11 10:54:51.093064 master-0 kubenswrapper[4790]: E1011 10:54:51.093048 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-httpd"
Oct 11 10:54:51.093064 master-0 kubenswrapper[4790]: I1011 10:54:51.093057 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-httpd"
Oct 11 10:54:51.093623 master-0 kubenswrapper[4790]: I1011 10:54:51.093567 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-httpd"
Oct 11 10:54:51.093668 master-0 kubenswrapper[4790]: I1011 10:54:51.093626 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" containerName="glance-log"
Oct 11 10:54:51.093877 master-0 kubenswrapper[4790]: I1011 10:54:51.093842 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4dc537-c4a3-4538-887f-62fe3919d5f0" containerName="dnsmasq-dns"
Oct 11 10:54:51.097695 master-0 kubenswrapper[4790]: I1011 10:54:51.097666 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.102141 master-0 kubenswrapper[4790]: I1011 10:54:51.102090 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 11 10:54:51.103032 master-0 kubenswrapper[4790]: I1011 10:54:51.102919 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data"
Oct 11 10:54:51.139547 master-0 kubenswrapper[4790]: I1011 10:54:51.139470 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:54:51.312337 master-1 kubenswrapper[4771]: I1011 10:54:51.312256 4771 generic.go:334] "Generic (PLEG): container finished" podID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerID="1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624" exitCode=0
Oct 11 10:54:51.312337 master-1 kubenswrapper[4771]: I1011 10:54:51.312306 4771 generic.go:334] "Generic (PLEG): container finished" podID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerID="b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd" exitCode=2
Oct 11 10:54:51.312337 master-1 kubenswrapper[4771]: I1011 10:54:51.312313 4771 generic.go:334] "Generic (PLEG): container finished" podID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerID="5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027" exitCode=0
Oct 11 10:54:51.313342 master-1 kubenswrapper[4771]: I1011 10:54:51.313304 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerDied","Data":"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624"}
Oct 11 10:54:51.313409 master-1 kubenswrapper[4771]: I1011 10:54:51.313366 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerDied","Data":"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd"}
Oct 11 10:54:51.313409 master-1 kubenswrapper[4771]: I1011 10:54:51.313379 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerDied","Data":"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027"}
Oct 11 10:54:51.313886 master-1 kubenswrapper[4771]: I1011 10:54:51.313855 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:51.313931 master-1 kubenswrapper[4771]: I1011 10:54:51.313891 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:54:51.482334 master-1 kubenswrapper[4771]: I1011 10:54:51.482253 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:51.482668 master-1 kubenswrapper[4771]: I1011 10:54:51.482423 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 10:54:51.487376 master-1 kubenswrapper[4771]: I1011 10:54:51.486240 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:54:51.711915 master-0 kubenswrapper[4790]: I1011 10:54:51.711812 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.711915 master-0 kubenswrapper[4790]: I1011 10:54:51.711910 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712436 master-0 kubenswrapper[4790]: I1011 10:54:51.711940 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712436 master-0 kubenswrapper[4790]: I1011 10:54:51.712101 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712436 master-0 kubenswrapper[4790]: I1011 10:54:51.712258 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712436 master-0 kubenswrapper[4790]: I1011 10:54:51.712369 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbtf\" (UniqueName: \"kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712436 master-0 kubenswrapper[4790]: I1011 10:54:51.712394 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.712728 master-0 kubenswrapper[4790]: I1011 10:54:51.712501 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.815815 master-0 kubenswrapper[4790]: I1011 10:54:51.815512 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.815815 master-0 kubenswrapper[4790]: I1011 10:54:51.815644 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.815815 master-0 kubenswrapper[4790]: I1011 10:54:51.815700 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.815815 master-0 kubenswrapper[4790]: I1011 10:54:51.815816 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816264 master-0 kubenswrapper[4790]: I1011 10:54:51.815873 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816264 master-0 kubenswrapper[4790]: I1011 10:54:51.815937 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpbtf\" (UniqueName: \"kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816264 master-0 kubenswrapper[4790]: I1011 10:54:51.815970 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816264 master-0 kubenswrapper[4790]: I1011 10:54:51.816022 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816647 master-0 kubenswrapper[4790]: I1011 10:54:51.816592 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.816826 master-0 kubenswrapper[4790]: I1011 10:54:51.816768 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.818346 master-0 kubenswrapper[4790]: I1011 10:54:51.818312 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:54:51.818417 master-0 kubenswrapper[4790]: I1011 10:54:51.818355 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/95ab3ea1c73b905e55aa0f0a1e574a5056ec96dde23978388ab58fbe89465472/globalmount\"" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.820865 master-0 kubenswrapper[4790]: I1011 10:54:51.820836 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.822660 master-0 kubenswrapper[4790]: I1011 10:54:51.822378 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.822660 master-0 kubenswrapper[4790]: I1011 10:54:51.822622 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:54:51.826630 master-0 kubenswrapper[4790]: I1011 10:54:51.826587 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:51.881221 master-0 kubenswrapper[4790]: I1011 10:54:51.881118 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-dr4lc" event={"ID":"bc0250a9-8454-4716-8e79-36166266decb","Type":"ContainerStarted","Data":"c0ce4103aad4e111bf4e5c18f994dcb6ae345b0b57188ee1b0fbafd512d4a6ee"} Oct 11 10:54:51.881508 master-0 kubenswrapper[4790]: I1011 10:54:51.881332 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-848fcbb4df-dr4lc" Oct 11 10:54:51.890736 master-0 kubenswrapper[4790]: I1011 10:54:51.888512 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpbtf\" (UniqueName: \"kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:51.923599 master-0 kubenswrapper[4790]: I1011 10:54:51.923176 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-848fcbb4df-dr4lc" podStartSLOduration=10.492357931 podStartE2EDuration="15.923150355s" podCreationTimestamp="2025-10-11 10:54:36 +0000 UTC" firstStartedPulling="2025-10-11 10:54:45.304455717 +0000 UTC m=+961.858916009" lastFinishedPulling="2025-10-11 10:54:50.735248131 +0000 UTC m=+967.289708433" observedRunningTime="2025-10-11 10:54:51.91396203 +0000 UTC m=+968.468422332" watchObservedRunningTime="2025-10-11 10:54:51.923150355 +0000 UTC m=+968.477610657" Oct 11 10:54:52.303611 master-0 kubenswrapper[4790]: I1011 10:54:52.303498 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ebc6e3-ce04-4aac-bb04-ded9662f65e3" 
path="/var/lib/kubelet/pods/06ebc6e3-ce04-4aac-bb04-ded9662f65e3/volumes" Oct 11 10:54:52.334956 master-0 kubenswrapper[4790]: I1011 10:54:52.334865 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:52.335204 master-0 kubenswrapper[4790]: I1011 10:54:52.334975 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:52.371878 master-0 kubenswrapper[4790]: I1011 10:54:52.371816 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:52.391453 master-0 kubenswrapper[4790]: I1011 10:54:52.391363 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:52.713759 master-2 kubenswrapper[4776]: I1011 10:54:52.713628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerStarted","Data":"5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171"} Oct 11 10:54:52.734037 master-2 kubenswrapper[4776]: I1011 10:54:52.733945 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-nz82h" podStartSLOduration=2.308301062 podStartE2EDuration="19.733927674s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="2025-10-11 10:54:34.708136657 +0000 UTC m=+1709.492563366" lastFinishedPulling="2025-10-11 10:54:52.133763269 +0000 UTC m=+1726.918189978" observedRunningTime="2025-10-11 10:54:52.731565211 +0000 UTC m=+1727.515991930" watchObservedRunningTime="2025-10-11 10:54:52.733927674 +0000 UTC m=+1727.518354383" Oct 11 10:54:52.895419 master-0 kubenswrapper[4790]: I1011 10:54:52.895325 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:52.895419 master-0 kubenswrapper[4790]: I1011 10:54:52.895413 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:53.257313 master-1 kubenswrapper[4771]: I1011 10:54:53.257213 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:53.299791 master-0 kubenswrapper[4790]: I1011 10:54:53.295595 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:53.333296 master-1 kubenswrapper[4771]: I1011 10:54:53.333196 4771 generic.go:334] "Generic (PLEG): container finished" podID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerID="9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb" exitCode=0 Oct 11 10:54:53.333296 master-1 kubenswrapper[4771]: I1011 10:54:53.333304 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:53.333765 master-1 kubenswrapper[4771]: I1011 10:54:53.333389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerDied","Data":"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb"} Oct 11 10:54:53.333765 master-1 kubenswrapper[4771]: I1011 10:54:53.333554 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6508f17e-afc7-44dd-89b4-2efa8a124b12","Type":"ContainerDied","Data":"7c53a7f6f217d02a6155b61317bdcecef01a919dcbe718b5d7a4a4096ceec2ae"} Oct 11 10:54:53.333765 master-1 kubenswrapper[4771]: I1011 10:54:53.333593 4771 scope.go:117] "RemoveContainer" containerID="1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624" Oct 11 10:54:53.374276 master-1 kubenswrapper[4771]: I1011 10:54:53.371717 4771 scope.go:117] "RemoveContainer" containerID="b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd" Oct 11 10:54:53.412389 master-1 kubenswrapper[4771]: I1011 10:54:53.412277 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.412767 master-1 kubenswrapper[4771]: I1011 10:54:53.412424 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.412767 master-1 kubenswrapper[4771]: I1011 10:54:53.412501 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.412767 master-1 kubenswrapper[4771]: I1011 10:54:53.412660 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.412921 master-1 kubenswrapper[4771]: I1011 10:54:53.412863 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:53.413124 master-1 kubenswrapper[4771]: I1011 10:54:53.413085 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:53.413466 master-1 kubenswrapper[4771]: I1011 10:54:53.413417 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.413718 master-1 kubenswrapper[4771]: I1011 10:54:53.413615 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76sj2\" (UniqueName: \"kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.413939 master-1 kubenswrapper[4771]: I1011 10:54:53.413902 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts\") pod \"6508f17e-afc7-44dd-89b4-2efa8a124b12\" (UID: \"6508f17e-afc7-44dd-89b4-2efa8a124b12\") " Oct 11 10:54:53.414912 master-1 kubenswrapper[4771]: I1011 10:54:53.414876 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.414912 master-1 kubenswrapper[4771]: I1011 10:54:53.414911 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6508f17e-afc7-44dd-89b4-2efa8a124b12-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.416748 master-1 kubenswrapper[4771]: I1011 10:54:53.416715 4771 scope.go:117] "RemoveContainer" containerID="9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb" Oct 11 10:54:53.416996 master-1 kubenswrapper[4771]: I1011 10:54:53.416925 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2" (OuterVolumeSpecName: "kube-api-access-76sj2") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "kube-api-access-76sj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:53.417880 master-1 kubenswrapper[4771]: I1011 10:54:53.417817 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts" (OuterVolumeSpecName: "scripts") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:53.436974 master-1 kubenswrapper[4771]: I1011 10:54:53.436817 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:53.488086 master-1 kubenswrapper[4771]: I1011 10:54:53.488000 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:53.496192 master-1 kubenswrapper[4771]: I1011 10:54:53.496135 4771 scope.go:117] "RemoveContainer" containerID="5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027" Oct 11 10:54:53.513166 master-1 kubenswrapper[4771]: I1011 10:54:53.513121 4771 scope.go:117] "RemoveContainer" containerID="1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624" Oct 11 10:54:53.513829 master-1 kubenswrapper[4771]: E1011 10:54:53.513779 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624\": container with ID starting with 1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624 not found: ID does not exist" containerID="1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624" Oct 11 10:54:53.513885 master-1 kubenswrapper[4771]: I1011 10:54:53.513845 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624"} err="failed to get container status \"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624\": rpc error: code = NotFound desc = could not find container \"1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624\": container with ID starting with 1959dfa42cdae287607dba414d321cf7399ac93af3a551ad5d6c73607087a624 not found: ID does not exist" Oct 11 10:54:53.513933 master-1 kubenswrapper[4771]: I1011 10:54:53.513902 4771 scope.go:117] "RemoveContainer" containerID="b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd" Oct 11 10:54:53.514472 master-1 kubenswrapper[4771]: E1011 10:54:53.514441 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd\": container with ID 
starting with b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd not found: ID does not exist" containerID="b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd" Oct 11 10:54:53.514533 master-1 kubenswrapper[4771]: I1011 10:54:53.514467 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd"} err="failed to get container status \"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd\": rpc error: code = NotFound desc = could not find container \"b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd\": container with ID starting with b88255ee4532787c4c00c2711a51a51676d239e5c1a4bb061ac2e5c547c75ccd not found: ID does not exist" Oct 11 10:54:53.514533 master-1 kubenswrapper[4771]: I1011 10:54:53.514485 4771 scope.go:117] "RemoveContainer" containerID="9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb" Oct 11 10:54:53.514794 master-1 kubenswrapper[4771]: E1011 10:54:53.514767 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb\": container with ID starting with 9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb not found: ID does not exist" containerID="9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb" Oct 11 10:54:53.514851 master-1 kubenswrapper[4771]: I1011 10:54:53.514792 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb"} err="failed to get container status \"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb\": rpc error: code = NotFound desc = could not find container \"9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb\": container with ID starting with 
9ae9b7cbf86ceb270193377537f9fbc6af1ee710bbbb22c0a85e001eae30ccdb not found: ID does not exist" Oct 11 10:54:53.514851 master-1 kubenswrapper[4771]: I1011 10:54:53.514808 4771 scope.go:117] "RemoveContainer" containerID="5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027" Oct 11 10:54:53.515103 master-1 kubenswrapper[4771]: E1011 10:54:53.515075 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027\": container with ID starting with 5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027 not found: ID does not exist" containerID="5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027" Oct 11 10:54:53.515157 master-1 kubenswrapper[4771]: I1011 10:54:53.515097 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027"} err="failed to get container status \"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027\": rpc error: code = NotFound desc = could not find container \"5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027\": container with ID starting with 5b743e4068a6cb14bb814fdd41d8cf6bd84e84d0f5ad68d1776ee95613452027 not found: ID does not exist" Oct 11 10:54:53.516807 master-1 kubenswrapper[4771]: I1011 10:54:53.516753 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.516807 master-1 kubenswrapper[4771]: I1011 10:54:53.516793 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76sj2\" (UniqueName: \"kubernetes.io/projected/6508f17e-afc7-44dd-89b4-2efa8a124b12-kube-api-access-76sj2\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.516807 master-1 
kubenswrapper[4771]: I1011 10:54:53.516807 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.516931 master-1 kubenswrapper[4771]: I1011 10:54:53.516823 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.517698 master-0 kubenswrapper[4790]: I1011 10:54:53.517583 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:54:53.532712 master-1 kubenswrapper[4771]: I1011 10:54:53.532630 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data" (OuterVolumeSpecName: "config-data") pod "6508f17e-afc7-44dd-89b4-2efa8a124b12" (UID: "6508f17e-afc7-44dd-89b4-2efa8a124b12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:53.618902 master-1 kubenswrapper[4771]: I1011 10:54:53.618813 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6508f17e-afc7-44dd-89b4-2efa8a124b12-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:53.676919 master-1 kubenswrapper[4771]: I1011 10:54:53.676809 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:53.682526 master-1 kubenswrapper[4771]: I1011 10:54:53.682465 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:53.722022 master-1 kubenswrapper[4771]: I1011 10:54:53.721799 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:53.722797 master-1 kubenswrapper[4771]: E1011 10:54:53.722777 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="sg-core" Oct 11 10:54:53.722879 master-1 kubenswrapper[4771]: I1011 10:54:53.722868 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="sg-core" Oct 11 10:54:53.722957 master-1 kubenswrapper[4771]: E1011 10:54:53.722947 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="proxy-httpd" Oct 11 10:54:53.723010 master-1 kubenswrapper[4771]: I1011 10:54:53.723001 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="proxy-httpd" Oct 11 10:54:53.724207 master-1 kubenswrapper[4771]: E1011 10:54:53.724193 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-notification-agent" Oct 11 10:54:53.724295 master-1 kubenswrapper[4771]: I1011 10:54:53.724285 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-notification-agent" Oct 11 10:54:53.724399 master-1 kubenswrapper[4771]: E1011 10:54:53.724385 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="dnsmasq-dns" Oct 11 10:54:53.724465 master-1 kubenswrapper[4771]: I1011 10:54:53.724455 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="dnsmasq-dns" Oct 11 10:54:53.724539 master-1 kubenswrapper[4771]: E1011 10:54:53.724529 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-central-agent" Oct 11 10:54:53.724594 master-1 kubenswrapper[4771]: I1011 10:54:53.724585 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-central-agent" Oct 11 10:54:53.724664 master-1 kubenswrapper[4771]: E1011 10:54:53.724655 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="init" Oct 11 10:54:53.724718 master-1 kubenswrapper[4771]: I1011 10:54:53.724709 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="init" Oct 11 10:54:53.724954 master-1 kubenswrapper[4771]: I1011 10:54:53.724939 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="ceilometer-notification-agent" Oct 11 10:54:53.725029 master-1 kubenswrapper[4771]: I1011 10:54:53.725018 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="proxy-httpd" Oct 11 10:54:53.725095 master-1 kubenswrapper[4771]: I1011 10:54:53.725085 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" 
containerName="ceilometer-central-agent" Oct 11 10:54:53.725175 master-1 kubenswrapper[4771]: I1011 10:54:53.725163 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="30700706-219b-47c1-83cd-278584a3f182" containerName="dnsmasq-dns" Oct 11 10:54:53.725242 master-1 kubenswrapper[4771]: I1011 10:54:53.725232 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" containerName="sg-core" Oct 11 10:54:53.727638 master-1 kubenswrapper[4771]: I1011 10:54:53.727607 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:53.731375 master-1 kubenswrapper[4771]: I1011 10:54:53.731306 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:54:53.731530 master-1 kubenswrapper[4771]: I1011 10:54:53.731474 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:54:53.731865 master-2 kubenswrapper[4776]: I1011 10:54:53.731788 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerStarted","Data":"076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77"} Oct 11 10:54:53.743769 master-1 kubenswrapper[4771]: I1011 10:54:53.741495 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:53.760513 master-2 kubenswrapper[4776]: I1011 10:54:53.760339 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-db-sync-4sh7r" podStartSLOduration=2.511494488 podStartE2EDuration="19.760325294s" podCreationTimestamp="2025-10-11 10:54:34 +0000 UTC" firstStartedPulling="2025-10-11 10:54:35.040108917 +0000 UTC m=+1709.824535626" lastFinishedPulling="2025-10-11 10:54:52.288939723 +0000 UTC m=+1727.073366432" observedRunningTime="2025-10-11 
10:54:53.759988335 +0000 UTC m=+1728.544415044" watchObservedRunningTime="2025-10-11 10:54:53.760325294 +0000 UTC m=+1728.544752003" Oct 11 10:54:53.823771 master-1 kubenswrapper[4771]: I1011 10:54:53.823688 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.823771 master-1 kubenswrapper[4771]: I1011 10:54:53.823790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.824161 master-1 kubenswrapper[4771]: I1011 10:54:53.823982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.824199 master-1 kubenswrapper[4771]: I1011 10:54:53.824160 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj45w\" (UniqueName: \"kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.824245 master-1 kubenswrapper[4771]: I1011 10:54:53.824208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts\") pod \"ceilometer-0\" (UID: 
\"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.824600 master-1 kubenswrapper[4771]: I1011 10:54:53.824324 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.824600 master-1 kubenswrapper[4771]: I1011 10:54:53.824441 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927066 master-1 kubenswrapper[4771]: I1011 10:54:53.926973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: I1011 10:54:53.927098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: I1011 10:54:53.927146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj45w\" (UniqueName: \"kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: 
I1011 10:54:53.927171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: I1011 10:54:53.927200 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: I1011 10:54:53.927231 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927398 master-1 kubenswrapper[4771]: I1011 10:54:53.927283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927845 master-1 kubenswrapper[4771]: I1011 10:54:53.927775 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.927959 master-1 kubenswrapper[4771]: I1011 10:54:53.927906 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.931554 master-1 kubenswrapper[4771]: I1011 10:54:53.931508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.932696 master-1 kubenswrapper[4771]: I1011 10:54:53.932627 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.933445 master-1 kubenswrapper[4771]: I1011 10:54:53.933388 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.935743 master-1 kubenswrapper[4771]: I1011 10:54:53.935705 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:53.946812 master-1 kubenswrapper[4771]: I1011 10:54:53.946745 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj45w\" (UniqueName: \"kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w\") pod \"ceilometer-0\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " pod="openstack/ceilometer-0" Oct 11 10:54:54.057314 
master-1 kubenswrapper[4771]: I1011 10:54:54.057217 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:54:54.063415 master-0 kubenswrapper[4790]: I1011 10:54:54.063334 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:54:54.453803 master-1 kubenswrapper[4771]: I1011 10:54:54.453591 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6508f17e-afc7-44dd-89b4-2efa8a124b12" path="/var/lib/kubelet/pods/6508f17e-afc7-44dd-89b4-2efa8a124b12/volumes" Oct 11 10:54:54.591601 master-1 kubenswrapper[4771]: I1011 10:54:54.591439 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:54:54.600186 master-1 kubenswrapper[4771]: W1011 10:54:54.600102 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ec9c51_e085_4cfa_8ce7_387a02f23731.slice/crio-ada9da27885a5797d5d7044aa85caf6f5ca08e4448276ab5e54d26349906f001 WatchSource:0}: Error finding container ada9da27885a5797d5d7044aa85caf6f5ca08e4448276ab5e54d26349906f001: Status 404 returned error can't find the container with id ada9da27885a5797d5d7044aa85caf6f5ca08e4448276ab5e54d26349906f001 Oct 11 10:54:54.926637 master-0 kubenswrapper[4790]: I1011 10:54:54.926548 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerStarted","Data":"1980b844ffaf3f4ae914e07f575c0c012ae970754f7fcb5bc4e59940912228a6"} Oct 11 10:54:54.926637 master-0 kubenswrapper[4790]: I1011 10:54:54.926612 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerStarted","Data":"f242a619a098bee9251349acb03ad40745b6b14dcdda08d9b62f04ce2b3b042e"} Oct 11 
10:54:55.005844 master-0 kubenswrapper[4790]: I1011 10:54:55.005282 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:55.006087 master-0 kubenswrapper[4790]: I1011 10:54:55.005952 4790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:55.083968 master-0 kubenswrapper[4790]: I1011 10:54:55.083914 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:54:55.161268 master-1 kubenswrapper[4771]: I1011 10:54:55.160182 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:55.162885 master-1 kubenswrapper[4771]: I1011 10:54:55.161459 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-1" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-log" containerID="cri-o://2d4fd9e07f37d7d0e4c5b7147d47642c209dd291fd8ea33730298efb1acb5aa4" gracePeriod=30 Oct 11 10:54:55.162885 master-1 kubenswrapper[4771]: I1011 10:54:55.161665 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-1" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-httpd" containerID="cri-o://b479c48e028ed10f47dcf8ff360fd70182a69875e2d6e7028a9c345aed74bb52" gracePeriod=30 Oct 11 10:54:55.358682 master-1 kubenswrapper[4771]: I1011 10:54:55.358497 4771 generic.go:334] "Generic (PLEG): container finished" podID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerID="2d4fd9e07f37d7d0e4c5b7147d47642c209dd291fd8ea33730298efb1acb5aa4" exitCode=143 Oct 11 10:54:55.358682 master-1 kubenswrapper[4771]: I1011 10:54:55.358544 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" 
event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerDied","Data":"2d4fd9e07f37d7d0e4c5b7147d47642c209dd291fd8ea33730298efb1acb5aa4"} Oct 11 10:54:55.362172 master-1 kubenswrapper[4771]: I1011 10:54:55.362122 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerStarted","Data":"1c3861fe4b88a03a7f5a37466aa7554573bbeb56e12a609a086b0d7cf9119e59"} Oct 11 10:54:55.362172 master-1 kubenswrapper[4771]: I1011 10:54:55.362173 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerStarted","Data":"ada9da27885a5797d5d7044aa85caf6f5ca08e4448276ab5e54d26349906f001"} Oct 11 10:54:55.943620 master-0 kubenswrapper[4790]: I1011 10:54:55.943549 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerStarted","Data":"b351fbb5c1ba1d71d14296c2d406aca09cc6016d89d1300e0ba6b0d00727939f"} Oct 11 10:54:56.377097 master-1 kubenswrapper[4771]: I1011 10:54:56.377017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerStarted","Data":"0581a657f8cf01879d33e71f4db4cc2df261f4f45ead619016173a151ac38bcc"} Oct 11 10:54:57.413263 master-1 kubenswrapper[4771]: I1011 10:54:57.412836 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerStarted","Data":"e1413b97fa6430a651152f91df8a84c4a32dbe4f3d81aabb8fb9fea0809e7a16"} Oct 11 10:54:58.426095 master-1 kubenswrapper[4771]: I1011 10:54:58.426026 4771 generic.go:334] "Generic (PLEG): container finished" podID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerID="b479c48e028ed10f47dcf8ff360fd70182a69875e2d6e7028a9c345aed74bb52" exitCode=0 Oct 
11 10:54:58.426970 master-1 kubenswrapper[4771]: I1011 10:54:58.426287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerDied","Data":"b479c48e028ed10f47dcf8ff360fd70182a69875e2d6e7028a9c345aed74bb52"} Oct 11 10:54:58.431806 master-1 kubenswrapper[4771]: I1011 10:54:58.431736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerStarted","Data":"3b0917f1e0c562a330c2d467c28336993eadaec7d16c4d43d62cfb2b0ba25b4b"} Oct 11 10:54:58.476116 master-1 kubenswrapper[4771]: I1011 10:54:58.475757 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.276729548 podStartE2EDuration="5.475623794s" podCreationTimestamp="2025-10-11 10:54:53 +0000 UTC" firstStartedPulling="2025-10-11 10:54:54.60689416 +0000 UTC m=+1726.581120641" lastFinishedPulling="2025-10-11 10:54:57.805788446 +0000 UTC m=+1729.780014887" observedRunningTime="2025-10-11 10:54:58.47343399 +0000 UTC m=+1730.447660481" watchObservedRunningTime="2025-10-11 10:54:58.475623794 +0000 UTC m=+1730.449850255" Oct 11 10:54:58.769762 master-2 kubenswrapper[4776]: I1011 10:54:58.769665 4776 generic.go:334] "Generic (PLEG): container finished" podID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerID="3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7" exitCode=0 Oct 11 10:54:58.770483 master-2 kubenswrapper[4776]: I1011 10:54:58.769786 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7"} Oct 11 10:54:58.988390 master-1 kubenswrapper[4771]: I1011 10:54:58.988303 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.161844 master-1 kubenswrapper[4771]: I1011 10:54:59.161204 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.161844 master-1 kubenswrapper[4771]: I1011 10:54:59.161617 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.161844 master-1 kubenswrapper[4771]: I1011 10:54:59.161659 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.161844 master-1 kubenswrapper[4771]: I1011 10:54:59.161730 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.161844 master-1 kubenswrapper[4771]: I1011 10:54:59.161816 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hg2\" (UniqueName: \"kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.162494 master-1 kubenswrapper[4771]: I1011 10:54:59.161891 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.162494 master-1 kubenswrapper[4771]: I1011 10:54:59.161914 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.162494 master-1 kubenswrapper[4771]: I1011 10:54:59.161950 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts\") pod \"499f9e94-a738-484d-ae4b-0cc221750d1c\" (UID: \"499f9e94-a738-484d-ae4b-0cc221750d1c\") " Oct 11 10:54:59.162494 master-1 kubenswrapper[4771]: I1011 10:54:59.161983 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:59.162494 master-1 kubenswrapper[4771]: I1011 10:54:59.162405 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-httpd-run\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.163056 master-1 kubenswrapper[4771]: I1011 10:54:59.162991 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs" (OuterVolumeSpecName: "logs") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:59.166949 master-1 kubenswrapper[4771]: I1011 10:54:59.166861 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts" (OuterVolumeSpecName: "scripts") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:59.167182 master-1 kubenswrapper[4771]: I1011 10:54:59.167103 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2" (OuterVolumeSpecName: "kube-api-access-k5hg2") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "kube-api-access-k5hg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:59.182593 master-1 kubenswrapper[4771]: I1011 10:54:59.182530 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:59.185187 master-1 kubenswrapper[4771]: I1011 10:54:59.185114 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37" (OuterVolumeSpecName: "glance") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:54:59.211554 master-1 kubenswrapper[4771]: I1011 10:54:59.211431 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data" (OuterVolumeSpecName: "config-data") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:59.225718 master-1 kubenswrapper[4771]: I1011 10:54:59.225616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "499f9e94-a738-484d-ae4b-0cc221750d1c" (UID: "499f9e94-a738-484d-ae4b-0cc221750d1c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263252 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") on node \"master-1\" " Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263302 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/499f9e94-a738-484d-ae4b-0cc221750d1c-logs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263317 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hg2\" (UniqueName: \"kubernetes.io/projected/499f9e94-a738-484d-ae4b-0cc221750d1c-kube-api-access-k5hg2\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263330 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-internal-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263339 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263348 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.263331 master-1 kubenswrapper[4771]: I1011 10:54:59.263377 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/499f9e94-a738-484d-ae4b-0cc221750d1c-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.281511 master-1 kubenswrapper[4771]: I1011 10:54:59.281438 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 11 10:54:59.282298 master-1 kubenswrapper[4771]: I1011 10:54:59.281660 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76" (UniqueName: "kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37") on node "master-1" Oct 11 10:54:59.365317 master-1 kubenswrapper[4771]: I1011 10:54:59.365209 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") on node \"master-1\" DevicePath \"\"" Oct 11 10:54:59.443193 master-1 kubenswrapper[4771]: I1011 10:54:59.442956 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.445210 master-1 kubenswrapper[4771]: I1011 10:54:59.444436 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"499f9e94-a738-484d-ae4b-0cc221750d1c","Type":"ContainerDied","Data":"ee8592262e70401d099ff1b266023cdb236d7c7195e76597576b3cf0944d23f5"} Oct 11 10:54:59.445210 master-1 kubenswrapper[4771]: I1011 10:54:59.444539 4771 scope.go:117] "RemoveContainer" containerID="b479c48e028ed10f47dcf8ff360fd70182a69875e2d6e7028a9c345aed74bb52" Oct 11 10:54:59.445210 master-1 kubenswrapper[4771]: I1011 10:54:59.444708 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 10:54:59.483127 master-1 kubenswrapper[4771]: I1011 10:54:59.482607 4771 scope.go:117] "RemoveContainer" containerID="2d4fd9e07f37d7d0e4c5b7147d47642c209dd291fd8ea33730298efb1acb5aa4" Oct 11 10:54:59.507882 master-1 kubenswrapper[4771]: I1011 10:54:59.507814 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:59.513764 master-1 kubenswrapper[4771]: I1011 10:54:59.513723 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:59.551549 master-1 kubenswrapper[4771]: I1011 10:54:59.551429 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:59.552137 master-1 kubenswrapper[4771]: E1011 10:54:59.552112 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-log" Oct 11 10:54:59.552243 master-1 kubenswrapper[4771]: I1011 10:54:59.552229 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-log" Oct 11 10:54:59.552402 master-1 kubenswrapper[4771]: E1011 10:54:59.552387 4771 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-httpd" Oct 11 10:54:59.552516 master-1 kubenswrapper[4771]: I1011 10:54:59.552498 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-httpd" Oct 11 10:54:59.552836 master-1 kubenswrapper[4771]: I1011 10:54:59.552810 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-httpd" Oct 11 10:54:59.552959 master-1 kubenswrapper[4771]: I1011 10:54:59.552944 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-log" Oct 11 10:54:59.554309 master-1 kubenswrapper[4771]: I1011 10:54:59.554289 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.557676 master-1 kubenswrapper[4771]: I1011 10:54:59.557610 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:54:59.558702 master-1 kubenswrapper[4771]: I1011 10:54:59.558012 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:54:59.577934 master-1 kubenswrapper[4771]: I1011 10:54:59.577855 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:54:59.679892 master-1 kubenswrapper[4771]: I1011 10:54:59.679821 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 
10:54:59.679892 master-1 kubenswrapper[4771]: I1011 10:54:59.679898 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.679927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g2z9\" (UniqueName: \"kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.679953 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.679980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.680000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle\") pod 
\"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.680049 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.680258 master-1 kubenswrapper[4771]: I1011 10:54:59.680084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782007 master-1 kubenswrapper[4771]: I1011 10:54:59.781811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782007 master-1 kubenswrapper[4771]: I1011 10:54:59.781941 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782007 master-1 kubenswrapper[4771]: I1011 10:54:59.781983 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2g2z9\" (UniqueName: \"kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782552 master-1 kubenswrapper[4771]: I1011 10:54:59.782021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782552 master-1 kubenswrapper[4771]: I1011 10:54:59.782068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782552 master-1 kubenswrapper[4771]: I1011 10:54:59.782104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782552 master-1 kubenswrapper[4771]: I1011 10:54:59.782203 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.782552 master-1 kubenswrapper[4771]: I1011 10:54:59.782319 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.783774 master-2 kubenswrapper[4776]: I1011 10:54:59.783690 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerStarted","Data":"a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f"} Oct 11 10:54:59.784424 master-1 kubenswrapper[4771]: I1011 10:54:59.784318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.784875 master-1 kubenswrapper[4771]: I1011 10:54:59.784821 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.786314 master-1 kubenswrapper[4771]: I1011 10:54:59.786257 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:54:59.786515 master-1 kubenswrapper[4771]: I1011 10:54:59.786326 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/319ddbbf14dc29e9dbd7eec9a997b70a9a11c6eca7f6496495d34ea4ac3ccad0/globalmount\"" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.786743 master-2 kubenswrapper[4776]: I1011 10:54:59.786705 4776 generic.go:334] "Generic (PLEG): container finished" podID="005f2579-b848-40fd-b3f3-2d3383344047" containerID="5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171" exitCode=0 Oct 11 10:54:59.787148 master-2 kubenswrapper[4776]: I1011 10:54:59.786861 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerDied","Data":"5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171"} Oct 11 10:54:59.787315 master-1 kubenswrapper[4771]: I1011 10:54:59.787240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.790397 master-1 kubenswrapper[4771]: I1011 10:54:59.789993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.791748 master-1 
kubenswrapper[4771]: I1011 10:54:59.791700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.804864 master-1 kubenswrapper[4771]: I1011 10:54:59.804622 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.815024 master-1 kubenswrapper[4771]: I1011 10:54:59.814953 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g2z9\" (UniqueName: \"kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:54:59.815572 master-2 kubenswrapper[4776]: I1011 10:54:59.815496 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-n7nm2" podStartSLOduration=3.407428689 podStartE2EDuration="22.815479894s" podCreationTimestamp="2025-10-11 10:54:37 +0000 UTC" firstStartedPulling="2025-10-11 10:54:38.506925535 +0000 UTC m=+1713.291352234" lastFinishedPulling="2025-10-11 10:54:57.91497673 +0000 UTC m=+1732.699403439" observedRunningTime="2025-10-11 10:54:59.813216892 +0000 UTC m=+1734.597643601" watchObservedRunningTime="2025-10-11 10:54:59.815479894 +0000 UTC m=+1734.599906603" Oct 11 10:55:00.455243 master-1 kubenswrapper[4771]: I1011 10:55:00.455107 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" path="/var/lib/kubelet/pods/499f9e94-a738-484d-ae4b-0cc221750d1c/volumes" Oct 11 10:55:01.243960 master-1 kubenswrapper[4771]: I1011 10:55:01.243876 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:01.380933 master-1 kubenswrapper[4771]: I1011 10:55:01.380780 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:01.643631 master-2 kubenswrapper[4776]: I1011 10:55:01.643572 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:55:01.748555 master-2 kubenswrapper[4776]: I1011 10:55:01.748491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.748764 master-2 kubenswrapper[4776]: I1011 10:55:01.748640 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.748764 master-2 kubenswrapper[4776]: I1011 10:55:01.748708 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" 
(UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.767321 master-2 kubenswrapper[4776]: I1011 10:55:01.764489 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6" (OuterVolumeSpecName: "kube-api-access-hf8n6") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "kube-api-access-hf8n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:01.785401 master-2 kubenswrapper[4776]: I1011 10:55:01.785321 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:01.804792 master-2 kubenswrapper[4776]: I1011 10:55:01.804617 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data" (OuterVolumeSpecName: "config-data") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:01.807146 master-2 kubenswrapper[4776]: I1011 10:55:01.807085 4776 generic.go:334] "Generic (PLEG): container finished" podID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerID="076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77" exitCode=0 Oct 11 10:55:01.807247 master-2 kubenswrapper[4776]: I1011 10:55:01.807145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerDied","Data":"076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77"} Oct 11 10:55:01.808628 master-2 kubenswrapper[4776]: I1011 10:55:01.808590 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:55:01.808628 master-2 kubenswrapper[4776]: I1011 10:55:01.808614 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerDied","Data":"cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960"} Oct 11 10:55:01.808762 master-2 kubenswrapper[4776]: I1011 10:55:01.808644 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960" Oct 11 10:55:01.810361 master-2 kubenswrapper[4776]: I1011 10:55:01.810298 4776 generic.go:334] "Generic (PLEG): container finished" podID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerID="b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a" exitCode=0 Oct 11 10:55:01.810361 master-2 kubenswrapper[4776]: I1011 10:55:01.810359 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerDied","Data":"b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a"} Oct 11 10:55:01.851016 master-2 
kubenswrapper[4776]: I1011 10:55:01.850947 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:01.851016 master-2 kubenswrapper[4776]: I1011 10:55:01.851013 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:01.851171 master-2 kubenswrapper[4776]: I1011 10:55:01.851024 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:02.162111 master-1 kubenswrapper[4771]: I1011 10:55:02.162032 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:55:02.168074 master-1 kubenswrapper[4771]: W1011 10:55:02.167999 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2deabbe8_397d_495c_aef9_afe91b4e9eeb.slice/crio-f7c97ec5b3e0ca2c1b2ecfb01745e5105213a6f53e9a75590979cbcd8d5e7e3f WatchSource:0}: Error finding container f7c97ec5b3e0ca2c1b2ecfb01745e5105213a6f53e9a75590979cbcd8d5e7e3f: Status 404 returned error can't find the container with id f7c97ec5b3e0ca2c1b2ecfb01745e5105213a6f53e9a75590979cbcd8d5e7e3f Oct 11 10:55:02.474727 master-1 kubenswrapper[4771]: I1011 10:55:02.474568 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerStarted","Data":"f7c97ec5b3e0ca2c1b2ecfb01745e5105213a6f53e9a75590979cbcd8d5e7e3f"} Oct 11 10:55:03.485164 master-1 kubenswrapper[4771]: I1011 10:55:03.485084 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerStarted","Data":"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"} Oct 11 10:55:03.485164 master-1 kubenswrapper[4771]: I1011 10:55:03.485148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerStarted","Data":"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"} Oct 11 10:55:03.519003 master-0 kubenswrapper[4790]: I1011 10:55:03.518865 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:03.520222 master-0 kubenswrapper[4790]: I1011 10:55:03.520106 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:03.530731 master-1 kubenswrapper[4771]: I1011 10:55:03.530572 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-1" podStartSLOduration=4.530539438 podStartE2EDuration="4.530539438s" podCreationTimestamp="2025-10-11 10:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:03.52298718 +0000 UTC m=+1735.497213631" watchObservedRunningTime="2025-10-11 10:55:03.530539438 +0000 UTC m=+1735.504765909" Oct 11 10:55:03.558256 master-0 kubenswrapper[4790]: I1011 10:55:03.558170 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:03.573574 master-0 kubenswrapper[4790]: I1011 10:55:03.573503 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:03.605784 master-0 
kubenswrapper[4790]: I1011 10:55:03.604115 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-1" podStartSLOduration=12.604066403000001 podStartE2EDuration="12.604066403s" podCreationTimestamp="2025-10-11 10:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:55.987808926 +0000 UTC m=+972.542269228" watchObservedRunningTime="2025-10-11 10:55:03.604066403 +0000 UTC m=+980.158526705" Oct 11 10:55:03.797298 master-2 kubenswrapper[4776]: I1011 10:55:03.797269 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:55:03.803032 master-2 kubenswrapper[4776]: I1011 10:55:03.802994 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:55:03.854040 master-2 kubenswrapper[4776]: I1011 10:55:03.853950 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerDied","Data":"f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9"} Oct 11 10:55:03.854241 master-2 kubenswrapper[4776]: I1011 10:55:03.854202 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9" Oct 11 10:55:03.854591 master-2 kubenswrapper[4776]: I1011 10:55:03.854517 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856732 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerDied","Data":"fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca"} Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856800 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca" Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856808 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:55:03.924014 master-2 kubenswrapper[4776]: I1011 10:55:03.923943 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924037 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924091 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924193 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924251 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924292 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924309 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924408 master-2 kubenswrapper[4776]: I1011 10:55:03.924331 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924408 master-2 kubenswrapper[4776]: I1011 10:55:03.924351 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.925896 master-2 kubenswrapper[4776]: I1011 10:55:03.925860 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.928834 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9" (OuterVolumeSpecName: "kube-api-access-rmbf9") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "kube-api-access-rmbf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.928968 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.929306 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85" (OuterVolumeSpecName: "kube-api-access-dkn85") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "kube-api-access-dkn85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.929343 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts" (OuterVolumeSpecName: "scripts") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.950032 master-2 kubenswrapper[4776]: I1011 10:55:03.949908 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.953776 master-2 kubenswrapper[4776]: I1011 10:55:03.953709 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.972608 master-2 kubenswrapper[4776]: I1011 10:55:03.972542 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data" (OuterVolumeSpecName: "config-data") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.976744 master-2 kubenswrapper[4776]: I1011 10:55:03.976686 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config" (OuterVolumeSpecName: "config") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:04.015241 master-0 kubenswrapper[4790]: I1011 10:55:04.015156 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:04.015241 master-0 kubenswrapper[4790]: I1011 10:55:04.015245 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.025964 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026001 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026011 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026021 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") on node 
\"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026031 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026040 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026048 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026056 4776 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026063 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.177469 master-2 kubenswrapper[4776]: I1011 10:55:04.177416 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:04.177765 master-2 kubenswrapper[4776]: E1011 10:55:04.177740 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005f2579-b848-40fd-b3f3-2d3383344047" containerName="heat-db-sync" Oct 11 10:55:04.177765 master-2 kubenswrapper[4776]: I1011 10:55:04.177760 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="005f2579-b848-40fd-b3f3-2d3383344047" 
containerName="heat-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: E1011 10:55:04.177780 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: I1011 10:55:04.177787 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: E1011 10:55:04.177813 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: I1011 10:55:04.177820 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.178021 master-2 kubenswrapper[4776]: I1011 10:55:04.177992 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.178021 master-2 kubenswrapper[4776]: I1011 10:55:04.178012 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.178096 master-2 kubenswrapper[4776]: I1011 10:55:04.178025 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="005f2579-b848-40fd-b3f3-2d3383344047" containerName="heat-db-sync" Oct 11 10:55:04.178973 master-2 kubenswrapper[4776]: I1011 10:55:04.178851 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.185823 master-2 kubenswrapper[4776]: I1011 10:55:04.185650 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts" Oct 11 10:55:04.187072 master-2 kubenswrapper[4776]: I1011 10:55:04.186855 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scheduler-config-data" Oct 11 10:55:04.187393 master-2 kubenswrapper[4776]: I1011 10:55:04.187208 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data" Oct 11 10:55:04.216645 master-2 kubenswrapper[4776]: I1011 10:55:04.216387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:04.265526 master-0 kubenswrapper[4790]: I1011 10:55:04.265370 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:04.267051 master-0 kubenswrapper[4790]: I1011 10:55:04.267019 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260282 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260441 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260516 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260618 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.274053 master-0 kubenswrapper[4790]: I1011 10:55:04.274001 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts"
Oct 11 10:55:04.274293 master-0 kubenswrapper[4790]: I1011 10:55:04.274260 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data"
Oct 11 10:55:04.274487 master-0 kubenswrapper[4790]: I1011 10:55:04.274464 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-volume-lvm-iscsi-config-data"
Oct 11 10:55:04.363276 master-0 kubenswrapper[4790]: I1011 10:55:04.363207 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"]
Oct 11 10:55:04.363276 master-0 kubenswrapper[4790]: I1011 10:55:04.363270 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c75cc6dff-n4dg7"]
Oct 11 10:55:04.368619 master-0 kubenswrapper[4790]: I1011 10:55:04.368559 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368799 master-0 kubenswrapper[4790]: I1011 10:55:04.368681 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368799 master-0 kubenswrapper[4790]: I1011 10:55:04.368750 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368799 master-0 kubenswrapper[4790]: I1011 10:55:04.368775 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368952 master-0 kubenswrapper[4790]: I1011 10:55:04.368804 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368952 master-0 kubenswrapper[4790]: I1011 10:55:04.368829 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54k8\" (UniqueName: \"kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368952 master-0 kubenswrapper[4790]: I1011 10:55:04.368860 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368952 master-0 kubenswrapper[4790]: I1011 10:55:04.368898 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.368952 master-0 kubenswrapper[4790]: I1011 10:55:04.368931 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369151 master-0 kubenswrapper[4790]: I1011 10:55:04.368962 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369151 master-0 kubenswrapper[4790]: I1011 10:55:04.368992 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369151 master-0 kubenswrapper[4790]: I1011 10:55:04.369034 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369151 master-0 kubenswrapper[4790]: I1011 10:55:04.369120 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369888 master-0 kubenswrapper[4790]: I1011 10:55:04.369167 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369888 master-0 kubenswrapper[4790]: I1011 10:55:04.369196 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.369888 master-0 kubenswrapper[4790]: I1011 10:55:04.369631 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.373837 master-0 kubenswrapper[4790]: I1011 10:55:04.372590 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c75cc6dff-n4dg7"]
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366288 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366434 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366481 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366602 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.371286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.377102 master-0 kubenswrapper[4790]: I1011 10:55:04.374208 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 10:55:04.377439 master-2 kubenswrapper[4776]: I1011 10:55:04.377407 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.377528 master-0 kubenswrapper[4790]: I1011 10:55:04.377456 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 10:55:04.377528 master-0 kubenswrapper[4790]: I1011 10:55:04.377505 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 11 10:55:04.377722 master-0 kubenswrapper[4790]: I1011 10:55:04.377671 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 10:55:04.378038 master-0 kubenswrapper[4790]: I1011 10:55:04.378000 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 10:55:04.392788 master-2 kubenswrapper[4776]: I1011 10:55:04.388620 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.400307 master-2 kubenswrapper[4776]: I1011 10:55:04.399776 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.409275 master-2 kubenswrapper[4776]: I1011 10:55:04.406824 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.435784 master-1 kubenswrapper[4771]: I1011 10:55:04.434885 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"]
Oct 11 10:55:04.438141 master-1 kubenswrapper[4771]: I1011 10:55:04.438075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.438985 master-2 kubenswrapper[4776]: I1011 10:55:04.434583 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-backup-0"]
Oct 11 10:55:04.438985 master-2 kubenswrapper[4776]: I1011 10:55:04.436447 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.443291 master-1 kubenswrapper[4771]: I1011 10:55:04.442531 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 11 10:55:04.443291 master-1 kubenswrapper[4771]: I1011 10:55:04.442826 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Oct 11 10:55:04.444279 master-1 kubenswrapper[4771]: I1011 10:55:04.444206 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 11 10:55:04.453810 master-2 kubenswrapper[4776]: I1011 10:55:04.451410 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-backup-config-data"
Oct 11 10:55:04.466104 master-1 kubenswrapper[4771]: I1011 10:55:04.466057 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"]
Oct 11 10:55:04.470735 master-0 kubenswrapper[4790]: I1011 10:55:04.470680 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471181 master-0 kubenswrapper[4790]: I1011 10:55:04.471163 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471280 master-0 kubenswrapper[4790]: I1011 10:55:04.471268 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471362 master-0 kubenswrapper[4790]: I1011 10:55:04.471350 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k54k8\" (UniqueName: \"kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471436 master-0 kubenswrapper[4790]: I1011 10:55:04.471421 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.471516 master-0 kubenswrapper[4790]: I1011 10:55:04.471503 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471588 master-0 kubenswrapper[4790]: I1011 10:55:04.471575 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471669 master-0 kubenswrapper[4790]: I1011 10:55:04.471657 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471770 master-0 kubenswrapper[4790]: I1011 10:55:04.471758 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471845 master-0 kubenswrapper[4790]: I1011 10:55:04.471834 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljhl\" (UniqueName: \"kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.471917 master-0 kubenswrapper[4790]: I1011 10:55:04.471904 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.471992 master-0 kubenswrapper[4790]: I1011 10:55:04.471979 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.472079 master-0 kubenswrapper[4790]: I1011 10:55:04.472068 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.472173 master-0 kubenswrapper[4790]: I1011 10:55:04.472151 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.472448 master-0 kubenswrapper[4790]: I1011 10:55:04.472434 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.472529 master-0 kubenswrapper[4790]: I1011 10:55:04.472519 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.472608 master-0 kubenswrapper[4790]: I1011 10:55:04.472593 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.472700 master-0 kubenswrapper[4790]: I1011 10:55:04.472689 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.472797 master-0 kubenswrapper[4790]: I1011 10:55:04.472785 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.472878 master-0 kubenswrapper[4790]: I1011 10:55:04.472866 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7"
Oct 11 10:55:04.472956 master-0 kubenswrapper[4790]: I1011 10:55:04.472944 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.473083 master-0 kubenswrapper[4790]: I1011 10:55:04.473070 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.473161 master-0 kubenswrapper[4790]: I1011 10:55:04.471211 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.473290 master-0 kubenswrapper[4790]: I1011 10:55:04.473276 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.474473 master-0 kubenswrapper[4790]: I1011 10:55:04.474431 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.474735 master-0 kubenswrapper[4790]: I1011 10:55:04.474691 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.474786 master-0 kubenswrapper[4790]: I1011 10:55:04.474758 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.474824 master-0 kubenswrapper[4790]: I1011 10:55:04.474780 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.474824 master-0 kubenswrapper[4790]: I1011 10:55:04.474797 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.475313 master-0 kubenswrapper[4790]: I1011 10:55:04.474827 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.475313 master-0 kubenswrapper[4790]: I1011 10:55:04.474896 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.475313 master-0 kubenswrapper[4790]: I1011 10:55:04.475001 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.478086 master-0 kubenswrapper[4790]: I1011 10:55:04.478065 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.479075 master-0 kubenswrapper[4790]: I1011 10:55:04.479020 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.487734 master-0 kubenswrapper[4790]: I1011 10:55:04.479768 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479090 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479159 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479194 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479217 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479341 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479362 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479384 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479480 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.518270 master-2 kubenswrapper[4776]: I1011 10:55:04.518231 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:04.520448 master-2 kubenswrapper[4776]: I1011 10:55:04.520425 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"]
Oct 11 10:55:04.522837 master-0 kubenswrapper[4790]: I1011 10:55:04.520201 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"]
Oct 11 10:55:04.522837 master-0 kubenswrapper[4790]: I1011 10:55:04.522190 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-vk5xz"
Oct 11 10:55:04.532560 master-0 kubenswrapper[4790]: I1011 10:55:04.532060 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Oct 11 10:55:04.532560 master-0 kubenswrapper[4790]: I1011 10:55:04.532346 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 11 10:55:04.532560 master-0 kubenswrapper[4790]: I1011 10:55:04.532498 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 11 10:55:04.541073 master-0 kubenswrapper[4790]: I1011 10:55:04.541023 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k54k8\" (UniqueName: \"kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:04.544813 master-0 kubenswrapper[4790]: I1011 10:55:04.544745 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"]
Oct 11 10:55:04.568698 master-2 kubenswrapper[4776]: I1011 10:55:04.568620 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"]
Oct 11 10:55:04.573699 master-2 kubenswrapper[4776]: I1011 10:55:04.570072 4776 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.574897 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.574958 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575004 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575049 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljhl\" (UniqueName: \"kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575079 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96jc\" (UniqueName: \"kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc\") pod 
\"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575101 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575120 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575152 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575181 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575201 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.575237 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.577627 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.579322 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.579542 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579578 master-0 kubenswrapper[4790]: I1011 10:55:04.579544 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578291 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578647 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578797 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 11 10:55:04.580507 master-0 kubenswrapper[4790]: I1011 10:55:04.580387 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582086 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582110 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582177 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582218 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582254 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582275 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: 
\"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582355 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582369 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.583643 master-2 kubenswrapper[4776]: I1011 10:55:04.583475 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.583643 master-2 kubenswrapper[4776]: I1011 10:55:04.583595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 
10:55:04.587324 master-2 kubenswrapper[4776]: I1011 10:55:04.585783 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.587324 master-2 kubenswrapper[4776]: I1011 10:55:04.586714 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.588025 master-2 kubenswrapper[4776]: I1011 10:55:04.588001 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.595564 master-1 kubenswrapper[4771]: I1011 10:55:04.595481 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.596844 master-1 kubenswrapper[4771]: I1011 10:55:04.595640 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.597756 master-1 kubenswrapper[4771]: I1011 10:55:04.597724 
4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.598105 master-1 kubenswrapper[4771]: I1011 10:55:04.598074 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zt7\" (UniqueName: \"kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.598196 master-1 kubenswrapper[4771]: I1011 10:55:04.598168 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.601739 master-0 kubenswrapper[4790]: I1011 10:55:04.598152 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:04.601739 master-0 kubenswrapper[4790]: I1011 10:55:04.600149 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c75cc6dff-n4dg7"] Oct 11 10:55:04.601739 master-0 kubenswrapper[4790]: E1011 10:55:04.600723 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-6ljhl], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" podUID="267b88dd-f511-44be-83eb-15e57143e363" Oct 11 10:55:04.618739 master-0 kubenswrapper[4790]: I1011 10:55:04.617757 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljhl\" (UniqueName: \"kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl\") pod \"dnsmasq-dns-7c75cc6dff-n4dg7\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.596879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.596919 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597170 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: 
\"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597202 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597219 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602299 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 
kubenswrapper[4776]: I1011 10:55:04.602525 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602499 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602606 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.609097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.615864 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.622203 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " 
pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.652751 master-2 kubenswrapper[4776]: I1011 10:55:04.633615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.677864 master-0 kubenswrapper[4790]: I1011 10:55:04.676447 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.677864 master-0 kubenswrapper[4790]: I1011 10:55:04.676559 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.677864 master-0 kubenswrapper[4790]: I1011 10:55:04.676676 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96jc\" (UniqueName: \"kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.677864 master-0 kubenswrapper[4790]: I1011 10:55:04.676700 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " 
pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.677864 master-0 kubenswrapper[4790]: I1011 10:55:04.676765 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.681768 master-0 kubenswrapper[4790]: I1011 10:55:04.680794 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.681768 master-0 kubenswrapper[4790]: I1011 10:55:04.681699 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.681768 master-0 kubenswrapper[4790]: I1011 10:55:04.681763 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.685744 master-0 kubenswrapper[4790]: I1011 10:55:04.685033 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:04.689741 master-2 kubenswrapper[4776]: I1011 
10:55:04.689694 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:04.693086 master-2 kubenswrapper[4776]: I1011 10:55:04.692717 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.702460 master-1 kubenswrapper[4771]: I1011 10:55:04.702237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.703040 master-1 kubenswrapper[4771]: I1011 10:55:04.702976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.703979 master-1 kubenswrapper[4771]: I1011 10:55:04.703926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.704125 master-1 kubenswrapper[4771]: I1011 10:55:04.704079 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zt7\" (UniqueName: \"kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:04.704345 master-1 kubenswrapper[4771]: I1011 10:55:04.704252 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.708062 master-1 kubenswrapper[4771]: I1011 10:55:04.707190 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.708725 master-1 kubenswrapper[4771]: I1011 10:55:04.708669 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.711402 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712154 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712327 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.713604 master-1 kubenswrapper[4771]: I1011 10:55:04.713413 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.714096 master-1 kubenswrapper[4771]: I1011 10:55:04.714048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.720763 master-0 kubenswrapper[4790]: I1011 10:55:04.719500 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96jc\" (UniqueName: \"kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc\") pod \"neutron-7887b79bcd-vk5xz\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") " pod="openstack/neutron-7887b79bcd-vk5xz"
Oct 11 10:55:04.730519 master-1 kubenswrapper[4771]: I1011 10:55:04.730457 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zt7\" (UniqueName: \"kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7\") pod \"neutron-7887b79bcd-stzg5\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") " pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.731307 master-2 kubenswrapper[4776]: I1011 10:55:04.731239 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"]
Oct 11 10:55:04.749364 master-1 kubenswrapper[4771]: I1011 10:55:04.749300 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-0"]
Oct 11 10:55:04.751268 master-1 kubenswrapper[4771]: I1011 10:55:04.751247 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.754036 master-1 kubenswrapper[4771]: I1011 10:55:04.753989 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data"
Oct 11 10:55:04.754293 master-1 kubenswrapper[4771]: I1011 10:55:04.754268 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data"
Oct 11 10:55:04.754576 master-1 kubenswrapper[4771]: I1011 10:55:04.754552 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts"
Oct 11 10:55:04.764432 master-1 kubenswrapper[4771]: I1011 10:55:04.764078 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-0"]
Oct 11 10:55:04.765824 master-1 kubenswrapper[4771]: I1011 10:55:04.765734 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:04.770488 master-0 kubenswrapper[4790]: I1011 10:55:04.768903 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-1"]
Oct 11 10:55:04.770488 master-0 kubenswrapper[4790]: I1011 10:55:04.770255 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.774754 master-0 kubenswrapper[4790]: I1011 10:55:04.774205 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data"
Oct 11 10:55:04.776829 master-2 kubenswrapper[4776]: I1011 10:55:04.776764 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-2"]
Oct 11 10:55:04.781137 master-2 kubenswrapper[4776]: I1011 10:55:04.780646 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.790010 master-2 kubenswrapper[4776]: I1011 10:55:04.789966 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data"
Oct 11 10:55:04.792717 master-2 kubenswrapper[4776]: I1011 10:55:04.792630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"]
Oct 11 10:55:04.813396 master-2 kubenswrapper[4776]: I1011 10:55:04.813351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813460 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813524 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813555 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813615 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.814022 master-2 kubenswrapper[4776]: I1011 10:55:04.813901 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.814022 master-2 kubenswrapper[4776]: I1011 10:55:04.813988 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814600 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814627 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.816800 master-0 kubenswrapper[4790]: I1011 10:55:04.805646 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-1"]
Oct 11 10:55:04.821197 master-2 kubenswrapper[4776]: I1011 10:55:04.821155 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.822540 master-2 kubenswrapper[4776]: I1011 10:55:04.822505 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.825025 master-2 kubenswrapper[4776]: I1011 10:55:04.824990 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.842508 master-2 kubenswrapper[4776]: I1011 10:55:04.842458 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.847827 master-2 kubenswrapper[4776]: I1011 10:55:04.847771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.857603 master-2 kubenswrapper[4776]: I1011 10:55:04.857558 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:04.896912 master-2 kubenswrapper[4776]: I1011 10:55:04.896857 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:04.904755 master-0 kubenswrapper[4790]: I1011 10:55:04.904506 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905088 master-0 kubenswrapper[4790]: I1011 10:55:04.904830 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905088 master-0 kubenswrapper[4790]: I1011 10:55:04.904951 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905088 master-0 kubenswrapper[4790]: I1011 10:55:04.904999 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905088 master-0 kubenswrapper[4790]: I1011 10:55:04.905032 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xghz\" (UniqueName: \"kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905282 master-0 kubenswrapper[4790]: I1011 10:55:04.905112 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905282 master-0 kubenswrapper[4790]: I1011 10:55:04.905163 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:04.905785 master-0 kubenswrapper[4790]: I1011 10:55:04.905730 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-vk5xz"
Oct 11 10:55:04.911333 master-1 kubenswrapper[4771]: I1011 10:55:04.911075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.911333 master-1 kubenswrapper[4771]: I1011 10:55:04.911197 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.911333 master-1 kubenswrapper[4771]: I1011 10:55:04.911256 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.911333 master-1 kubenswrapper[4771]: I1011 10:55:04.911297 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.911945 master-1 kubenswrapper[4771]: I1011 10:55:04.911867 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvlr\" (UniqueName: \"kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.912216 master-1 kubenswrapper[4771]: I1011 10:55:04.911981 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.912216 master-1 kubenswrapper[4771]: I1011 10:55:04.912114 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:04.915924 master-2 kubenswrapper[4776]: I1011 10:55:04.915864 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.915938 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.915976 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916085 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916167 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916212 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916633 master-2 kubenswrapper[4776]: I1011 10:55:04.916577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.916711 master-2 kubenswrapper[4776]: I1011 10:55:04.916646 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:04.916769 master-2 kubenswrapper[4776]: I1011 10:55:04.916752 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.916832 master-2 kubenswrapper[4776]: I1011 10:55:04.916809 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.917503 master-2 kubenswrapper[4776]: I1011 10:55:04.917470 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.918104 master-2 kubenswrapper[4776]: I1011 10:55:04.918066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.918310 master-2 kubenswrapper[4776]: I1011 10:55:04.918279 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.918516 master-2 kubenswrapper[4776]: I1011 10:55:04.918480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.920284 master-1 kubenswrapper[4771]: I1011 10:55:04.920202 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8j29d"
Oct 11 10:55:04.920844 master-2 kubenswrapper[4776]: I1011 10:55:04.920809 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:04.940049 master-2 kubenswrapper[4776]: I1011 10:55:04.939250 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007444 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007521 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xghz\" (UniqueName: \"kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007571 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007624 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007655 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007769 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007838 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.008299 master-0 kubenswrapper[4790]: I1011 10:55:05.007972 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.010235 master-0 kubenswrapper[4790]: I1011 10:55:05.010184 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.013926 master-1 kubenswrapper[4771]: I1011 10:55:05.013741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.013926 master-1 kubenswrapper[4771]: I1011 10:55:05.013823 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.013926 master-1 kubenswrapper[4771]: I1011 10:55:05.013873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.014390 master-1 kubenswrapper[4771]: I1011 10:55:05.014009 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvlr\" (UniqueName: \"kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.014390 master-1 kubenswrapper[4771]: I1011 10:55:05.014047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.014390 master-1 kubenswrapper[4771]: I1011 10:55:05.014105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.014390 master-1 kubenswrapper[4771]: I1011 10:55:05.014167 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.015248 master-1 kubenswrapper[4771]: I1011 10:55:05.015159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.015572 master-0 kubenswrapper[4790]: I1011 10:55:05.015528 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.015697 master-1 kubenswrapper[4771]: I1011 10:55:05.015637 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0"
Oct 11 10:55:05.018575 master-0 kubenswrapper[4790]: I1011 10:55:05.018548 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1"
Oct 11 10:55:05.019075 master-2 kubenswrapper[4776]: I1011 10:55:05.019012 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019096 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019144 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019187 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019232 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019266 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019312 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:05.019530 master-0 kubenswrapper[4790]: I1011 10:55:05.019469 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\"
(UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:05.019767 master-1 kubenswrapper[4771]: I1011 10:55:05.019710 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.019926 master-1 kubenswrapper[4771]: I1011 10:55:05.019869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.020055 master-1 kubenswrapper[4771]: I1011 10:55:05.019985 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.022717 master-2 kubenswrapper[4776]: I1011 10:55:05.020835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.022717 master-2 kubenswrapper[4776]: I1011 10:55:05.021094 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: 
\"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.023090 master-1 kubenswrapper[4771]: I1011 10:55:05.023053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.026886 master-2 kubenswrapper[4776]: I1011 10:55:05.026839 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.034182 master-2 kubenswrapper[4776]: I1011 10:55:05.027618 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.034953 master-0 kubenswrapper[4790]: I1011 10:55:05.025269 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:05.041927 master-2 kubenswrapper[4776]: I1011 10:55:05.041894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.053581 master-1 kubenswrapper[4771]: I1011 10:55:05.053537 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvlr\" (UniqueName: \"kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr\") pod \"cinder-b5802-api-0\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.053640 master-2 kubenswrapper[4776]: I1011 10:55:05.053595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.055035 master-0 kubenswrapper[4790]: I1011 10:55:05.052892 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:05.056831 master-2 kubenswrapper[4776]: W1011 10:55:05.056767 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e1008bd_5444_4490_8e34_8a7843bf5c45.slice/crio-b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a WatchSource:0}: Error finding container b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a: Status 404 returned error can't find the container with id b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a Oct 11 10:55:05.057063 master-2 kubenswrapper[4776]: I1011 10:55:05.057007 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:05.058353 master-0 kubenswrapper[4790]: I1011 10:55:05.058032 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xghz\" (UniqueName: \"kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz\") pod \"cinder-b5802-api-1\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " pod="openstack/cinder-b5802-api-1" Oct 11 
10:55:05.068809 master-2 kubenswrapper[4776]: I1011 10:55:05.068263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.075378 master-0 kubenswrapper[4790]: I1011 10:55:05.075345 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:05.091346 master-2 kubenswrapper[4776]: I1011 10:55:05.091311 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:05.095869 master-0 kubenswrapper[4790]: I1011 10:55:05.095774 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:05.109243 master-0 kubenswrapper[4790]: I1011 10:55:05.109180 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109513 master-0 kubenswrapper[4790]: I1011 10:55:05.109281 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109513 master-0 kubenswrapper[4790]: I1011 10:55:05.109409 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: 
\"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109513 master-0 kubenswrapper[4790]: I1011 10:55:05.109441 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109513 master-0 kubenswrapper[4790]: I1011 10:55:05.109481 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109673 master-0 kubenswrapper[4790]: I1011 10:55:05.109569 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ljhl\" (UniqueName: \"kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl\") pod \"267b88dd-f511-44be-83eb-15e57143e363\" (UID: \"267b88dd-f511-44be-83eb-15e57143e363\") " Oct 11 10:55:05.109842 master-0 kubenswrapper[4790]: I1011 10:55:05.109763 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config" (OuterVolumeSpecName: "config") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:05.110290 master-0 kubenswrapper[4790]: I1011 10:55:05.110260 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:05.111138 master-0 kubenswrapper[4790]: I1011 10:55:05.110747 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:05.111138 master-0 kubenswrapper[4790]: I1011 10:55:05.110817 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:05.111502 master-0 kubenswrapper[4790]: I1011 10:55:05.111428 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:05.113327 master-0 kubenswrapper[4790]: I1011 10:55:05.113299 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.113465 master-0 kubenswrapper[4790]: I1011 10:55:05.113450 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.113577 master-0 kubenswrapper[4790]: I1011 10:55:05.113562 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-svc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.113696 master-0 kubenswrapper[4790]: I1011 10:55:05.113677 4790 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.113832 master-0 kubenswrapper[4790]: I1011 10:55:05.113817 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267b88dd-f511-44be-83eb-15e57143e363-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.113913 master-2 kubenswrapper[4776]: I1011 10:55:05.113867 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.116041 master-0 kubenswrapper[4790]: I1011 10:55:05.115969 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl" (OuterVolumeSpecName: "kube-api-access-6ljhl") pod "267b88dd-f511-44be-83eb-15e57143e363" (UID: "267b88dd-f511-44be-83eb-15e57143e363"). InnerVolumeSpecName "kube-api-access-6ljhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:05.161632 master-0 kubenswrapper[4790]: I1011 10:55:05.161568 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:05.168808 master-1 kubenswrapper[4771]: I1011 10:55:05.168737 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-0" Oct 11 10:55:05.181123 master-0 kubenswrapper[4790]: W1011 10:55:05.181006 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d68c10_8e1c_4a92_86f6_e2925df0f714.slice/crio-3242bfe4a3531200a62ec255f320d35782117671872a6959f988bac26f9f7d3f WatchSource:0}: Error finding container 3242bfe4a3531200a62ec255f320d35782117671872a6959f988bac26f9f7d3f: Status 404 returned error can't find the container with id 3242bfe4a3531200a62ec255f320d35782117671872a6959f988bac26f9f7d3f Oct 11 10:55:05.217312 master-0 kubenswrapper[4790]: I1011 10:55:05.217257 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ljhl\" (UniqueName: \"kubernetes.io/projected/267b88dd-f511-44be-83eb-15e57143e363-kube-api-access-6ljhl\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:05.254390 master-1 kubenswrapper[4771]: I1011 10:55:05.254271 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8j29d" Oct 11 10:55:05.255491 master-0 kubenswrapper[4790]: 
I1011 10:55:05.255047 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:55:05.255491 master-0 kubenswrapper[4790]: I1011 10:55:05.255128 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-mh4z2" Oct 11 10:55:05.377977 master-2 kubenswrapper[4776]: I1011 10:55:05.377916 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:55:05.392196 master-2 kubenswrapper[4776]: I1011 10:55:05.392163 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:55:05.469967 master-1 kubenswrapper[4771]: W1011 10:55:05.469757 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362d815c_c6ec_48b0_9891_85d06ad00aed.slice/crio-f93602e6ba46cd0010c3c32ac26a6e16a985dc38b7a78af0515d53b800f6c9e5 WatchSource:0}: Error finding container f93602e6ba46cd0010c3c32ac26a6e16a985dc38b7a78af0515d53b800f6c9e5: Status 404 returned error can't find the container with id f93602e6ba46cd0010c3c32ac26a6e16a985dc38b7a78af0515d53b800f6c9e5 Oct 11 10:55:05.479055 master-1 kubenswrapper[4771]: I1011 10:55:05.478987 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"] Oct 11 10:55:05.509555 master-1 kubenswrapper[4771]: I1011 10:55:05.509474 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerStarted","Data":"f93602e6ba46cd0010c3c32ac26a6e16a985dc38b7a78af0515d53b800f6c9e5"} Oct 11 10:55:05.553061 master-0 kubenswrapper[4790]: I1011 10:55:05.552576 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"] Oct 11 10:55:05.558723 master-0 kubenswrapper[4790]: I1011 
10:55:05.557845 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:05.572061 master-0 kubenswrapper[4790]: W1011 10:55:05.571291 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7739fd2d_10b5_425d_acbf_f50630f07017.slice/crio-bff1b3163e12e8cb39a3fbc83d1e97e1c30281603df9047eeb4b5e8ea878efe5 WatchSource:0}: Error finding container bff1b3163e12e8cb39a3fbc83d1e97e1c30281603df9047eeb4b5e8ea878efe5: Status 404 returned error can't find the container with id bff1b3163e12e8cb39a3fbc83d1e97e1c30281603df9047eeb4b5e8ea878efe5 Oct 11 10:55:05.688174 master-1 kubenswrapper[4771]: I1011 10:55:05.683065 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:55:05.789909 master-2 kubenswrapper[4776]: I1011 10:55:05.789863 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:05.818237 master-2 kubenswrapper[4776]: W1011 10:55:05.817169 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod941fb918_e4b8_4ef7_9ad1_9af907c5593a.slice/crio-512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32 WatchSource:0}: Error finding container 512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32: Status 404 returned error can't find the container with id 512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32 Oct 11 10:55:05.885524 master-2 kubenswrapper[4776]: I1011 10:55:05.885425 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerStarted","Data":"512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32"} Oct 11 10:55:05.910220 master-2 kubenswrapper[4776]: I1011 10:55:05.910095 4776 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a"} Oct 11 10:55:05.935782 master-2 kubenswrapper[4776]: I1011 10:55:05.935241 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:55:05.955556 master-2 kubenswrapper[4776]: W1011 10:55:05.955483 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60f1a3e8_20d2_48e9_842c_9312ce07efe0.slice/crio-c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b WatchSource:0}: Error finding container c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b: Status 404 returned error can't find the container with id c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b Oct 11 10:55:05.960791 master-2 kubenswrapper[4776]: I1011 10:55:05.960762 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:06.075779 master-0 kubenswrapper[4790]: I1011 10:55:06.075607 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerStarted","Data":"72c3b882472f96be141ddefa786998e8b0390cd596d77062abb0fcaa4a2d580f"} Oct 11 10:55:06.077964 master-0 kubenswrapper[4790]: I1011 10:55:06.077915 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerStarted","Data":"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"} Oct 11 10:55:06.078052 master-0 kubenswrapper[4790]: I1011 10:55:06.077965 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" 
event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerStarted","Data":"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"} Oct 11 10:55:06.078052 master-0 kubenswrapper[4790]: I1011 10:55:06.077980 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerStarted","Data":"bff1b3163e12e8cb39a3fbc83d1e97e1c30281603df9047eeb4b5e8ea878efe5"} Oct 11 10:55:06.078201 master-2 kubenswrapper[4776]: W1011 10:55:06.078154 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e99b787_4e9b_4285_b175_63008b7e39de.slice/crio-3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c WatchSource:0}: Error finding container 3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c: Status 404 returned error can't find the container with id 3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c Oct 11 10:55:06.078422 master-0 kubenswrapper[4790]: I1011 10:55:06.078360 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:06.079494 master-0 kubenswrapper[4790]: I1011 10:55:06.079453 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c75cc6dff-n4dg7" Oct 11 10:55:06.080194 master-0 kubenswrapper[4790]: I1011 10:55:06.080144 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerStarted","Data":"3242bfe4a3531200a62ec255f320d35782117671872a6959f988bac26f9f7d3f"} Oct 11 10:55:06.108328 master-0 kubenswrapper[4790]: I1011 10:55:06.108110 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887b79bcd-vk5xz" podStartSLOduration=2.108072935 podStartE2EDuration="2.108072935s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:06.104085726 +0000 UTC m=+982.658546028" watchObservedRunningTime="2025-10-11 10:55:06.108072935 +0000 UTC m=+982.662533247" Oct 11 10:55:06.183561 master-0 kubenswrapper[4790]: I1011 10:55:06.183514 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c75cc6dff-n4dg7"] Oct 11 10:55:06.191152 master-0 kubenswrapper[4790]: I1011 10:55:06.191087 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c75cc6dff-n4dg7"] Oct 11 10:55:06.258032 master-2 kubenswrapper[4776]: W1011 10:55:06.257973 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64da8a05_f383_4643_b08d_639963f8bdd5.slice/crio-f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1 WatchSource:0}: Error finding container f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1: Status 404 returned error can't find the container with id f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1 Oct 11 10:55:06.284383 master-2 kubenswrapper[4776]: I1011 10:55:06.284284 4776 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:06.315454 master-0 kubenswrapper[4790]: I1011 10:55:06.315369 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267b88dd-f511-44be-83eb-15e57143e363" path="/var/lib/kubelet/pods/267b88dd-f511-44be-83eb-15e57143e363/volumes" Oct 11 10:55:06.412671 master-0 kubenswrapper[4790]: I1011 10:55:06.412508 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:06.412671 master-0 kubenswrapper[4790]: I1011 10:55:06.412660 4790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:55:06.501745 master-0 kubenswrapper[4790]: I1011 10:55:06.501611 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:55:06.525556 master-1 kubenswrapper[4771]: I1011 10:55:06.525371 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerStarted","Data":"48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276"} Oct 11 10:55:06.528536 master-1 kubenswrapper[4771]: I1011 10:55:06.528441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerStarted","Data":"99502f3eb6699cc67bcf11374ee8446bc01a1a157ce8024301c91ebed596f3f2"} Oct 11 10:55:06.528609 master-1 kubenswrapper[4771]: I1011 10:55:06.528549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerStarted","Data":"b94dfe1997cbb3d378d19012a9b6401bc1cef35489c7ea7be575908bfe56b3a0"} Oct 11 10:55:06.528696 master-1 kubenswrapper[4771]: I1011 10:55:06.528673 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:06.580813 master-1 kubenswrapper[4771]: I1011 10:55:06.580718 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887b79bcd-stzg5" podStartSLOduration=2.580698106 podStartE2EDuration="2.580698106s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:06.576513685 +0000 UTC m=+1738.550740146" watchObservedRunningTime="2025-10-11 10:55:06.580698106 +0000 UTC m=+1738.554924547"
Oct 11 10:55:06.592010 master-1 kubenswrapper[4771]: I1011 10:55:06.591952 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:55:06.592314 master-1 kubenswrapper[4771]: I1011 10:55:06.592268 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-0" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-log" containerID="cri-o://497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b" gracePeriod=30
Oct 11 10:55:06.593148 master-1 kubenswrapper[4771]: I1011 10:55:06.592912 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-0" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-httpd" containerID="cri-o://122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f" gracePeriod=30
Oct 11 10:55:06.921614 master-2 kubenswrapper[4776]: I1011 10:55:06.921548 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb"}
Oct 11 10:55:06.921614 master-2 kubenswrapper[4776]: I1011 10:55:06.921604 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c"}
Oct 11 10:55:06.922923 master-2 kubenswrapper[4776]: I1011 10:55:06.922873 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1"}
Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925469 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea"}
Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925505 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6"}
Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925520 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b"}
Oct 11 10:55:06.925826 master-2 kubenswrapper[4776]: I1011 10:55:06.925798 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7887b79bcd-4lcts"
Oct 11 10:55:06.929058 master-2 kubenswrapper[4776]: I1011 10:55:06.929017 4776 generic.go:334] "Generic (PLEG): container finished" podID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerID="164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c" exitCode=0
Oct 11 10:55:06.929416 master-2 kubenswrapper[4776]: I1011 10:55:06.929082 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c"}
Oct 11 10:55:06.933251 master-2 kubenswrapper[4776]: I1011 10:55:06.933210 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34"}
Oct 11 10:55:06.967606 master-2 kubenswrapper[4776]: I1011 10:55:06.966401 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887b79bcd-4lcts" podStartSLOduration=2.966381112 podStartE2EDuration="2.966381112s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:06.957550473 +0000 UTC m=+1741.741977182" watchObservedRunningTime="2025-10-11 10:55:06.966381112 +0000 UTC m=+1741.750807821"
Oct 11 10:55:07.121927 master-2 kubenswrapper[4776]: I1011 10:55:07.118709 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-2"]
Oct 11 10:55:07.539095 master-1 kubenswrapper[4771]: I1011 10:55:07.538938 4771 generic.go:334] "Generic (PLEG): container finished" podID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerID="497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b" exitCode=143
Oct 11 10:55:07.539095 master-1 kubenswrapper[4771]: I1011 10:55:07.539019 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerDied","Data":"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"}
Oct 11 10:55:07.945192 master-2 kubenswrapper[4776]: I1011 10:55:07.945146 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89"}
Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945298 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-2" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" containerID="cri-o://0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" gracePeriod=30
Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945539 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-2"
Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945828 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-2" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" containerID="cri-o://d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" gracePeriod=30
Oct 11 10:55:07.949978 master-2 kubenswrapper[4776]: I1011 10:55:07.949942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9"}
Oct 11 10:55:07.950433 master-2 kubenswrapper[4776]: I1011 10:55:07.950413 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204"}
Oct 11 10:55:07.952172 master-2 kubenswrapper[4776]: I1011 10:55:07.952142 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerStarted","Data":"8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807"}
Oct 11 10:55:07.952289 master-2 kubenswrapper[4776]: I1011 10:55:07.952271 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr"
Oct 11 10:55:07.954847 master-2 kubenswrapper[4776]: I1011 10:55:07.954807 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f"}
Oct 11 10:55:07.989217 master-2 kubenswrapper[4776]: I1011 10:55:07.989110 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-2" podStartSLOduration=3.989094302 podStartE2EDuration="3.989094302s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:07.986788439 +0000 UTC m=+1742.771215148" watchObservedRunningTime="2025-10-11 10:55:07.989094302 +0000 UTC m=+1742.773521011"
Oct 11 10:55:08.033366 master-2 kubenswrapper[4776]: I1011 10:55:08.033270 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-backup-0" podStartSLOduration=3.032780511 podStartE2EDuration="4.033243468s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:06.263764942 +0000 UTC m=+1741.048191651" lastFinishedPulling="2025-10-11 10:55:07.264227899 +0000 UTC m=+1742.048654608" observedRunningTime="2025-10-11 10:55:08.03036358 +0000 UTC m=+1742.814790289" watchObservedRunningTime="2025-10-11 10:55:08.033243468 +0000 UTC m=+1742.817670177"
Oct 11 10:55:08.109415 master-2 kubenswrapper[4776]: I1011 10:55:08.109116 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" podStartSLOduration=4.109091662 podStartE2EDuration="4.109091662s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:08.068158973 +0000 UTC m=+1742.852585692" watchObservedRunningTime="2025-10-11 10:55:08.109091662 +0000 UTC m=+1742.893518371"
Oct 11 10:55:08.597633 master-1 kubenswrapper[4771]: I1011 10:55:08.597552 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-848fcbb4df-cn592"
Oct 11 10:55:08.890489 master-0 kubenswrapper[4790]: I1011 10:55:08.890425 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-848fcbb4df-dr4lc"
Oct 11 10:55:08.989460 master-2 kubenswrapper[4776]: I1011 10:55:08.987389 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-848fcbb4df-54n8l"
Oct 11 10:55:08.991712 master-2 kubenswrapper[4776]: I1011 10:55:08.991628 4776 generic.go:334] "Generic (PLEG): container finished" podID="7e99b787-4e9b-4285-b175-63008b7e39de" containerID="0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" exitCode=143
Oct 11 10:55:08.994449 master-2 kubenswrapper[4776]: I1011 10:55:08.992830 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb"}
Oct 11 10:55:09.013946 master-2 kubenswrapper[4776]: I1011 10:55:09.013825 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-scheduler-0" podStartSLOduration=3.9476782999999998 podStartE2EDuration="5.013802196s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:05.065410775 +0000 UTC m=+1739.849837484" lastFinishedPulling="2025-10-11 10:55:06.131534671 +0000 UTC m=+1740.915961380" observedRunningTime="2025-10-11 10:55:08.107442797 +0000 UTC m=+1742.891869506" watchObservedRunningTime="2025-10-11 10:55:09.013802196 +0000 UTC m=+1743.798228905"
Oct 11 10:55:09.522931 master-2 kubenswrapper[4776]: I1011 10:55:09.520445 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-scheduler-0"
Oct 11 10:55:09.859480 master-2 kubenswrapper[4776]: I1011 10:55:09.859351 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-backup-0"
Oct 11 10:55:10.487866 master-1 kubenswrapper[4771]: I1011 10:55:10.487806 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:10.592887 master-1 kubenswrapper[4771]: I1011 10:55:10.592812 4771 generic.go:334] "Generic (PLEG): container finished" podID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerID="122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f" exitCode=0
Oct 11 10:55:10.592887 master-1 kubenswrapper[4771]: I1011 10:55:10.592884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerDied","Data":"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"}
Oct 11 10:55:10.593328 master-1 kubenswrapper[4771]: I1011 10:55:10.592896 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:10.593328 master-1 kubenswrapper[4771]: I1011 10:55:10.592944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"18861a21-406e-479b-8712-9a62ca2ebf4a","Type":"ContainerDied","Data":"51aa777863a3d17bf81dc45f1659ccef0c9c30b6b9bf5305b555b52a6a626104"}
Oct 11 10:55:10.593328 master-1 kubenswrapper[4771]: I1011 10:55:10.592972 4771 scope.go:117] "RemoveContainer" containerID="122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"
Oct 11 10:55:10.600447 master-1 kubenswrapper[4771]: I1011 10:55:10.600392 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600543 master-1 kubenswrapper[4771]: I1011 10:55:10.600531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlzm\" (UniqueName: \"kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600640 master-1 kubenswrapper[4771]: I1011 10:55:10.600613 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600713 master-1 kubenswrapper[4771]: I1011 10:55:10.600657 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600765 master-1 kubenswrapper[4771]: I1011 10:55:10.600717 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600809 master-1 kubenswrapper[4771]: I1011 10:55:10.600769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.600853 master-1 kubenswrapper[4771]: I1011 10:55:10.600828 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.601094 master-1 kubenswrapper[4771]: I1011 10:55:10.601015 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"18861a21-406e-479b-8712-9a62ca2ebf4a\" (UID: \"18861a21-406e-479b-8712-9a62ca2ebf4a\") "
Oct 11 10:55:10.602536 master-1 kubenswrapper[4771]: I1011 10:55:10.602498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:55:10.602647 master-1 kubenswrapper[4771]: I1011 10:55:10.602598 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs" (OuterVolumeSpecName: "logs") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:55:10.604982 master-1 kubenswrapper[4771]: I1011 10:55:10.604917 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm" (OuterVolumeSpecName: "kube-api-access-5jlzm") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "kube-api-access-5jlzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:10.614200 master-1 kubenswrapper[4771]: I1011 10:55:10.614107 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts" (OuterVolumeSpecName: "scripts") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:10.618109 master-1 kubenswrapper[4771]: I1011 10:55:10.618048 4771 scope.go:117] "RemoveContainer" containerID="497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"
Oct 11 10:55:10.626659 master-1 kubenswrapper[4771]: I1011 10:55:10.626609 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b" (OuterVolumeSpecName: "glance") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 11 10:55:10.628523 master-1 kubenswrapper[4771]: I1011 10:55:10.628473 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:10.649300 master-1 kubenswrapper[4771]: I1011 10:55:10.649238 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data" (OuterVolumeSpecName: "config-data") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:10.658248 master-1 kubenswrapper[4771]: I1011 10:55:10.658073 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18861a21-406e-479b-8712-9a62ca2ebf4a" (UID: "18861a21-406e-479b-8712-9a62ca2ebf4a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:10.692716 master-1 kubenswrapper[4771]: I1011 10:55:10.692614 4771 scope.go:117] "RemoveContainer" containerID="122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"
Oct 11 10:55:10.693653 master-1 kubenswrapper[4771]: E1011 10:55:10.693575 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f\": container with ID starting with 122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f not found: ID does not exist" containerID="122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"
Oct 11 10:55:10.693892 master-1 kubenswrapper[4771]: I1011 10:55:10.693701 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f"} err="failed to get container status \"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f\": rpc error: code = NotFound desc = could not find container \"122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f\": container with ID starting with 122528239cb245efc14e6d8ccc44a5f754f4d71a3448c18c3a8d172db64e177f not found: ID does not exist"
Oct 11 10:55:10.693892 master-1 kubenswrapper[4771]: I1011 10:55:10.693792 4771 scope.go:117] "RemoveContainer" containerID="497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"
Oct 11 10:55:10.694718 master-1 kubenswrapper[4771]: E1011 10:55:10.694666 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b\": container with ID starting with 497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b not found: ID does not exist" containerID="497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"
Oct 11 10:55:10.694779 master-1 kubenswrapper[4771]: I1011 10:55:10.694719 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b"} err="failed to get container status \"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b\": rpc error: code = NotFound desc = could not find container \"497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b\": container with ID starting with 497e4e9e0fe8900eab62dbf87a7d7ab2dcbd7bba959b0ab4451a71e7c8f7461b not found: ID does not exist"
Oct 11 10:55:10.705039 master-1 kubenswrapper[4771]: I1011 10:55:10.704993 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-public-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705039 master-1 kubenswrapper[4771]: I1011 10:55:10.705038 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705259 master-1 kubenswrapper[4771]: I1011 10:55:10.705221 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705307 master-1 kubenswrapper[4771]: I1011 10:55:10.705241 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-logs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705574 master-1 kubenswrapper[4771]: I1011 10:55:10.705469 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") on node \"master-1\" "
Oct 11 10:55:10.705574 master-1 kubenswrapper[4771]: I1011 10:55:10.705494 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18861a21-406e-479b-8712-9a62ca2ebf4a-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705574 master-1 kubenswrapper[4771]: I1011 10:55:10.705508 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jlzm\" (UniqueName: \"kubernetes.io/projected/18861a21-406e-479b-8712-9a62ca2ebf4a-kube-api-access-5jlzm\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.705574 master-1 kubenswrapper[4771]: I1011 10:55:10.705520 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18861a21-406e-479b-8712-9a62ca2ebf4a-httpd-run\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.732773 master-1 kubenswrapper[4771]: I1011 10:55:10.732637 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 11 10:55:10.733054 master-1 kubenswrapper[4771]: I1011 10:55:10.733005 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9" (UniqueName: "kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b") on node "master-1"
Oct 11 10:55:10.807869 master-1 kubenswrapper[4771]: I1011 10:55:10.807811 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:10.847909 master-1 kubenswrapper[4771]: I1011 10:55:10.847825 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-748bbfcf89-vpkvr"]
Oct 11 10:55:10.848313 master-1 kubenswrapper[4771]: E1011 10:55:10.848202 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-httpd"
Oct 11 10:55:10.848313 master-1 kubenswrapper[4771]: I1011 10:55:10.848220 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-httpd"
Oct 11 10:55:10.848313 master-1 kubenswrapper[4771]: E1011 10:55:10.848245 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-log"
Oct 11 10:55:10.848313 master-1 kubenswrapper[4771]: I1011 10:55:10.848255 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-log"
Oct 11 10:55:10.848526 master-1 kubenswrapper[4771]: I1011 10:55:10.848468 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-log"
Oct 11 10:55:10.848526 master-1 kubenswrapper[4771]: I1011 10:55:10.848496 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" containerName="glance-httpd"
Oct 11 10:55:10.849688 master-1 kubenswrapper[4771]: I1011 10:55:10.849650 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.852487 master-1 kubenswrapper[4771]: I1011 10:55:10.852411 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Oct 11 10:55:10.852487 master-1 kubenswrapper[4771]: I1011 10:55:10.852457 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Oct 11 10:55:10.864933 master-1 kubenswrapper[4771]: I1011 10:55:10.864878 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-vpkvr"]
Oct 11 10:55:10.908841 master-1 kubenswrapper[4771]: I1011 10:55:10.908623 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-httpd-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.908948 master-1 kubenswrapper[4771]: I1011 10:55:10.908884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-ovndb-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.908997 master-1 kubenswrapper[4771]: I1011 10:55:10.908949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6gt\" (UniqueName: \"kubernetes.io/projected/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-kube-api-access-tq6gt\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.908997 master-1 kubenswrapper[4771]: I1011 10:55:10.908979 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-combined-ca-bundle\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.909069 master-1 kubenswrapper[4771]: I1011 10:55:10.909024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-internal-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.909069 master-1 kubenswrapper[4771]: I1011 10:55:10.909059 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-public-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.909155 master-1 kubenswrapper[4771]: I1011 10:55:10.909086 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:10.938469 master-1 kubenswrapper[4771]: I1011 10:55:10.938404 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:55:10.945455 master-1 kubenswrapper[4771]: I1011 10:55:10.945400 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:55:10.966383 master-1 kubenswrapper[4771]: I1011 10:55:10.966302 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:55:10.967796 master-1 kubenswrapper[4771]: I1011 10:55:10.967760 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:10.973570 master-1 kubenswrapper[4771]: I1011 10:55:10.971281 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 11 10:55:10.973570 master-1 kubenswrapper[4771]: I1011 10:55:10.971595 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data"
Oct 11 10:55:10.991413 master-1 kubenswrapper[4771]: I1011 10:55:10.991341 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:55:11.010215 master-2 kubenswrapper[4776]: I1011 10:55:11.010149 4776 generic.go:334] "Generic (PLEG): container finished" podID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerID="a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f" exitCode=0
Oct 11 10:55:11.010713 master-2 kubenswrapper[4776]: I1011 10:55:11.010230 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f"}
Oct 11 10:55:11.015439 master-1 kubenswrapper[4771]: I1011 10:55:11.015318 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-internal-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-public-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-httpd-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-ovndb-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015639 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq6gt\" (UniqueName: \"kubernetes.io/projected/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-kube-api-access-tq6gt\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.015657 master-1 kubenswrapper[4771]: I1011 10:55:11.015658 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-combined-ca-bundle\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.021133 master-1 kubenswrapper[4771]: I1011 10:55:11.021066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-ovndb-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.021772 master-1 kubenswrapper[4771]: I1011 10:55:11.021725 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-public-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.022811 master-1 kubenswrapper[4771]: I1011 10:55:11.022785 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-httpd-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.025422 master-1 kubenswrapper[4771]: I1011 10:55:11.023865 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-internal-tls-certs\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.025422 master-1 kubenswrapper[4771]: I1011 10:55:11.025008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-config\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.026078 master-1 kubenswrapper[4771]: I1011 10:55:11.026045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-combined-ca-bundle\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.045397 master-1 kubenswrapper[4771]: I1011 10:55:11.043308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq6gt\" (UniqueName: \"kubernetes.io/projected/5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b-kube-api-access-tq6gt\") pod \"neutron-748bbfcf89-vpkvr\" (UID: \"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b\") " pod="openstack/neutron-748bbfcf89-vpkvr"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.120872 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.120974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.121054 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.121138 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.121279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jr4\" (UniqueName: \"kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121386 master-1 kubenswrapper[4771]: I1011 10:55:11.121314 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:55:11.121970 master-1 kubenswrapper[4771]: I1011 10:55:11.121654 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: 
\"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.121970 master-1 kubenswrapper[4771]: I1011 10:55:11.121734 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.211420 master-1 kubenswrapper[4771]: I1011 10:55:11.208095 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-748bbfcf89-vpkvr" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.226636 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 
10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227419 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227461 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227505 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227582 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jr4\" (UniqueName: \"kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.227603 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " 
pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.228386 master-1 kubenswrapper[4771]: I1011 10:55:11.228309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.230469 master-1 kubenswrapper[4771]: I1011 10:55:11.230404 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.233421 master-1 kubenswrapper[4771]: I1011 10:55:11.231217 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:55:11.233421 master-1 kubenswrapper[4771]: I1011 10:55:11.231287 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/643ba808821ea6db76a2042d255ba68bbc43444ed3cc7e332598424f5540da0c/globalmount\"" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.233421 master-1 kubenswrapper[4771]: I1011 10:55:11.233151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.238424 master-1 kubenswrapper[4771]: I1011 10:55:11.234844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.238424 master-1 kubenswrapper[4771]: I1011 10:55:11.238170 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.240255 master-1 kubenswrapper[4771]: I1011 10:55:11.239619 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.251796 master-1 kubenswrapper[4771]: I1011 10:55:11.251676 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jr4\" (UniqueName: \"kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:11.406077 master-1 kubenswrapper[4771]: I1011 10:55:11.382453 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.406077 master-1 kubenswrapper[4771]: I1011 10:55:11.382517 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.414771 master-1 kubenswrapper[4771]: I1011 10:55:11.413952 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.432102 master-1 kubenswrapper[4771]: I1011 10:55:11.431337 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.615000 master-1 kubenswrapper[4771]: I1011 10:55:11.614936 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.616204 master-1 kubenswrapper[4771]: I1011 10:55:11.616169 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:11.802245 master-2 kubenswrapper[4776]: I1011 10:55:11.802109 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstackclient"] Oct 11 10:55:11.804057 master-2 kubenswrapper[4776]: I1011 10:55:11.803868 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 10:55:11.807479 master-2 kubenswrapper[4776]: I1011 10:55:11.807428 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 11 10:55:11.807787 master-2 kubenswrapper[4776]: I1011 10:55:11.807754 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.879986 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.891903 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.891956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.892052 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: 
I1011 10:55:11.892120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.919407 master-1 kubenswrapper[4771]: I1011 10:55:11.919323 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-vpkvr"] Oct 11 10:55:11.993695 master-2 kubenswrapper[4776]: I1011 10:55:11.993628 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993734 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: 
\"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.994717 master-2 kubenswrapper[4776]: I1011 10:55:11.994693 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.997239 master-2 kubenswrapper[4776]: I1011 10:55:11.997166 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.998374 master-2 kubenswrapper[4776]: I1011 10:55:11.998342 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:12.014276 master-2 kubenswrapper[4776]: I1011 10:55:12.014227 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:12.138775 master-2 kubenswrapper[4776]: I1011 10:55:12.138270 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 11 10:55:12.270661 master-1 kubenswrapper[4771]: I1011 10:55:12.270596 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:12.460079 master-1 kubenswrapper[4771]: I1011 10:55:12.458712 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18861a21-406e-479b-8712-9a62ca2ebf4a" path="/var/lib/kubelet/pods/18861a21-406e-479b-8712-9a62ca2ebf4a/volumes" Oct 11 10:55:12.485619 master-1 kubenswrapper[4771]: I1011 10:55:12.485394 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:12.780705 master-2 kubenswrapper[4776]: I1011 10:55:12.780600 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 10:55:12.838216 master-2 kubenswrapper[4776]: W1011 10:55:12.838156 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1271fdd_4436_4935_b271_89ffa5394bc3.slice/crio-3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208 WatchSource:0}: Error finding container 3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208: Status 404 returned error can't find the container with id 3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208 Oct 11 10:55:12.846741 master-2 kubenswrapper[4776]: I1011 10:55:12.846702 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:55:12.881414 master-2 kubenswrapper[4776]: I1011 10:55:12.881361 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:55:12.921439 master-2 kubenswrapper[4776]: I1011 10:55:12.921388 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.921883 master-2 kubenswrapper[4776]: I1011 10:55:12.921852 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922033 master-2 kubenswrapper[4776]: I1011 10:55:12.922002 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922079 master-2 kubenswrapper[4776]: I1011 10:55:12.922041 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922289 master-2 kubenswrapper[4776]: I1011 10:55:12.922260 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922332 master-2 kubenswrapper[4776]: I1011 10:55:12.922292 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.924103 master-2 kubenswrapper[4776]: I1011 10:55:12.923713 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:12.925197 master-2 kubenswrapper[4776]: I1011 10:55:12.925173 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 11 10:55:12.927954 master-2 kubenswrapper[4776]: I1011 10:55:12.927895 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5" (OuterVolumeSpecName: "kube-api-access-fh8v5") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "kube-api-access-fh8v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:12.929144 master-2 kubenswrapper[4776]: I1011 10:55:12.929080 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts" (OuterVolumeSpecName: "scripts") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:12.948347 master-2 kubenswrapper[4776]: I1011 10:55:12.944067 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data" (OuterVolumeSpecName: "config-data") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:12.975264 master-2 kubenswrapper[4776]: I1011 10:55:12.975195 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031931 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031974 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031984 4776 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031992 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.032537 master-2 kubenswrapper[4776]: I1011 10:55:13.032004 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.032537 master-2 kubenswrapper[4776]: I1011 10:55:13.032012 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042006 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" 
event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789"} Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042052 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789" Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042131 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:55:13.047714 master-2 kubenswrapper[4776]: I1011 10:55:13.047640 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c1271fdd-4436-4935-b271-89ffa5394bc3","Type":"ContainerStarted","Data":"3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208"} Oct 11 10:55:13.629064 master-1 kubenswrapper[4771]: I1011 10:55:13.628980 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:55:13.629064 master-1 kubenswrapper[4771]: I1011 10:55:13.629032 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:55:13.658974 master-1 kubenswrapper[4771]: I1011 10:55:13.657926 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:13.754501 master-1 kubenswrapper[4771]: I1011 10:55:13.754328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:55:13.985353 master-2 kubenswrapper[4776]: I1011 10:55:13.985290 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:13.985639 master-2 kubenswrapper[4776]: I1011 10:55:13.985558 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" 
podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" containerID="cri-o://c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" gracePeriod=30 Oct 11 10:55:13.986603 master-2 kubenswrapper[4776]: I1011 10:55:13.985998 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" containerID="cri-o://30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" gracePeriod=30 Oct 11 10:55:14.771656 master-2 kubenswrapper[4776]: I1011 10:55:14.771606 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:14.935893 master-1 kubenswrapper[4771]: I1011 10:55:14.935814 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-656ddc8b67-kfkzr"] Oct 11 10:55:14.937849 master-1 kubenswrapper[4771]: I1011 10:55:14.937638 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:14.943914 master-1 kubenswrapper[4771]: I1011 10:55:14.941215 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Oct 11 10:55:14.983850 master-2 kubenswrapper[4776]: I1011 10:55:14.979425 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:14.985743 master-1 kubenswrapper[4771]: I1011 10:55:14.985328 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-656ddc8b67-kfkzr"] Oct 11 10:55:15.042075 master-1 kubenswrapper[4771]: I1011 10:55:15.041494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9gw\" (UniqueName: \"kubernetes.io/projected/c6af8eba-f8bf-47f6-8313-7a902aeb170f-kube-api-access-lx9gw\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.042075 master-1 kubenswrapper[4771]: I1011 10:55:15.041800 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-config\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.042075 master-1 kubenswrapper[4771]: I1011 10:55:15.042001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-combined-ca-bundle\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.085857 master-2 kubenswrapper[4776]: 
I1011 10:55:15.085759 4776 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerID="c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" exitCode=143 Oct 11 10:55:15.087267 master-2 kubenswrapper[4776]: I1011 10:55:15.087054 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-scheduler-0" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" containerID="cri-o://664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" gracePeriod=30 Oct 11 10:55:15.087463 master-2 kubenswrapper[4776]: I1011 10:55:15.087426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d"} Oct 11 10:55:15.087522 master-2 kubenswrapper[4776]: I1011 10:55:15.087476 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-scheduler-0" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" containerID="cri-o://5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" gracePeriod=30 Oct 11 10:55:15.094201 master-2 kubenswrapper[4776]: I1011 10:55:15.093980 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:15.096045 master-2 kubenswrapper[4776]: I1011 10:55:15.095841 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:15.145201 master-1 kubenswrapper[4771]: I1011 10:55:15.144944 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-combined-ca-bundle\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: 
\"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.145201 master-1 kubenswrapper[4771]: I1011 10:55:15.145065 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9gw\" (UniqueName: \"kubernetes.io/projected/c6af8eba-f8bf-47f6-8313-7a902aeb170f-kube-api-access-lx9gw\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.145201 master-1 kubenswrapper[4771]: I1011 10:55:15.145188 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-config\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.164003 master-1 kubenswrapper[4771]: I1011 10:55:15.163604 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-config\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.167417 master-2 kubenswrapper[4776]: I1011 10:55:15.167333 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:15.172309 master-1 kubenswrapper[4771]: I1011 10:55:15.169548 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6af8eba-f8bf-47f6-8313-7a902aeb170f-combined-ca-bundle\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.178439 master-2 kubenswrapper[4776]: I1011 10:55:15.177895 4776 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:15.198251 master-1 kubenswrapper[4771]: I1011 10:55:15.198177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9gw\" (UniqueName: \"kubernetes.io/projected/c6af8eba-f8bf-47f6-8313-7a902aeb170f-kube-api-access-lx9gw\") pod \"ironic-neutron-agent-656ddc8b67-kfkzr\" (UID: \"c6af8eba-f8bf-47f6-8313-7a902aeb170f\") " pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.213150 master-0 kubenswrapper[4790]: I1011 10:55:15.212973 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:15.219284 master-0 kubenswrapper[4790]: I1011 10:55:15.219247 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.223628 master-0 kubenswrapper[4790]: I1011 10:55:15.223570 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:55:15.223775 master-0 kubenswrapper[4790]: I1011 10:55:15.223601 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:55:15.223775 master-0 kubenswrapper[4790]: I1011 10:55:15.223605 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:55:15.223865 master-0 kubenswrapper[4790]: I1011 10:55:15.223811 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:55:15.226912 master-1 kubenswrapper[4771]: I1011 10:55:15.226851 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:15.228213 master-0 kubenswrapper[4790]: I1011 10:55:15.227652 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:55:15.230835 master-1 kubenswrapper[4771]: I1011 10:55:15.228892 4771 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.231749 master-1 kubenswrapper[4771]: I1011 10:55:15.231415 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Oct 11 10:55:15.231749 master-1 kubenswrapper[4771]: I1011 10:55:15.231568 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Oct 11 10:55:15.233650 master-1 kubenswrapper[4771]: I1011 10:55:15.233628 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:55:15.234373 master-1 kubenswrapper[4771]: I1011 10:55:15.233879 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Oct 11 10:55:15.256560 master-0 kubenswrapper[4790]: I1011 10:55:15.248799 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:15.257501 master-1 kubenswrapper[4771]: I1011 10:55:15.249752 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:15.276759 master-1 kubenswrapper[4771]: I1011 10:55:15.276708 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:15.308096 master-0 kubenswrapper[4790]: I1011 10:55:15.308011 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.309042 master-0 kubenswrapper[4790]: I1011 10:55:15.308495 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.309042 master-0 kubenswrapper[4790]: I1011 10:55:15.308591 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbm5\" (UniqueName: \"kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.309042 master-0 kubenswrapper[4790]: I1011 10:55:15.308625 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.309042 master-0 kubenswrapper[4790]: I1011 10:55:15.308672 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.309042 master-0 kubenswrapper[4790]: I1011 10:55:15.308762 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.351434 master-1 kubenswrapper[4771]: I1011 10:55:15.351190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.351434 master-1 kubenswrapper[4771]: I1011 10:55:15.351290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.351434 master-1 kubenswrapper[4771]: I1011 10:55:15.351378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.351434 master-1 kubenswrapper[4771]: I1011 10:55:15.351430 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.352031 master-1 kubenswrapper[4771]: I1011 10:55:15.351471 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4pzb\" (UniqueName: \"kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.352031 master-1 kubenswrapper[4771]: I1011 10:55:15.351617 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.352191 master-1 kubenswrapper[4771]: I1011 10:55:15.352143 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.352397 master-1 kubenswrapper[4771]: I1011 10:55:15.352336 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.411352 master-0 kubenswrapper[4790]: I1011 
10:55:15.411284 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.411352 master-0 kubenswrapper[4790]: I1011 10:55:15.411354 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tbm5\" (UniqueName: \"kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.411649 master-0 kubenswrapper[4790]: I1011 10:55:15.411386 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.411649 master-0 kubenswrapper[4790]: I1011 10:55:15.411414 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.411649 master-0 kubenswrapper[4790]: I1011 10:55:15.411461 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.411649 master-0 kubenswrapper[4790]: I1011 
10:55:15.411561 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.412787 master-0 kubenswrapper[4790]: I1011 10:55:15.412740 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.413466 master-0 kubenswrapper[4790]: I1011 10:55:15.413431 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.414523 master-0 kubenswrapper[4790]: I1011 10:55:15.414481 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.414671 master-0 kubenswrapper[4790]: I1011 10:55:15.414641 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.417308 master-0 kubenswrapper[4790]: I1011 10:55:15.415298 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454584 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454730 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454814 4771 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454849 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454888 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.454919 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4pzb\" (UniqueName: \"kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.469469 master-1 kubenswrapper[4771]: I1011 10:55:15.455226 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.483391 master-1 kubenswrapper[4771]: I1011 10:55:15.475767 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.483391 master-1 kubenswrapper[4771]: I1011 10:55:15.476308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.483391 master-1 kubenswrapper[4771]: I1011 10:55:15.480455 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.485425 master-1 kubenswrapper[4771]: I1011 10:55:15.484323 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.503402 master-1 kubenswrapper[4771]: I1011 10:55:15.487508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4pzb\" (UniqueName: \"kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.503402 master-1 kubenswrapper[4771]: I1011 10:55:15.490447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data\") 
pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.508398 master-1 kubenswrapper[4771]: I1011 10:55:15.504097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo\") pod \"ironic-7cddc977f5-9ddgm\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.516049 master-0 kubenswrapper[4790]: I1011 10:55:15.515917 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tbm5\" (UniqueName: \"kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5\") pod \"dnsmasq-dns-bb968bd67-mk4sp\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.586694 master-1 kubenswrapper[4771]: I1011 10:55:15.582500 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:15.615874 master-0 kubenswrapper[4790]: I1011 10:55:15.615792 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:15.733654 master-2 kubenswrapper[4776]: I1011 10:55:15.733587 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: E1011 10:55:15.733943 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: I1011 10:55:15.733956 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: E1011 10:55:15.733967 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="init" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: I1011 10:55:15.733976 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="init" Oct 11 10:55:15.734732 master-2 kubenswrapper[4776]: I1011 10:55:15.734117 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.735050 master-2 kubenswrapper[4776]: I1011 10:55:15.735030 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744153 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744548 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744747 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744890 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 11 10:55:15.753405 master-2 kubenswrapper[4776]: I1011 10:55:15.753343 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801242 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: 
I1011 10:55:15.801361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801438 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801500 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801576 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801649 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: 
\"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801741 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.900984 master-2 kubenswrapper[4776]: I1011 10:55:15.900923 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.902107 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903079 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903119 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod 
\"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903197 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903260 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903282 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod 
\"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903956 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.904269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.904894 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.905101 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.908389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.910938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " 
pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.911155 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.913185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.944432 master-2 kubenswrapper[4776]: I1011 10:55:15.944387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:15.953537 master-2 kubenswrapper[4776]: I1011 10:55:15.952915 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.992741 master-2 kubenswrapper[4776]: I1011 10:55:15.990771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:16.005962 master-2 kubenswrapper[4776]: I1011 10:55:16.005911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.005973 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.006035 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.006060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.047318 master-1 kubenswrapper[4771]: I1011 10:55:16.046195 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:55:16.047905 master-1 kubenswrapper[4771]: I1011 10:55:16.047837 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.051696 master-1 kubenswrapper[4771]: I1011 10:55:16.051615 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 11 10:55:16.052021 master-1 kubenswrapper[4771]: I1011 10:55:16.051957 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 11 10:55:16.061640 master-0 kubenswrapper[4790]: I1011 10:55:16.061572 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:16.064331 master-1 kubenswrapper[4771]: I1011 10:55:16.064263 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:55:16.081863 master-2 kubenswrapper[4776]: I1011 10:55:16.081096 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:16.084325 master-1 kubenswrapper[4771]: I1011 10:55:16.084234 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:16.085493 master-1 kubenswrapper[4771]: I1011 10:55:16.084867 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-central-agent" containerID="cri-o://1c3861fe4b88a03a7f5a37466aa7554573bbeb56e12a609a086b0d7cf9119e59" gracePeriod=30 Oct 11 10:55:16.085493 master-1 kubenswrapper[4771]: I1011 10:55:16.085043 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="proxy-httpd" containerID="cri-o://3b0917f1e0c562a330c2d467c28336993eadaec7d16c4d43d62cfb2b0ba25b4b" gracePeriod=30 Oct 11 10:55:16.085493 master-1 kubenswrapper[4771]: I1011 10:55:16.085093 4771 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="sg-core" containerID="cri-o://e1413b97fa6430a651152f91df8a84c4a32dbe4f3d81aabb8fb9fea0809e7a16" gracePeriod=30 Oct 11 10:55:16.085493 master-1 kubenswrapper[4771]: I1011 10:55:16.085126 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-notification-agent" containerID="cri-o://0581a657f8cf01879d33e71f4db4cc2df261f4f45ead619016173a151ac38bcc" gracePeriod=30 Oct 11 10:55:16.093943 master-1 kubenswrapper[4771]: I1011 10:55:16.093869 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 10:55:16.107069 master-2 kubenswrapper[4776]: I1011 10:55:16.106998 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107270 master-2 kubenswrapper[4776]: I1011 10:55:16.107100 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107270 master-2 kubenswrapper[4776]: I1011 10:55:16.107239 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107370 
master-2 kubenswrapper[4776]: I1011 10:55:16.107295 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.110938 master-2 kubenswrapper[4776]: I1011 10:55:16.110902 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.126725 master-2 kubenswrapper[4776]: I1011 10:55:16.112215 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.126725 master-2 kubenswrapper[4776]: I1011 10:55:16.113647 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.132460 master-1 kubenswrapper[4771]: I1011 10:55:16.131985 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"] Oct 11 10:55:16.134145 master-1 kubenswrapper[4771]: I1011 10:55:16.134105 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133778 4776 generic.go:334] "Generic (PLEG): container finished" podID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerID="664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" exitCode=0 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133816 4776 generic.go:334] "Generic (PLEG): container finished" podID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerID="5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" exitCode=0 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133991 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" containerID="cri-o://8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" gracePeriod=10 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134064 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f"} Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134091 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34"} Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134211 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-backup-0" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" containerID="cri-o://6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" gracePeriod=30 Oct 11 
10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134834 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-backup-0" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" containerID="cri-o://5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" gracePeriod=30 Oct 11 10:55:16.147928 master-2 kubenswrapper[4776]: I1011 10:55:16.139271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.173157 master-1 kubenswrapper[4771]: I1011 10:55:16.173063 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbk4\" (UniqueName: \"kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.173516 master-1 kubenswrapper[4771]: I1011 10:55:16.173208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.173516 master-1 kubenswrapper[4771]: I1011 10:55:16.173274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " 
pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.173516 master-1 kubenswrapper[4771]: I1011 10:55:16.173323 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.206286 master-0 kubenswrapper[4790]: I1011 10:55:16.206214 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:16.208340 master-0 kubenswrapper[4790]: I1011 10:55:16.207997 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.216097 master-0 kubenswrapper[4790]: I1011 10:55:16.211886 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 11 10:55:16.216097 master-0 kubenswrapper[4790]: I1011 10:55:16.214222 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 11 10:55:16.220277 master-0 kubenswrapper[4790]: I1011 10:55:16.220194 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:16.237069 master-1 kubenswrapper[4771]: I1011 10:55:16.222222 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"] Oct 11 10:55:16.238608 master-0 kubenswrapper[4790]: I1011 10:55:16.238490 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.238902 master-0 
kubenswrapper[4790]: I1011 10:55:16.238650 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.240218 master-0 kubenswrapper[4790]: I1011 10:55:16.239997 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.240218 master-0 kubenswrapper[4790]: I1011 10:55:16.240108 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb8r8\" (UniqueName: \"kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.275402 master-1 kubenswrapper[4771]: I1011 10:55:16.275315 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " 
pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275544 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275589 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275645 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zxc\" (UniqueName: \"kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc\") pod 
\"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbk4\" (UniqueName: \"kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.275800 master-1 kubenswrapper[4771]: I1011 10:55:16.275714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.276052 master-1 kubenswrapper[4771]: I1011 10:55:16.275906 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.281207 master-1 kubenswrapper[4771]: I1011 10:55:16.281156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.282097 master-1 kubenswrapper[4771]: I1011 10:55:16.282063 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.287453 master-1 kubenswrapper[4771]: I1011 10:55:16.287311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.296772 master-1 kubenswrapper[4771]: I1011 10:55:16.296712 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbk4\" (UniqueName: \"kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4\") pod \"heat-cfnapi-b8cb664c5-5zrqf\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.328697 master-2 kubenswrapper[4776]: I1011 10:55:16.328233 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.341883 master-0 kubenswrapper[4790]: I1011 10:55:16.341789 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb8r8\" (UniqueName: \"kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.342112 master-0 kubenswrapper[4790]: I1011 10:55:16.341898 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.342112 master-0 kubenswrapper[4790]: I1011 10:55:16.341964 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.342112 master-0 kubenswrapper[4790]: I1011 10:55:16.342075 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.347299 master-0 kubenswrapper[4790]: I1011 10:55:16.347210 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: 
\"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.348586 master-0 kubenswrapper[4790]: I1011 10:55:16.348528 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.348670 master-0 kubenswrapper[4790]: I1011 10:55:16.348593 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.366856 master-0 kubenswrapper[4790]: I1011 10:55:16.366732 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb8r8\" (UniqueName: \"kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8\") pod \"heat-api-5f5c98dcd5-c8xhk\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.372420 master-1 kubenswrapper[4771]: I1011 10:55:16.372181 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:16.379019 master-1 kubenswrapper[4771]: I1011 10:55:16.378941 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.379136 master-1 kubenswrapper[4771]: I1011 10:55:16.379032 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.379136 master-1 kubenswrapper[4771]: I1011 10:55:16.379077 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.379210 master-1 kubenswrapper[4771]: I1011 10:55:16.379156 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zxc\" (UniqueName: \"kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.379342 master-1 kubenswrapper[4771]: I1011 10:55:16.379220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: 
\"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.379342 master-1 kubenswrapper[4771]: I1011 10:55:16.379277 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.381006 master-1 kubenswrapper[4771]: I1011 10:55:16.380543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.381006 master-1 kubenswrapper[4771]: I1011 10:55:16.380858 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.382037 master-1 kubenswrapper[4771]: I1011 10:55:16.381959 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.382261 master-1 kubenswrapper[4771]: I1011 10:55:16.382163 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " 
pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.382887 master-1 kubenswrapper[4771]: I1011 10:55:16.382840 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.415079 master-1 kubenswrapper[4771]: I1011 10:55:16.404910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zxc\" (UniqueName: \"kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc\") pod \"dnsmasq-dns-768f954cfc-gvj8j\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") " pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.459602 master-1 kubenswrapper[4771]: I1011 10:55:16.459320 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:16.547684 master-0 kubenswrapper[4790]: I1011 10:55:16.544988 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:16.670506 master-1 kubenswrapper[4771]: I1011 10:55:16.670396 4771 generic.go:334] "Generic (PLEG): container finished" podID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerID="3b0917f1e0c562a330c2d467c28336993eadaec7d16c4d43d62cfb2b0ba25b4b" exitCode=0 Oct 11 10:55:16.670506 master-1 kubenswrapper[4771]: I1011 10:55:16.670437 4771 generic.go:334] "Generic (PLEG): container finished" podID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerID="e1413b97fa6430a651152f91df8a84c4a32dbe4f3d81aabb8fb9fea0809e7a16" exitCode=2 Oct 11 10:55:16.670506 master-1 kubenswrapper[4771]: I1011 10:55:16.670471 4771 generic.go:334] "Generic (PLEG): container finished" podID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerID="1c3861fe4b88a03a7f5a37466aa7554573bbeb56e12a609a086b0d7cf9119e59" exitCode=0 Oct 11 10:55:16.670506 master-1 kubenswrapper[4771]: I1011 10:55:16.670499 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerDied","Data":"3b0917f1e0c562a330c2d467c28336993eadaec7d16c4d43d62cfb2b0ba25b4b"} Oct 11 10:55:16.671240 master-1 kubenswrapper[4771]: I1011 10:55:16.670531 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerDied","Data":"e1413b97fa6430a651152f91df8a84c4a32dbe4f3d81aabb8fb9fea0809e7a16"} Oct 11 10:55:16.671240 master-1 kubenswrapper[4771]: I1011 10:55:16.670543 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerDied","Data":"1c3861fe4b88a03a7f5a37466aa7554573bbeb56e12a609a086b0d7cf9119e59"} Oct 11 10:55:16.859398 master-2 kubenswrapper[4776]: I1011 10:55:16.859291 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:16.875115 master-2 
kubenswrapper[4776]: I1011 10:55:16.874981 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.881897 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882092 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882093 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Oct 11 10:55:16.901372 master-2 kubenswrapper[4776]: I1011 10:55:16.899378 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.030911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031031 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031060 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031284 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.105771 master-2 kubenswrapper[4776]: I1011 10:55:17.105718 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.132767 master-2 kubenswrapper[4776]: I1011 10:55:17.132701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132841 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: 
\"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132865 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.134271 master-2 kubenswrapper[4776]: I1011 10:55:17.133961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " 
pod="openstack/ironic-conductor-0" Oct 11 10:55:17.135294 master-2 kubenswrapper[4776]: I1011 10:55:17.135268 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:55:17.135371 master-2 kubenswrapper[4776]: I1011 10:55:17.135303 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ab2d09fe8f862871592cfbd594d467b006a80d82a58168e1cd7a1c526a517195/globalmount\"" pod="openstack/ironic-conductor-0" Oct 11 10:55:17.136372 master-2 kubenswrapper[4776]: I1011 10:55:17.136286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.136882 master-2 kubenswrapper[4776]: I1011 10:55:17.136864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.137157 master-2 kubenswrapper[4776]: I1011 10:55:17.137097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.138363 master-2 kubenswrapper[4776]: I1011 10:55:17.138284 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.151806 master-2 kubenswrapper[4776]: I1011 10:55:17.151704 4776 generic.go:334] "Generic (PLEG): container finished" podID="64da8a05-f383-4643-b08d-639963f8bdd5" containerID="5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" exitCode=0 Oct 11 10:55:17.152666 master-2 kubenswrapper[4776]: I1011 10:55:17.151805 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9"} Oct 11 10:55:17.155546 master-2 kubenswrapper[4776]: I1011 10:55:17.155488 4776 generic.go:334] "Generic (PLEG): container finished" podID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerID="8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" exitCode=0 Oct 11 10:55:17.155705 master-2 kubenswrapper[4776]: I1011 10:55:17.155636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807"} Oct 11 10:55:17.155997 master-2 kubenswrapper[4776]: I1011 10:55:17.155958 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.164580 master-2 kubenswrapper[4776]: I1011 10:55:17.164535 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a"} Oct 11 10:55:17.164948 master-2 kubenswrapper[4776]: I1011 10:55:17.164599 4776 scope.go:117] "RemoveContainer" containerID="664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" Oct 11 10:55:17.164948 master-2 kubenswrapper[4776]: I1011 10:55:17.164811 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.167383 master-2 kubenswrapper[4776]: I1011 10:55:17.166968 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.234389 master-2 kubenswrapper[4776]: I1011 10:55:17.234332 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234821 master-2 kubenswrapper[4776]: I1011 10:55:17.234783 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234890 master-2 kubenswrapper[4776]: I1011 10:55:17.234867 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod 
\"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234956 master-2 kubenswrapper[4776]: I1011 10:55:17.234934 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235004 master-2 kubenswrapper[4776]: I1011 10:55:17.234984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235041 master-2 kubenswrapper[4776]: I1011 10:55:17.235014 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235457 master-2 kubenswrapper[4776]: I1011 10:55:17.235395 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:17.235964 master-2 kubenswrapper[4776]: I1011 10:55:17.235773 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.239401 master-2 kubenswrapper[4776]: I1011 10:55:17.239112 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.239536 master-2 kubenswrapper[4776]: I1011 10:55:17.239495 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd" (OuterVolumeSpecName: "kube-api-access-jlmpd") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "kube-api-access-jlmpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:17.241858 master-2 kubenswrapper[4776]: I1011 10:55:17.241820 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts" (OuterVolumeSpecName: "scripts") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.292915 master-2 kubenswrapper[4776]: I1011 10:55:17.292847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.326075 master-2 kubenswrapper[4776]: I1011 10:55:17.325365 4776 scope.go:117] "RemoveContainer" containerID="5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" Oct 11 10:55:17.337481 master-2 kubenswrapper[4776]: I1011 10:55:17.337431 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337481 master-2 kubenswrapper[4776]: I1011 10:55:17.337482 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337868 master-2 kubenswrapper[4776]: I1011 10:55:17.337499 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337868 master-2 kubenswrapper[4776]: I1011 10:55:17.337513 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.356848 master-2 kubenswrapper[4776]: I1011 10:55:17.356206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:17.360171 master-2 kubenswrapper[4776]: I1011 10:55:17.359349 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data" (OuterVolumeSpecName: "config-data") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.374977 master-2 kubenswrapper[4776]: I1011 10:55:17.374344 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:17.447414 master-2 kubenswrapper[4776]: I1011 10:55:17.445841 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.478882 master-2 kubenswrapper[4776]: I1011 10:55:17.478767 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:17.557528 master-2 kubenswrapper[4776]: I1011 10:55:17.557462 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:17.574333 master-2 kubenswrapper[4776]: I1011 10:55:17.573802 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.595076 master-2 kubenswrapper[4776]: I1011 10:55:17.594988 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.609564 master-2 kubenswrapper[4776]: I1011 10:55:17.609386 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.610078 master-2 kubenswrapper[4776]: E1011 10:55:17.610052 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: I1011 10:55:17.610079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: E1011 10:55:17.610105 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: I1011 10:55:17.610115 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: E1011 10:55:17.610150 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="init" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: I1011 10:55:17.610159 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" 
containerName="init" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: E1011 10:55:17.610178 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: I1011 10:55:17.610187 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.610464 master-2 kubenswrapper[4776]: I1011 10:55:17.610442 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610634 master-2 kubenswrapper[4776]: I1011 10:55:17.610473 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610634 master-2 kubenswrapper[4776]: I1011 10:55:17.610491 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.613340 master-2 kubenswrapper[4776]: I1011 10:55:17.613307 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.620582 master-2 kubenswrapper[4776]: I1011 10:55:17.620533 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scheduler-config-data" Oct 11 10:55:17.635101 master-2 kubenswrapper[4776]: I1011 10:55:17.633377 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.653334 master-2 kubenswrapper[4776]: I1011 10:55:17.653194 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653334 master-2 kubenswrapper[4776]: I1011 10:55:17.653265 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653410 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653442 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653470 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653584 master-2 kubenswrapper[4776]: I1011 10:55:17.653504 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.660154 master-2 kubenswrapper[4776]: I1011 10:55:17.659783 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl" (OuterVolumeSpecName: "kube-api-access-k4jjl") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "kube-api-access-k4jjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:17.681851 master-1 kubenswrapper[4771]: I1011 10:55:17.681660 4771 generic.go:334] "Generic (PLEG): container finished" podID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerID="0581a657f8cf01879d33e71f4db4cc2df261f4f45ead619016173a151ac38bcc" exitCode=0 Oct 11 10:55:17.681851 master-1 kubenswrapper[4771]: I1011 10:55:17.681724 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerDied","Data":"0581a657f8cf01879d33e71f4db4cc2df261f4f45ead619016173a151ac38bcc"} Oct 11 10:55:17.734916 master-2 kubenswrapper[4776]: I1011 10:55:17.734832 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.740604 master-2 kubenswrapper[4776]: I1011 10:55:17.740561 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config" (OuterVolumeSpecName: "config") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.741240 master-2 kubenswrapper[4776]: I1011 10:55:17.741208 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756256 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756358 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756516 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod 
\"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756613 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756722 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756739 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756754 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756770 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.759128 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: 
"941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859426 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859481 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859574 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod 
\"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859759 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.861059 master-2 kubenswrapper[4776]: I1011 10:55:17.861000 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.867833 master-2 kubenswrapper[4776]: I1011 10:55:17.867700 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.868554 master-2 kubenswrapper[4776]: I1011 10:55:17.868064 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.875328 master-2 kubenswrapper[4776]: I1011 
10:55:17.875293 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.880258 master-2 kubenswrapper[4776]: I1011 10:55:17.880191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.903487 master-2 kubenswrapper[4776]: I1011 10:55:17.903372 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.956530 master-2 kubenswrapper[4776]: I1011 10:55:17.956482 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:18.006597 master-2 kubenswrapper[4776]: I1011 10:55:18.006517 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:18.068141 master-2 kubenswrapper[4776]: I1011 10:55:18.068090 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.075733 master-2 kubenswrapper[4776]: I1011 10:55:18.073156 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" path="/var/lib/kubelet/pods/6e1008bd-5444-4490-8e34-8a7843bf5c45/volumes" Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.182800 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32"} Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.182909 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.183081 4776 scope.go:117] "RemoveContainer" containerID="8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" Oct 11 10:55:18.191102 master-2 kubenswrapper[4776]: I1011 10:55:18.190941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"6e1bfd14f3480ae7a609ce5fbee7ed185e3ebb89b3f68759b4f189fe0bdcf590"} Oct 11 10:55:18.191102 master-2 kubenswrapper[4776]: I1011 10:55:18.190997 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"d7f0d7f8cd82c9acd61280069c44b22a1e6064adff1007bf7bcc19c4493bdd20"} Oct 11 10:55:18.192218 master-2 kubenswrapper[4776]: I1011 10:55:18.191783 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:18.192440 master-2 kubenswrapper[4776]: I1011 10:55:18.192316 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:18.198039 master-2 kubenswrapper[4776]: I1011 10:55:18.197981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969"} Oct 11 10:55:18.199611 master-2 kubenswrapper[4776]: I1011 10:55:18.199099 4776 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerID="30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" exitCode=0 Oct 11 10:55:18.204685 master-2 kubenswrapper[4776]: I1011 10:55:18.204536 4776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerStarted","Data":"d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c"} Oct 11 10:55:18.204685 master-2 kubenswrapper[4776]: I1011 10:55:18.204628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerStarted","Data":"85ac756e7a970e46ccdb81506b9b8549165f9c0b853da21e24277ff1af233582"} Oct 11 10:55:18.204962 master-2 kubenswrapper[4776]: I1011 10:55:18.204921 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:18.206129 master-2 kubenswrapper[4776]: I1011 10:55:18.206060 4776 scope.go:117] "RemoveContainer" containerID="164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c" Oct 11 10:55:18.525432 master-2 kubenswrapper[4776]: I1011 10:55:18.525363 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:18.549462 master-2 kubenswrapper[4776]: I1011 10:55:18.549390 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:18.681002 master-2 kubenswrapper[4776]: I1011 10:55:18.680907 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.681537 master-2 kubenswrapper[4776]: I1011 10:55:18.681515 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.681976 master-2 kubenswrapper[4776]: I1011 10:55:18.681953 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.682233 master-2 kubenswrapper[4776]: I1011 10:55:18.682215 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.682975 master-2 kubenswrapper[4776]: I1011 10:55:18.682955 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683189 master-2 kubenswrapper[4776]: I1011 10:55:18.683168 4776 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683293 master-2 kubenswrapper[4776]: I1011 10:55:18.683276 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683390 master-2 kubenswrapper[4776]: I1011 10:55:18.683375 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.684402 master-2 kubenswrapper[4776]: I1011 10:55:18.683283 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:18.687534 master-2 kubenswrapper[4776]: I1011 10:55:18.685075 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs" (OuterVolumeSpecName: "logs") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:18.688345 master-2 kubenswrapper[4776]: I1011 10:55:18.688061 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.688345 master-2 kubenswrapper[4776]: I1011 10:55:18.688052 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-86bdd47775-gpz8z" podStartSLOduration=3.688033387 podStartE2EDuration="3.688033387s" podCreationTimestamp="2025-10-11 10:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:18.500773225 +0000 UTC m=+1753.285199934" watchObservedRunningTime="2025-10-11 10:55:18.688033387 +0000 UTC m=+1753.472460096" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688844 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts" (OuterVolumeSpecName: "scripts") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688090 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688874 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc" (OuterVolumeSpecName: "kube-api-access-jbzvc") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). 
InnerVolumeSpecName "kube-api-access-jbzvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:18.707406 master-2 kubenswrapper[4776]: I1011 10:55:18.707361 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574" (OuterVolumeSpecName: "glance") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:55:18.710618 master-2 kubenswrapper[4776]: W1011 10:55:18.709948 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaeb2b93_f2cd_4a03_961c_b9127f72a9d0.slice/crio-1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e WatchSource:0}: Error finding container 1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e: Status 404 returned error can't find the container with id 1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e Oct 11 10:55:18.710618 master-2 kubenswrapper[4776]: I1011 10:55:18.710387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:18.717784 master-2 kubenswrapper[4776]: I1011 10:55:18.717494 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.734056 master-2 kubenswrapper[4776]: I1011 10:55:18.730809 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data" (OuterVolumeSpecName: "config-data") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.756598 master-2 kubenswrapper[4776]: I1011 10:55:18.741204 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.792113 master-2 kubenswrapper[4776]: I1011 10:55:18.792025 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793121 master-2 kubenswrapper[4776]: I1011 10:55:18.793080 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" " Oct 11 10:55:18.793121 master-2 kubenswrapper[4776]: I1011 10:55:18.793114 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793127 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793139 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793150 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.841380 master-2 kubenswrapper[4776]: I1011 10:55:18.841289 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" podStartSLOduration=3.841265117 podStartE2EDuration="3.841265117s" podCreationTimestamp="2025-10-11 10:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:18.766099901 +0000 UTC m=+1753.550526610" watchObservedRunningTime="2025-10-11 10:55:18.841265117 +0000 UTC m=+1753.625691826" Oct 11 10:55:18.852547 master-2 kubenswrapper[4776]: I1011 10:55:18.852473 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:18.855858 master-2 kubenswrapper[4776]: I1011 10:55:18.855827 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 11 10:55:18.856171 master-2 kubenswrapper[4776]: I1011 10:55:18.856153 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c" (UniqueName: "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574") on node "master-2" Oct 11 10:55:18.902720 master-2 kubenswrapper[4776]: I1011 10:55:18.902655 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.938580 master-2 kubenswrapper[4776]: I1011 10:55:18.938532 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:19.018448 master-2 kubenswrapper[4776]: I1011 10:55:19.017667 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.248030 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"90e5c2646fea77ee3381b10c994dfe8b9170fa307052809ee3bd672c8f8c09e5"} Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.270952 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"020cb51e8f192e46e701d1c522ecf5cc9d035525d4d7b945c86775cc56da8867"} Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.271007 4776 scope.go:117] "RemoveContainer" containerID="30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.271153 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.309009 master-2 kubenswrapper[4776]: I1011 10:55:19.293719 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e"} Oct 11 10:55:19.524877 master-2 kubenswrapper[4776]: I1011 10:55:19.524754 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.612291 master-2 kubenswrapper[4776]: I1011 10:55:19.612220 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.692937 master-2 kubenswrapper[4776]: I1011 10:55:19.692875 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.693229 master-2 kubenswrapper[4776]: E1011 10:55:19.693210 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 10:55:19.693229 master-2 kubenswrapper[4776]: I1011 10:55:19.693227 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 10:55:19.693339 master-2 kubenswrapper[4776]: E1011 10:55:19.693256 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.693339 master-2 kubenswrapper[4776]: I1011 10:55:19.693264 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.693423 master-2 kubenswrapper[4776]: I1011 10:55:19.693406 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 
10:55:19.693465 master-2 kubenswrapper[4776]: I1011 10:55:19.693429 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.694400 master-2 kubenswrapper[4776]: I1011 10:55:19.694369 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.697199 master-2 kubenswrapper[4776]: I1011 10:55:19.697150 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:55:19.697625 master-2 kubenswrapper[4776]: I1011 10:55:19.697576 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:55:19.720278 master-2 kubenswrapper[4776]: I1011 10:55:19.720221 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825075 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825130 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825159 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825206 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825232 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825308 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 
10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825337 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927550 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927693 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927716 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: 
\"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927798 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931283 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931542 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod 
\"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.932079 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.935245 master-2 kubenswrapper[4776]: I1011 10:55:19.935201 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.936914 master-2 kubenswrapper[4776]: I1011 10:55:19.936891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.029485 master-2 kubenswrapper[4776]: I1011 10:55:20.029428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.032412 master-2 kubenswrapper[4776]: I1011 10:55:20.031860 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:55:20.032412 master-2 kubenswrapper[4776]: I1011 10:55:20.031889 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.072650 master-2 kubenswrapper[4776]: I1011 10:55:20.072585 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" path="/var/lib/kubelet/pods/941fb918-e4b8-4ef7-9ad1-9af907c5593a/volumes" Oct 11 10:55:20.073527 master-2 kubenswrapper[4776]: I1011 10:55:20.073444 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" path="/var/lib/kubelet/pods/d0cc3965-fd34-4442-b408-a5ae441443e4/volumes" Oct 11 10:55:20.094640 master-2 kubenswrapper[4776]: I1011 10:55:20.094462 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.307631 master-2 
kubenswrapper[4776]: I1011 10:55:20.307540 4776 generic.go:334] "Generic (PLEG): container finished" podID="64da8a05-f383-4643-b08d-639963f8bdd5" containerID="6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" exitCode=0 Oct 11 10:55:20.307631 master-2 kubenswrapper[4776]: I1011 10:55:20.307632 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204"} Oct 11 10:55:20.310610 master-2 kubenswrapper[4776]: I1011 10:55:20.310391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"493a1f0009ac7ed5a732bb40fe0fd5aef3d23c47236e8ce72a5234cf88da9d7d"} Oct 11 10:55:21.312527 master-2 kubenswrapper[4776]: I1011 10:55:21.312426 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:21.512960 master-2 kubenswrapper[4776]: I1011 10:55:21.510518 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:22.758342 master-1 kubenswrapper[4771]: I1011 10:55:22.758276 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-vpkvr" event={"ID":"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b","Type":"ContainerStarted","Data":"af9ff314458369d3f8145591e9f896c9e3bb8b7fa9ff1992f73b7aee94d96f60"} Oct 11 10:55:22.988217 master-1 kubenswrapper[4771]: I1011 10:55:22.988178 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:23.042135 master-1 kubenswrapper[4771]: I1011 10:55:23.042062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.042365 master-1 kubenswrapper[4771]: I1011 10:55:23.042225 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.042732 master-1 kubenswrapper[4771]: I1011 10:55:23.042642 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj45w\" (UniqueName: \"kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.042831 master-1 kubenswrapper[4771]: I1011 10:55:23.042750 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.042831 master-1 kubenswrapper[4771]: I1011 10:55:23.042778 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.042831 master-1 kubenswrapper[4771]: I1011 10:55:23.042807 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.043022 master-1 kubenswrapper[4771]: I1011 10:55:23.042853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd\") pod \"49ec9c51-e085-4cfa-8ce7-387a02f23731\" (UID: \"49ec9c51-e085-4cfa-8ce7-387a02f23731\") " Oct 11 10:55:23.044191 master-1 kubenswrapper[4771]: I1011 10:55:23.044109 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:23.045986 master-1 kubenswrapper[4771]: I1011 10:55:23.045925 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:23.047410 master-1 kubenswrapper[4771]: I1011 10:55:23.047381 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w" (OuterVolumeSpecName: "kube-api-access-rj45w") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "kube-api-access-rj45w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:23.047938 master-1 kubenswrapper[4771]: I1011 10:55:23.047910 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts" (OuterVolumeSpecName: "scripts") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:23.076733 master-1 kubenswrapper[4771]: I1011 10:55:23.076661 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:23.118322 master-1 kubenswrapper[4771]: I1011 10:55:23.118230 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:23.136234 master-1 kubenswrapper[4771]: I1011 10:55:23.136175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data" (OuterVolumeSpecName: "config-data") pod "49ec9c51-e085-4cfa-8ce7-387a02f23731" (UID: "49ec9c51-e085-4cfa-8ce7-387a02f23731"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:23.146143 master-1 kubenswrapper[4771]: I1011 10:55:23.146072 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147098 master-1 kubenswrapper[4771]: I1011 10:55:23.147069 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147098 master-1 kubenswrapper[4771]: I1011 10:55:23.147092 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj45w\" (UniqueName: \"kubernetes.io/projected/49ec9c51-e085-4cfa-8ce7-387a02f23731-kube-api-access-rj45w\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147213 master-1 kubenswrapper[4771]: I1011 10:55:23.147107 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147213 master-1 kubenswrapper[4771]: I1011 10:55:23.147118 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147213 master-1 kubenswrapper[4771]: I1011 10:55:23.147128 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ec9c51-e085-4cfa-8ce7-387a02f23731-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:23.147213 master-1 kubenswrapper[4771]: I1011 10:55:23.147137 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49ec9c51-e085-4cfa-8ce7-387a02f23731-log-httpd\") on node 
\"master-1\" DevicePath \"\"" Oct 11 10:55:23.380976 master-1 kubenswrapper[4771]: I1011 10:55:23.378590 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"] Oct 11 10:55:23.498739 master-1 kubenswrapper[4771]: I1011 10:55:23.498703 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:55:23.505951 master-1 kubenswrapper[4771]: W1011 10:55:23.505661 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e WatchSource:0}: Error finding container fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e: Status 404 returned error can't find the container with id fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e Oct 11 10:55:23.538405 master-1 kubenswrapper[4771]: I1011 10:55:23.538363 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:23.657599 master-1 kubenswrapper[4771]: W1011 10:55:23.657343 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod879970ca_6312_4aec_b8f4_a8a41a0e3797.slice/crio-3182b36cd0161342e4b24feff5f2f372022de500b18e279f9cd6a7f20bca373c WatchSource:0}: Error finding container 3182b36cd0161342e4b24feff5f2f372022de500b18e279f9cd6a7f20bca373c: Status 404 returned error can't find the container with id 3182b36cd0161342e4b24feff5f2f372022de500b18e279f9cd6a7f20bca373c Oct 11 10:55:23.679308 master-1 kubenswrapper[4771]: I1011 10:55:23.679253 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-656ddc8b67-kfkzr"] Oct 11 10:55:23.777660 master-1 kubenswrapper[4771]: I1011 10:55:23.777581 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" event={"ID":"7f2f3d22-d709-4602-bb25-2c17626b75f1","Type":"ContainerStarted","Data":"44d3f4f2f792c76343fbbdc36dc9dcbfc23552c6a12758c872ba3f43c891c162"} Oct 11 10:55:23.783082 master-1 kubenswrapper[4771]: I1011 10:55:23.782997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" event={"ID":"1fe7833d-9251-4545-ba68-f58c146188f1","Type":"ContainerStarted","Data":"fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e"} Oct 11 10:55:23.787668 master-1 kubenswrapper[4771]: I1011 10:55:23.786515 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-vpkvr" event={"ID":"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b","Type":"ContainerStarted","Data":"8ea70b801c6579249e824c5ec9cdefec815296a35486c33c1b92b000fae180f4"} Oct 11 10:55:23.787668 master-1 kubenswrapper[4771]: I1011 10:55:23.786568 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-vpkvr" event={"ID":"5c7a3a13-f5fd-4cb5-8830-d74a07e1b09b","Type":"ContainerStarted","Data":"d45639345dc639621f85b654973079ba0284184faa5f92ee6c18cf3c74c93a74"} Oct 11 10:55:23.787668 master-1 kubenswrapper[4771]: I1011 10:55:23.786710 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-748bbfcf89-vpkvr" Oct 11 10:55:23.789683 master-1 kubenswrapper[4771]: I1011 10:55:23.789652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerStarted","Data":"3182b36cd0161342e4b24feff5f2f372022de500b18e279f9cd6a7f20bca373c"} Oct 11 10:55:23.792390 master-1 kubenswrapper[4771]: I1011 10:55:23.792324 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerStarted","Data":"fff01313b302342cb30f2201b57bb76a5615b3d6076b484b8fb9b7d061e529af"} Oct 11 
10:55:23.796280 master-1 kubenswrapper[4771]: I1011 10:55:23.796103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49ec9c51-e085-4cfa-8ce7-387a02f23731","Type":"ContainerDied","Data":"ada9da27885a5797d5d7044aa85caf6f5ca08e4448276ab5e54d26349906f001"} Oct 11 10:55:23.796280 master-1 kubenswrapper[4771]: I1011 10:55:23.796168 4771 scope.go:117] "RemoveContainer" containerID="3b0917f1e0c562a330c2d467c28336993eadaec7d16c4d43d62cfb2b0ba25b4b" Oct 11 10:55:23.796280 master-1 kubenswrapper[4771]: I1011 10:55:23.796223 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:23.854909 master-1 kubenswrapper[4771]: I1011 10:55:23.854853 4771 scope.go:117] "RemoveContainer" containerID="e1413b97fa6430a651152f91df8a84c4a32dbe4f3d81aabb8fb9fea0809e7a16" Oct 11 10:55:23.875869 master-1 kubenswrapper[4771]: I1011 10:55:23.875826 4771 scope.go:117] "RemoveContainer" containerID="0581a657f8cf01879d33e71f4db4cc2df261f4f45ead619016173a151ac38bcc" Oct 11 10:55:23.904535 master-1 kubenswrapper[4771]: I1011 10:55:23.904410 4771 scope.go:117] "RemoveContainer" containerID="1c3861fe4b88a03a7f5a37466aa7554573bbeb56e12a609a086b0d7cf9119e59" Oct 11 10:55:23.913222 master-1 kubenswrapper[4771]: W1011 10:55:23.913170 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6af8eba_f8bf_47f6_8313_7a902aeb170f.slice/crio-4e0c21fb2f4105429136935982c5a615f430c2fe86d42023dbe3fb84ea97d400 WatchSource:0}: Error finding container 4e0c21fb2f4105429136935982c5a615f430c2fe86d42023dbe3fb84ea97d400: Status 404 returned error can't find the container with id 4e0c21fb2f4105429136935982c5a615f430c2fe86d42023dbe3fb84ea97d400 Oct 11 10:55:23.933171 master-2 kubenswrapper[4776]: I1011 10:55:23.933085 4776 scope.go:117] "RemoveContainer" containerID="c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" Oct 
11 10:55:23.970318 master-1 kubenswrapper[4771]: I1011 10:55:23.970220 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:23.984617 master-1 kubenswrapper[4771]: I1011 10:55:23.984189 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:23.993936 master-1 kubenswrapper[4771]: I1011 10:55:23.993863 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.004527 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: E1011 10:55:24.005008 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-notification-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005025 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-notification-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: E1011 10:55:24.005053 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-central-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005062 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-central-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: E1011 10:55:24.005087 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="sg-core" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005096 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="sg-core" Oct 11 10:55:24.012489 master-1 
kubenswrapper[4771]: E1011 10:55:24.005117 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="proxy-httpd" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005126 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="proxy-httpd" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005317 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-notification-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005343 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="ceilometer-central-agent" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005378 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="proxy-httpd" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005392 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" containerName="sg-core" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.005624 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-748bbfcf89-vpkvr" podStartSLOduration=14.005596828 podStartE2EDuration="14.005596828s" podCreationTimestamp="2025-10-11 10:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:23.992892298 +0000 UTC m=+1755.967118749" watchObservedRunningTime="2025-10-11 10:55:24.005596828 +0000 UTC m=+1755.979823269" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.007320 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.011160 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:55:24.012489 master-1 kubenswrapper[4771]: I1011 10:55:24.011574 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:55:24.036400 master-1 kubenswrapper[4771]: I1011 10:55:24.031231 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.065765 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.065869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.065914 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.065973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7f7t\" (UniqueName: 
\"kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.066022 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.066053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.069399 master-1 kubenswrapper[4771]: I1011 10:55:24.066077 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.170692 master-1 kubenswrapper[4771]: I1011 10:55:24.170401 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.171122 master-1 kubenswrapper[4771]: I1011 10:55:24.171072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.171295 master-1 kubenswrapper[4771]: I1011 10:55:24.171259 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.172371 master-1 kubenswrapper[4771]: I1011 10:55:24.172048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.172371 master-1 kubenswrapper[4771]: I1011 10:55:24.172164 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.172727 master-1 kubenswrapper[4771]: I1011 10:55:24.172665 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7f7t\" (UniqueName: \"kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.172969 master-1 kubenswrapper[4771]: I1011 10:55:24.172868 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.183277 master-1 kubenswrapper[4771]: I1011 10:55:24.172945 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.183277 master-1 kubenswrapper[4771]: I1011 10:55:24.173092 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.183277 master-1 kubenswrapper[4771]: I1011 10:55:24.181125 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.183277 master-1 kubenswrapper[4771]: I1011 10:55:24.183173 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.204168 master-1 kubenswrapper[4771]: I1011 10:55:24.204094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.205453 master-1 kubenswrapper[4771]: I1011 10:55:24.205391 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.214399 master-1 kubenswrapper[4771]: I1011 10:55:24.211952 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7f7t\" (UniqueName: \"kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t\") pod \"ceilometer-0\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " pod="openstack/ceilometer-0" Oct 11 10:55:24.216435 master-1 kubenswrapper[4771]: I1011 10:55:24.216400 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:24.373653 master-2 kubenswrapper[4776]: I1011 10:55:24.372476 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c1271fdd-4436-4935-b271-89ffa5394bc3","Type":"ContainerStarted","Data":"95f71350a89d3d6fe18419fa54fd58ddaec3ff7d48a3a9105b0b3dfed3802fe6"} Oct 11 10:55:24.404526 master-2 kubenswrapper[4776]: I1011 10:55:24.404294 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.208009944 podStartE2EDuration="13.404276829s" podCreationTimestamp="2025-10-11 10:55:11 +0000 UTC" firstStartedPulling="2025-10-11 10:55:12.84643 +0000 UTC m=+1747.630856709" lastFinishedPulling="2025-10-11 10:55:24.042696875 +0000 UTC m=+1758.827123594" observedRunningTime="2025-10-11 10:55:24.398538283 +0000 UTC m=+1759.182964992" watchObservedRunningTime="2025-10-11 10:55:24.404276829 +0000 UTC m=+1759.188703538" Oct 11 10:55:24.456957 master-1 kubenswrapper[4771]: I1011 10:55:24.456690 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ec9c51-e085-4cfa-8ce7-387a02f23731" path="/var/lib/kubelet/pods/49ec9c51-e085-4cfa-8ce7-387a02f23731/volumes" Oct 11 10:55:24.722081 master-1 kubenswrapper[4771]: I1011 10:55:24.722028 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 
10:55:24.799767 master-2 kubenswrapper[4776]: I1011 10:55:24.799517 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:24.824433 master-1 kubenswrapper[4771]: I1011 10:55:24.823036 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerStarted","Data":"cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80"} Oct 11 10:55:24.824433 master-1 kubenswrapper[4771]: I1011 10:55:24.824328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-0" Oct 11 10:55:24.827791 master-1 kubenswrapper[4771]: I1011 10:55:24.827742 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerStarted","Data":"5c043b242e5c8ff65d3ea42d92bba671ec7f6446a265531a8e4be33feddbe4fa"} Oct 11 10:55:24.829901 master-1 kubenswrapper[4771]: I1011 10:55:24.829859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerStarted","Data":"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474"} Oct 11 10:55:24.829985 master-1 kubenswrapper[4771]: I1011 10:55:24.829902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerStarted","Data":"a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183"} Oct 11 10:55:24.842696 master-1 kubenswrapper[4771]: I1011 10:55:24.842631 4771 generic.go:334] "Generic (PLEG): container finished" podID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerID="4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1" exitCode=0 Oct 11 10:55:24.843070 master-1 kubenswrapper[4771]: I1011 10:55:24.843042 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" event={"ID":"7f2f3d22-d709-4602-bb25-2c17626b75f1","Type":"ContainerDied","Data":"4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1"} Oct 11 10:55:24.847041 master-1 kubenswrapper[4771]: I1011 10:55:24.846985 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerStarted","Data":"4e0c21fb2f4105429136935982c5a615f430c2fe86d42023dbe3fb84ea97d400"} Oct 11 10:55:24.856439 master-1 kubenswrapper[4771]: I1011 10:55:24.856261 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-0" podStartSLOduration=3.838802839 podStartE2EDuration="20.856232825s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:05.692614701 +0000 UTC m=+1737.666841192" lastFinishedPulling="2025-10-11 10:55:22.710044727 +0000 UTC m=+1754.684271178" observedRunningTime="2025-10-11 10:55:24.847017967 +0000 UTC m=+1756.821244418" watchObservedRunningTime="2025-10-11 10:55:24.856232825 +0000 UTC m=+1756.830459266" Oct 11 10:55:25.058863 master-0 kubenswrapper[4790]: W1011 10:55:25.058781 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82f87bf9_c348_4149_a72c_99e49db4ec09.slice/crio-792d4dacd71ae32e22ff517532decce940e406c6a15096335b14e8b534a2992c WatchSource:0}: Error finding container 792d4dacd71ae32e22ff517532decce940e406c6a15096335b14e8b534a2992c: Status 404 returned error can't find the container with id 792d4dacd71ae32e22ff517532decce940e406c6a15096335b14e8b534a2992c Oct 11 10:55:25.061592 master-0 kubenswrapper[4790]: I1011 10:55:25.061529 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:25.193851 master-2 kubenswrapper[4776]: I1011 10:55:25.192774 4776 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:25.206805 master-0 kubenswrapper[4790]: I1011 10:55:25.205752 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:25.295480 master-0 kubenswrapper[4790]: I1011 10:55:25.295339 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" event={"ID":"82f87bf9-c348-4149-a72c-99e49db4ec09","Type":"ContainerStarted","Data":"792d4dacd71ae32e22ff517532decce940e406c6a15096335b14e8b534a2992c"} Oct 11 10:55:25.299741 master-0 kubenswrapper[4790]: I1011 10:55:25.298468 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerStarted","Data":"e192e2becd115690c1d668bc7d78fc93266390a2fb95e6340eaf7c60f1f198e9"} Oct 11 10:55:25.299741 master-0 kubenswrapper[4790]: I1011 10:55:25.298513 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerStarted","Data":"a9ec38cb1d58ff96f557dd1efeffa25f09991c28ad10ce26a6daa40b2d7694a0"} Oct 11 10:55:25.300360 master-0 kubenswrapper[4790]: I1011 10:55:25.300239 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f5c98dcd5-c8xhk" event={"ID":"c41b60ff-457d-4c45-8b56-4523c5c0097f","Type":"ContainerStarted","Data":"c3de8b5a223cd729b6c034e2228715eb85d26de982d08c60dc47229ac7d1b110"} Oct 11 10:55:25.330892 master-2 kubenswrapper[4776]: I1011 10:55:25.330299 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:25.339282 master-2 kubenswrapper[4776]: I1011 10:55:25.339134 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.373928 master-0 kubenswrapper[4790]: I1011 10:55:25.373243 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" podStartSLOduration=1.8724007299999998 podStartE2EDuration="21.373215811s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:05.185125106 +0000 UTC m=+981.739585398" lastFinishedPulling="2025-10-11 10:55:24.685940187 +0000 UTC m=+1001.240400479" observedRunningTime="2025-10-11 10:55:25.372858921 +0000 UTC m=+1001.927319223" watchObservedRunningTime="2025-10-11 10:55:25.373215811 +0000 UTC m=+1001.927676103" Oct 11 10:55:25.397928 master-2 kubenswrapper[4776]: I1011 10:55:25.391746 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:25.408720 master-1 kubenswrapper[4771]: I1011 10:55:25.408641 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:25.410218 master-1 kubenswrapper[4771]: I1011 10:55:25.410197 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.415102 master-1 kubenswrapper[4771]: I1011 10:55:25.415047 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 11 10:55:25.431523 master-0 kubenswrapper[4790]: I1011 10:55:25.431409 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:25.433201 master-0 kubenswrapper[4790]: I1011 10:55:25.433169 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.436265 master-1 kubenswrapper[4771]: I1011 10:55:25.434983 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:25.441772 master-0 kubenswrapper[4790]: I1011 10:55:25.437198 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 11 10:55:25.456803 master-0 kubenswrapper[4790]: I1011 10:55:25.451833 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:25.467853 master-2 kubenswrapper[4776]: I1011 10:55:25.467794 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.467883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.468094 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.468168 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.474540 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"2014802943eee19ae9bd36e7ec710859ff77c2726f2b76501cfd630a35b1a3c7"} Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.479342 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e"} Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.479389 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"531ec9f06ba8fda6003f0a31b9d895b34ee13acb13d8e6c68604b2bb0e9a0c1b"} Oct 11 10:55:25.507788 master-2 kubenswrapper[4776]: I1011 10:55:25.507286 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"9a5837d1cb2c6c6bd2ee66332c6614d5f0898097af5aee94ed2b3ff54ca6ee42"} Oct 11 10:55:25.536377 master-1 kubenswrapper[4771]: I1011 10:55:25.533408 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " 
pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.536377 master-1 kubenswrapper[4771]: I1011 10:55:25.533635 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.536377 master-1 kubenswrapper[4771]: I1011 10:55:25.533784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z885b\" (UniqueName: \"kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.536377 master-1 kubenswrapper[4771]: I1011 10:55:25.533847 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.564979 master-0 kubenswrapper[4790]: I1011 10:55:25.564902 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.564979 master-0 kubenswrapper[4790]: I1011 10:55:25.564968 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom\") pod 
\"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.565476 master-0 kubenswrapper[4790]: I1011 10:55:25.565065 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.565476 master-0 kubenswrapper[4790]: I1011 10:55:25.565102 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxk4p\" (UniqueName: \"kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.570669 master-2 kubenswrapper[4776]: I1011 10:55:25.570367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.570669 master-2 kubenswrapper[4776]: I1011 10:55:25.570448 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-scheduler-0" podStartSLOduration=8.570429523 podStartE2EDuration="8.570429523s" podCreationTimestamp="2025-10-11 10:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:25.506004728 +0000 UTC m=+1760.290431437" watchObservedRunningTime="2025-10-11 10:55:25.570429523 +0000 UTC m=+1760.354856232" Oct 11 10:55:25.574797 master-2 
kubenswrapper[4776]: I1011 10:55:25.570482 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.574797 master-2 kubenswrapper[4776]: I1011 10:55:25.574489 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.575266 master-2 kubenswrapper[4776]: I1011 10:55:25.575080 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.576868 master-2 kubenswrapper[4776]: I1011 10:55:25.575953 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.587757 master-2 kubenswrapper[4776]: I1011 10:55:25.587650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.599730 
master-2 kubenswrapper[4776]: I1011 10:55:25.599614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.601988 master-2 kubenswrapper[4776]: I1011 10:55:25.601955 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:25.605832 master-1 kubenswrapper[4771]: I1011 10:55:25.605735 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:25.622640 master-2 kubenswrapper[4776]: I1011 10:55:25.622600 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.636122 master-1 kubenswrapper[4771]: I1011 10:55:25.635991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.636122 master-1 kubenswrapper[4771]: I1011 10:55:25.636085 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.636122 master-1 kubenswrapper[4771]: I1011 10:55:25.636120 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.636638 master-1 kubenswrapper[4771]: I1011 10:55:25.636181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z885b\" (UniqueName: \"kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.645104 master-1 kubenswrapper[4771]: I1011 10:55:25.644034 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.653385 master-1 kubenswrapper[4771]: I1011 10:55:25.648798 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.660759 master-1 kubenswrapper[4771]: I1011 10:55:25.656630 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z885b\" (UniqueName: \"kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.665062 master-1 kubenswrapper[4771]: I1011 10:55:25.664996 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle\") pod \"heat-api-d647f9c47-x7xc2\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.667329 master-0 kubenswrapper[4790]: I1011 10:55:25.667237 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxk4p\" (UniqueName: \"kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.667562 master-0 kubenswrapper[4790]: I1011 10:55:25.667419 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.667562 master-0 kubenswrapper[4790]: I1011 10:55:25.667449 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.667562 master-0 kubenswrapper[4790]: I1011 10:55:25.667475 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.671906 master-0 kubenswrapper[4790]: I1011 10:55:25.671853 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.673302 master-0 kubenswrapper[4790]: I1011 10:55:25.673270 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.674166 master-0 kubenswrapper[4790]: I1011 10:55:25.674108 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676534 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676594 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676654 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676727 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676795 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676795 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676819 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676925 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676949 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676938 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676991 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676970 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677029 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677057 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677106 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677132 4776 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677151 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676996 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys" (OuterVolumeSpecName: "sys") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677225 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677996 4776 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678009 4776 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678020 4776 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678028 4776 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678036 4776 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678044 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678765 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678836 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678857 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run" (OuterVolumeSpecName: "run") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678877 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev" (OuterVolumeSpecName: "dev") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.682854 master-2 kubenswrapper[4776]: I1011 10:55:25.682789 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts" (OuterVolumeSpecName: "scripts") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.682970 master-2 kubenswrapper[4776]: I1011 10:55:25.682844 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.683058 master-2 kubenswrapper[4776]: I1011 10:55:25.683035 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d" (OuterVolumeSpecName: "kube-api-access-n864d") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "kube-api-access-n864d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:25.690977 master-0 kubenswrapper[4790]: I1011 10:55:25.689470 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxk4p\" (UniqueName: \"kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p\") pod \"heat-cfnapi-89f9b4488-8vvt9\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") " pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.715205 master-2 kubenswrapper[4776]: I1011 10:55:25.715148 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.731500 master-1 kubenswrapper[4771]: I1011 10:55:25.729459 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:25.745522 master-2 kubenswrapper[4776]: I1011 10:55:25.745472 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.767646 master-2 kubenswrapper[4776]: I1011 10:55:25.767525 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data" (OuterVolumeSpecName: "config-data") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.773056 master-0 kubenswrapper[4790]: I1011 10:55:25.771669 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:25.779494 master-2 kubenswrapper[4776]: I1011 10:55:25.779332 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.779494 master-2 kubenswrapper[4776]: I1011 10:55:25.779378 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780000 master-2 kubenswrapper[4776]: I1011 10:55:25.779923 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780020 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780033 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780042 4776 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780051 4776 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780059 4776 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780067 4776 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.871837 master-1 kubenswrapper[4771]: I1011 10:55:25.871721 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerStarted","Data":"33e1159e64df7103066e5f7850051b2adc3d09e823478d0dc1137ddef2aee326"} Oct 11 10:55:25.882039 master-1 kubenswrapper[4771]: I1011 10:55:25.881936 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerStarted","Data":"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c"} Oct 11 10:55:25.892750 master-1 kubenswrapper[4771]: I1011 10:55:25.892647 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" event={"ID":"7f2f3d22-d709-4602-bb25-2c17626b75f1","Type":"ContainerStarted","Data":"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"} Oct 11 10:55:25.897042 master-1 kubenswrapper[4771]: I1011 10:55:25.896470 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:25.917511 master-1 kubenswrapper[4771]: I1011 10:55:25.917417 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-b5802-default-external-api-0" podStartSLOduration=15.917388042 podStartE2EDuration="15.917388042s" podCreationTimestamp="2025-10-11 10:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:25.915550978 +0000 UTC m=+1757.889777429" watchObservedRunningTime="2025-10-11 10:55:25.917388042 +0000 UTC m=+1757.891614483" Oct 11 10:55:26.093556 master-2 kubenswrapper[4776]: I1011 10:55:26.093479 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:26.093803 master-2 kubenswrapper[4776]: I1011 10:55:26.093666 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:26.096507 master-0 kubenswrapper[4790]: I1011 10:55:26.096423 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-r9jnj"] Oct 11 10:55:26.098606 master-0 kubenswrapper[4790]: I1011 10:55:26.097903 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:26.117598 master-0 kubenswrapper[4790]: I1011 10:55:26.116553 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r9jnj"] Oct 11 10:55:26.195822 master-0 kubenswrapper[4790]: I1011 10:55:26.195673 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmpk\" (UniqueName: \"kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk\") pod \"nova-api-db-create-r9jnj\" (UID: \"5b7ae2a3-6802-400c-bbe7-5729052a2c1c\") " pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:26.206294 master-0 kubenswrapper[4790]: I1011 10:55:26.206230 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rgsq2"] Oct 11 10:55:26.210436 master-0 kubenswrapper[4790]: I1011 10:55:26.209812 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:26.226142 master-0 kubenswrapper[4790]: I1011 10:55:26.226039 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rgsq2"] Oct 11 10:55:26.229154 master-2 kubenswrapper[4776]: I1011 10:55:26.229109 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:26.253611 master-1 kubenswrapper[4771]: I1011 10:55:26.253484 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" podStartSLOduration=10.253453853 podStartE2EDuration="10.253453853s" podCreationTimestamp="2025-10-11 10:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:25.966784122 +0000 UTC m=+1757.941010563" watchObservedRunningTime="2025-10-11 10:55:26.253453853 +0000 UTC m=+1758.227680294" Oct 11 10:55:26.261004 master-1 
kubenswrapper[4771]: I1011 10:55:26.260943 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:26.281698 master-0 kubenswrapper[4790]: I1011 10:55:26.281631 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:26.297191 master-0 kubenswrapper[4790]: I1011 10:55:26.297116 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmpk\" (UniqueName: \"kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk\") pod \"nova-api-db-create-r9jnj\" (UID: \"5b7ae2a3-6802-400c-bbe7-5729052a2c1c\") " pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:26.297288 master-0 kubenswrapper[4790]: I1011 10:55:26.297241 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7jm\" (UniqueName: \"kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm\") pod \"nova-cell0-db-create-rgsq2\" (UID: \"7bb6dd47-2665-4a3f-8773-2a61034146a3\") " pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:26.317280 master-0 kubenswrapper[4790]: I1011 10:55:26.317217 4790 generic.go:334] "Generic (PLEG): container finished" podID="82f87bf9-c348-4149-a72c-99e49db4ec09" containerID="2b846ca497c34f678163324ba082b2f539146a51cf7170ec086171931a25f9ba" exitCode=0 Oct 11 10:55:26.317534 master-0 kubenswrapper[4790]: I1011 10:55:26.317473 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" event={"ID":"82f87bf9-c348-4149-a72c-99e49db4ec09","Type":"ContainerDied","Data":"2b846ca497c34f678163324ba082b2f539146a51cf7170ec086171931a25f9ba"} Oct 11 10:55:26.327306 master-0 kubenswrapper[4790]: I1011 10:55:26.319484 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmpk\" (UniqueName: 
\"kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk\") pod \"nova-api-db-create-r9jnj\" (UID: \"5b7ae2a3-6802-400c-bbe7-5729052a2c1c\") " pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:26.327306 master-0 kubenswrapper[4790]: I1011 10:55:26.321815 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerStarted","Data":"f0083e208a574369bee7ac5b619444ddb040559a7e6adc8699675b4802ed4ee1"} Oct 11 10:55:26.327306 master-0 kubenswrapper[4790]: I1011 10:55:26.321876 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerStarted","Data":"60be7f28501f150423ff01b582dcc8cfc5bab82c1c195c3a36e327493870a191"} Oct 11 10:55:26.413502 master-0 kubenswrapper[4790]: I1011 10:55:26.413086 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7jm\" (UniqueName: \"kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm\") pod \"nova-cell0-db-create-rgsq2\" (UID: \"7bb6dd47-2665-4a3f-8773-2a61034146a3\") " pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:26.418083 master-0 kubenswrapper[4790]: I1011 10:55:26.417989 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-1" podStartSLOduration=3.3422582370000002 podStartE2EDuration="22.417942767s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:05.582805523 +0000 UTC m=+982.137265815" lastFinishedPulling="2025-10-11 10:55:24.658490053 +0000 UTC m=+1001.212950345" observedRunningTime="2025-10-11 10:55:26.396274361 +0000 UTC m=+1002.950734673" watchObservedRunningTime="2025-10-11 10:55:26.417942767 +0000 UTC m=+1002.972422549" Oct 11 10:55:26.428914 master-0 kubenswrapper[4790]: I1011 10:55:26.428063 4790 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:26.459837 master-0 kubenswrapper[4790]: I1011 10:55:26.454410 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nw6gg"] Oct 11 10:55:26.472999 master-0 kubenswrapper[4790]: I1011 10:55:26.472932 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nw6gg"] Oct 11 10:55:26.473122 master-0 kubenswrapper[4790]: I1011 10:55:26.473091 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:26.489120 master-0 kubenswrapper[4790]: I1011 10:55:26.489077 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7jm\" (UniqueName: \"kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm\") pod \"nova-cell0-db-create-rgsq2\" (UID: \"7bb6dd47-2665-4a3f-8773-2a61034146a3\") " pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-854549b758-grfk2" event={"ID":"09078de9-6576-4afa-a94e-7b80617bba0f","Type":"ContainerStarted","Data":"1863112160e8eb8f7f94a0016bc5baba00482733893cceacc222fc426a64fc57"} Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548848 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-854549b758-grfk2" event={"ID":"09078de9-6576-4afa-a94e-7b80617bba0f","Type":"ContainerStarted","Data":"2fc0528238f103153bbac9e7a09546643ab74ad3439dc5514ba73dfd3aee059e"} Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548888 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:26.551422 master-2 kubenswrapper[4776]: I1011 10:55:26.551289 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63"} Oct 11 10:55:26.554194 master-2 kubenswrapper[4776]: I1011 10:55:26.553688 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.554309 master-2 kubenswrapper[4776]: I1011 10:55:26.554274 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1"} Oct 11 10:55:26.554309 master-2 kubenswrapper[4776]: I1011 10:55:26.554308 4776 scope.go:117] "RemoveContainer" containerID="5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" Oct 11 10:55:26.572842 master-0 kubenswrapper[4790]: I1011 10:55:26.572218 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:26.585399 master-2 kubenswrapper[4776]: I1011 10:55:26.585330 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-854549b758-grfk2" podStartSLOduration=1.58530827 podStartE2EDuration="1.58530827s" podCreationTimestamp="2025-10-11 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:26.576410409 +0000 UTC m=+1761.360837118" watchObservedRunningTime="2025-10-11 10:55:26.58530827 +0000 UTC m=+1761.369734979" Oct 11 10:55:26.598258 master-2 kubenswrapper[4776]: I1011 10:55:26.598196 4776 scope.go:117] "RemoveContainer" containerID="6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" Oct 11 10:55:26.626444 master-2 kubenswrapper[4776]: I1011 10:55:26.626051 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.632369 master-0 kubenswrapper[4790]: I1011 10:55:26.627136 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4cd\" (UniqueName: \"kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd\") pod \"nova-cell1-db-create-nw6gg\" (UID: \"8b0929b8-354d-4de6-9e2d-ac6e11324b10\") " pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:26.645235 master-2 kubenswrapper[4776]: I1011 10:55:26.645187 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.697936 master-2 kubenswrapper[4776]: I1011 10:55:26.697872 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.698307 master-2 kubenswrapper[4776]: E1011 10:55:26.698271 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 
10:55:26.698307 master-2 kubenswrapper[4776]: I1011 10:55:26.698298 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 10:55:26.698419 master-2 kubenswrapper[4776]: E1011 10:55:26.698315 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 10:55:26.698419 master-2 kubenswrapper[4776]: I1011 10:55:26.698325 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 10:55:26.698581 master-2 kubenswrapper[4776]: I1011 10:55:26.698542 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 10:55:26.698581 master-2 kubenswrapper[4776]: I1011 10:55:26.698567 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 10:55:26.699559 master-2 kubenswrapper[4776]: I1011 10:55:26.699522 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.699638 master-2 kubenswrapper[4776]: I1011 10:55:26.699614 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.704776 master-2 kubenswrapper[4776]: I1011 10:55:26.703373 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-backup-config-data" Oct 11 10:55:26.729803 master-0 kubenswrapper[4790]: I1011 10:55:26.729612 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4cd\" (UniqueName: \"kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd\") pod \"nova-cell1-db-create-nw6gg\" (UID: \"8b0929b8-354d-4de6-9e2d-ac6e11324b10\") " pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:26.763223 master-0 kubenswrapper[4790]: I1011 10:55:26.763177 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4cd\" (UniqueName: \"kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd\") pod \"nova-cell1-db-create-nw6gg\" (UID: \"8b0929b8-354d-4de6-9e2d-ac6e11324b10\") " pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:26.812684 master-0 kubenswrapper[4790]: I1011 10:55:26.810542 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:26.909295 master-0 kubenswrapper[4790]: I1011 10:55:26.908904 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:26.913343 master-1 kubenswrapper[4771]: I1011 10:55:26.913210 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerStarted","Data":"1d0d93b3fc6393dcdc851e8c3921d7c5d5a44cf9e99d331f9e66f61b3c48f59d"} Oct 11 10:55:26.916619 master-1 kubenswrapper[4771]: I1011 10:55:26.914977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d647f9c47-x7xc2" event={"ID":"e0657ee5-2e60-4a96-905e-814f46a72970","Type":"ContainerStarted","Data":"4d5653f9de27a6b9172b5e020cb9d04e796130d0328e34ab12c5dd9a66c1452e"} Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918234 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918317 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918514 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918655 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918718 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918746 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918784 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918817 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918886 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918912 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: 
I1011 10:55:26.918946 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.919193 master-2 kubenswrapper[4776]: I1011 10:55:26.919022 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020278 master-2 kubenswrapper[4776]: I1011 10:55:27.020224 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020509 master-2 kubenswrapper[4776]: I1011 10:55:27.020496 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020592 master-2 kubenswrapper[4776]: I1011 10:55:27.020580 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020732 master-2 kubenswrapper[4776]: I1011 10:55:27.020714 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020848 master-2 kubenswrapper[4776]: I1011 10:55:27.020458 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020893 master-2 kubenswrapper[4776]: I1011 10:55:27.020619 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020938 master-2 kubenswrapper[4776]: I1011 10:55:27.020836 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021113 master-2 kubenswrapper[4776]: I1011 10:55:27.020940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021161 master-2 kubenswrapper[4776]: I1011 10:55:27.020981 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021161 master-2 kubenswrapper[4776]: I1011 10:55:27.021096 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021238 master-2 kubenswrapper[4776]: I1011 10:55:27.021216 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021287 master-2 kubenswrapper[4776]: I1011 10:55:27.021255 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021317 master-2 kubenswrapper[4776]: I1011 10:55:27.021285 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021317 master-2 kubenswrapper[4776]: I1011 10:55:27.021306 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: 
\"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021377 master-2 kubenswrapper[4776]: I1011 10:55:27.021346 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021413 master-2 kubenswrapper[4776]: I1011 10:55:27.021398 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021481 master-2 kubenswrapper[4776]: I1011 10:55:27.021419 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021481 master-2 kubenswrapper[4776]: I1011 10:55:27.021473 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021568 master-2 kubenswrapper[4776]: I1011 10:55:27.021515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021601 
master-2 kubenswrapper[4776]: I1011 10:55:27.021569 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021694 master-2 kubenswrapper[4776]: I1011 10:55:27.021628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021756 master-2 kubenswrapper[4776]: I1011 10:55:27.021345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021804 master-2 kubenswrapper[4776]: I1011 10:55:27.021743 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021804 master-2 kubenswrapper[4776]: I1011 10:55:27.021761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021903 master-2 kubenswrapper[4776]: I1011 10:55:27.021885 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.024649 master-2 kubenswrapper[4776]: I1011 10:55:27.024585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.024977 master-2 kubenswrapper[4776]: I1011 10:55:27.024949 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.026034 master-2 kubenswrapper[4776]: I1011 10:55:27.026002 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.027527 master-2 kubenswrapper[4776]: I1011 10:55:27.027498 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.044394 master-0 kubenswrapper[4790]: I1011 10:55:27.044352 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc\") pod 
\"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.044394 master-0 kubenswrapper[4790]: I1011 10:55:27.044388 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb\") pod \"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.044517 master-0 kubenswrapper[4790]: I1011 10:55:27.044427 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb\") pod \"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.044517 master-0 kubenswrapper[4790]: I1011 10:55:27.044486 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tbm5\" (UniqueName: \"kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5\") pod \"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.044589 master-0 kubenswrapper[4790]: I1011 10:55:27.044546 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config\") pod \"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.044620 master-0 kubenswrapper[4790]: I1011 10:55:27.044597 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0\") pod \"82f87bf9-c348-4149-a72c-99e49db4ec09\" (UID: \"82f87bf9-c348-4149-a72c-99e49db4ec09\") " Oct 11 10:55:27.048610 master-0 kubenswrapper[4790]: I1011 
10:55:27.048538 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5" (OuterVolumeSpecName: "kube-api-access-2tbm5") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "kube-api-access-2tbm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:27.058449 master-2 kubenswrapper[4776]: I1011 10:55:27.058381 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.060660 master-0 kubenswrapper[4790]: I1011 10:55:27.060609 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r9jnj"] Oct 11 10:55:27.077766 master-0 kubenswrapper[4790]: I1011 10:55:27.077661 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:27.095834 master-0 kubenswrapper[4790]: I1011 10:55:27.092132 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:27.097737 master-0 kubenswrapper[4790]: I1011 10:55:27.096788 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:27.097737 master-0 kubenswrapper[4790]: I1011 10:55:27.097311 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:27.117395 master-0 kubenswrapper[4790]: I1011 10:55:27.117337 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:27.123204 master-1 kubenswrapper[4771]: I1011 10:55:27.123114 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:55:27.135237 master-0 kubenswrapper[4790]: I1011 10:55:27.135039 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config" (OuterVolumeSpecName: "config") pod "82f87bf9-c348-4149-a72c-99e49db4ec09" (UID: "82f87bf9-c348-4149-a72c-99e49db4ec09"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:27.137492 master-2 kubenswrapper[4776]: I1011 10:55:27.137419 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:27.138985 master-2 kubenswrapper[4776]: I1011 10:55:27.138951 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.142912 master-2 kubenswrapper[4776]: I1011 10:55:27.142871 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 11 10:55:27.143130 master-2 kubenswrapper[4776]: I1011 10:55:27.143107 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 11 10:55:27.143307 master-2 kubenswrapper[4776]: I1011 10:55:27.143282 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148178 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-config\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148419 4790 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148436 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-dns-svc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148452 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148466 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82f87bf9-c348-4149-a72c-99e49db4ec09-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.148886 master-0 kubenswrapper[4790]: I1011 10:55:27.148476 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tbm5\" (UniqueName: \"kubernetes.io/projected/82f87bf9-c348-4149-a72c-99e49db4ec09-kube-api-access-2tbm5\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:27.161925 master-2 kubenswrapper[4776]: I1011 10:55:27.161832 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:27.162447 master-1 kubenswrapper[4771]: I1011 10:55:27.162386 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-64fcdf7d54-8r455"] Oct 11 10:55:27.164458 master-1 kubenswrapper[4771]: I1011 10:55:27.164435 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.167718 master-1 kubenswrapper[4771]: I1011 10:55:27.167651 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Oct 11 10:55:27.168024 master-1 kubenswrapper[4771]: I1011 10:55:27.167996 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Oct 11 10:55:27.186345 master-1 kubenswrapper[4771]: I1011 10:55:27.186286 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-internal-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.186865 master-1 kubenswrapper[4771]: I1011 10:55:27.186688 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-combined-ca-bundle\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.186865 master-1 kubenswrapper[4771]: I1011 10:55:27.186846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.187061 master-1 kubenswrapper[4771]: I1011 10:55:27.186982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data-custom\") pod 
\"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.187061 master-1 kubenswrapper[4771]: I1011 10:55:27.187042 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-public-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.187142 master-1 kubenswrapper[4771]: I1011 10:55:27.187067 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwt9\" (UniqueName: \"kubernetes.io/projected/2dc94855-37f8-4fa8-a3e1-72808b37f966-kube-api-access-slwt9\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.187180 master-1 kubenswrapper[4771]: I1011 10:55:27.187123 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64fcdf7d54-8r455"] Oct 11 10:55:27.234165 master-0 kubenswrapper[4790]: W1011 10:55:27.221424 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb6dd47_2665_4a3f_8773_2a61034146a3.slice/crio-159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3 WatchSource:0}: Error finding container 159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3: Status 404 returned error can't find the container with id 159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3 Oct 11 10:55:27.234165 master-0 kubenswrapper[4790]: I1011 10:55:27.223770 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rgsq2"] Oct 11 10:55:27.289503 master-1 kubenswrapper[4771]: I1011 10:55:27.289430 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-combined-ca-bundle\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.289654 master-1 kubenswrapper[4771]: I1011 10:55:27.289580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.289699 master-1 kubenswrapper[4771]: I1011 10:55:27.289660 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data-custom\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.289743 master-1 kubenswrapper[4771]: I1011 10:55:27.289715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-public-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.289743 master-1 kubenswrapper[4771]: I1011 10:55:27.289737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwt9\" (UniqueName: \"kubernetes.io/projected/2dc94855-37f8-4fa8-a3e1-72808b37f966-kube-api-access-slwt9\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.289806 master-1 kubenswrapper[4771]: I1011 10:55:27.289797 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-internal-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.294052 master-1 kubenswrapper[4771]: I1011 10:55:27.293919 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-internal-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.308116 master-1 kubenswrapper[4771]: I1011 10:55:27.308067 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.320025 master-2 kubenswrapper[4776]: I1011 10:55:27.319975 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.322386 master-1 kubenswrapper[4771]: I1011 10:55:27.318005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-combined-ca-bundle\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.322386 master-1 kubenswrapper[4771]: I1011 10:55:27.319454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-public-tls-certs\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.323718 master-1 kubenswrapper[4771]: I1011 10:55:27.323634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwt9\" (UniqueName: \"kubernetes.io/projected/2dc94855-37f8-4fa8-a3e1-72808b37f966-kube-api-access-slwt9\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.325436 master-1 kubenswrapper[4771]: I1011 10:55:27.325065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc94855-37f8-4fa8-a3e1-72808b37f966-config-data-custom\") pod \"heat-cfnapi-64fcdf7d54-8r455\" (UID: \"2dc94855-37f8-4fa8-a3e1-72808b37f966\") " pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326582 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: 
\"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326756 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstld\" (UniqueName: \"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326790 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326902 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.341087 master-0 kubenswrapper[4790]: I1011 10:55:27.339038 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rgsq2" event={"ID":"7bb6dd47-2665-4a3f-8773-2a61034146a3","Type":"ContainerStarted","Data":"159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3"} Oct 11 10:55:27.341087 master-0 kubenswrapper[4790]: I1011 10:55:27.341005 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r9jnj" event={"ID":"5b7ae2a3-6802-400c-bbe7-5729052a2c1c","Type":"ContainerStarted","Data":"187ba140386fe1bdb57e2319039e5e08987bfe52015b52422eac9cff8a82d276"} Oct 11 10:55:27.341087 master-0 kubenswrapper[4790]: I1011 10:55:27.341031 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r9jnj" event={"ID":"5b7ae2a3-6802-400c-bbe7-5729052a2c1c","Type":"ContainerStarted","Data":"4074522a220cfcc13675ec89ed6e6addf00a02c34d3e5d2f86e64b3f545d3cad"} Oct 11 10:55:27.343578 master-0 kubenswrapper[4790]: I1011 10:55:27.343534 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" event={"ID":"90ca8fc6-bc53-461b-8384-ca8344e8abb1","Type":"ContainerStarted","Data":"2889f39d4725531c10441fa9236d4ba817fb73083c92ada0288c6f7dfdb54987"} Oct 11 10:55:27.351006 master-0 kubenswrapper[4790]: I1011 10:55:27.350965 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" Oct 11 10:55:27.352137 master-0 kubenswrapper[4790]: I1011 10:55:27.352072 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb968bd67-mk4sp" event={"ID":"82f87bf9-c348-4149-a72c-99e49db4ec09","Type":"ContainerDied","Data":"792d4dacd71ae32e22ff517532decce940e406c6a15096335b14e8b534a2992c"} Oct 11 10:55:27.352137 master-0 kubenswrapper[4790]: I1011 10:55:27.352115 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:27.352228 master-0 kubenswrapper[4790]: I1011 10:55:27.352150 4790 scope.go:117] "RemoveContainer" containerID="2b846ca497c34f678163324ba082b2f539146a51cf7170ec086171931a25f9ba" Oct 11 10:55:27.370888 master-0 kubenswrapper[4790]: I1011 10:55:27.370798 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nw6gg"] Oct 11 10:55:27.378232 master-0 kubenswrapper[4790]: I1011 10:55:27.377186 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-r9jnj" podStartSLOduration=1.377164633 podStartE2EDuration="1.377164633s" podCreationTimestamp="2025-10-11 10:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:27.366290145 +0000 UTC m=+1003.920750457" watchObservedRunningTime="2025-10-11 10:55:27.377164633 +0000 UTC m=+1003.931624925" Oct 11 10:55:27.388724 master-0 kubenswrapper[4790]: W1011 10:55:27.388488 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b0929b8_354d_4de6_9e2d_ac6e11324b10.slice/crio-99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f WatchSource:0}: Error finding container 99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f: Status 404 returned error can't find the container with 
id 99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f Oct 11 10:55:27.429489 master-2 kubenswrapper[4776]: I1011 10:55:27.429386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstld\" (UniqueName: \"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429489 master-2 kubenswrapper[4776]: I1011 10:55:27.429454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: 
\"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429588 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.435237 master-2 kubenswrapper[4776]: I1011 10:55:27.435189 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.436599 master-2 kubenswrapper[4776]: I1011 10:55:27.436554 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.438044 master-2 kubenswrapper[4776]: I1011 10:55:27.437996 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.438147 master-2 kubenswrapper[4776]: I1011 10:55:27.438109 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " 
pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.439444 master-2 kubenswrapper[4776]: I1011 10:55:27.439382 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.452447 master-0 kubenswrapper[4790]: I1011 10:55:27.452294 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:27.452455 master-2 kubenswrapper[4776]: I1011 10:55:27.452375 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstld\" (UniqueName: \"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.457399 master-0 kubenswrapper[4790]: I1011 10:55:27.457301 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb968bd67-mk4sp"] Oct 11 10:55:27.458523 master-2 kubenswrapper[4776]: I1011 10:55:27.458458 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.504626 master-1 kubenswrapper[4771]: I1011 10:55:27.503385 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:27.565696 master-2 kubenswrapper[4776]: I1011 10:55:27.564297 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e"} Oct 11 10:55:27.598652 master-2 kubenswrapper[4776]: I1011 10:55:27.598579 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=8.598488932 podStartE2EDuration="8.598488932s" podCreationTimestamp="2025-10-11 10:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:27.591762389 +0000 UTC m=+1762.376189098" watchObservedRunningTime="2025-10-11 10:55:27.598488932 +0000 UTC m=+1762.382915641" Oct 11 10:55:27.958720 master-2 kubenswrapper[4776]: I1011 10:55:27.957309 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:28.129975 master-2 kubenswrapper[4776]: I1011 10:55:28.129909 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" path="/var/lib/kubelet/pods/64da8a05-f383-4643-b08d-639963f8bdd5/volumes" Oct 11 10:55:28.130727 master-2 kubenswrapper[4776]: I1011 10:55:28.130644 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:28.130801 master-2 kubenswrapper[4776]: I1011 10:55:28.130732 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:28.233326 master-2 kubenswrapper[4776]: I1011 10:55:28.233251 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-scheduler-0" Oct 11 
10:55:28.316757 master-0 kubenswrapper[4790]: I1011 10:55:28.316681 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82f87bf9-c348-4149-a72c-99e49db4ec09" path="/var/lib/kubelet/pods/82f87bf9-c348-4149-a72c-99e49db4ec09/volumes" Oct 11 10:55:28.364448 master-0 kubenswrapper[4790]: I1011 10:55:28.364347 4790 generic.go:334] "Generic (PLEG): container finished" podID="7bb6dd47-2665-4a3f-8773-2a61034146a3" containerID="5b50296ba2efac22efde8aae60a1ee89c11a8ace1ff375049f5a9b2bda8f8fc0" exitCode=0 Oct 11 10:55:28.364448 master-0 kubenswrapper[4790]: I1011 10:55:28.364431 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rgsq2" event={"ID":"7bb6dd47-2665-4a3f-8773-2a61034146a3","Type":"ContainerDied","Data":"5b50296ba2efac22efde8aae60a1ee89c11a8ace1ff375049f5a9b2bda8f8fc0"} Oct 11 10:55:28.366386 master-0 kubenswrapper[4790]: I1011 10:55:28.366354 4790 generic.go:334] "Generic (PLEG): container finished" podID="8b0929b8-354d-4de6-9e2d-ac6e11324b10" containerID="ad9c864509e03d2c97f9d070b630e91a99b7a68797b54f4da7ce040e5a112381" exitCode=0 Oct 11 10:55:28.366455 master-0 kubenswrapper[4790]: I1011 10:55:28.366413 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nw6gg" event={"ID":"8b0929b8-354d-4de6-9e2d-ac6e11324b10","Type":"ContainerDied","Data":"ad9c864509e03d2c97f9d070b630e91a99b7a68797b54f4da7ce040e5a112381"} Oct 11 10:55:28.366455 master-0 kubenswrapper[4790]: I1011 10:55:28.366432 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nw6gg" event={"ID":"8b0929b8-354d-4de6-9e2d-ac6e11324b10","Type":"ContainerStarted","Data":"99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f"} Oct 11 10:55:28.368751 master-0 kubenswrapper[4790]: I1011 10:55:28.368393 4790 generic.go:334] "Generic (PLEG): container finished" podID="5b7ae2a3-6802-400c-bbe7-5729052a2c1c" 
containerID="187ba140386fe1bdb57e2319039e5e08987bfe52015b52422eac9cff8a82d276" exitCode=0 Oct 11 10:55:28.368751 master-0 kubenswrapper[4790]: I1011 10:55:28.368508 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r9jnj" event={"ID":"5b7ae2a3-6802-400c-bbe7-5729052a2c1c","Type":"ContainerDied","Data":"187ba140386fe1bdb57e2319039e5e08987bfe52015b52422eac9cff8a82d276"} Oct 11 10:55:28.576857 master-2 kubenswrapper[4776]: I1011 10:55:28.576818 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-db76b8b85-xpl75" event={"ID":"803272bc-1d03-4e1f-af8a-42b8d6e029d1","Type":"ContainerStarted","Data":"9d62c43b387d0c33d9adad19285acd9956ee4a3d6676c951e46cdaf5931fc6f0"} Oct 11 10:55:28.580360 master-2 kubenswrapper[4776]: I1011 10:55:28.580316 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"1149fabecee31777f28d014ce56af4789a79360d8446c24f16e86a789f09e7db"} Oct 11 10:55:28.580360 master-2 kubenswrapper[4776]: I1011 10:55:28.580355 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"448518ea4caccce3fab360b053c0052558f01a28be4a8199bdfabace42d3327b"} Oct 11 10:55:28.581940 master-2 kubenswrapper[4776]: I1011 10:55:28.581892 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e" exitCode=0 Oct 11 10:55:28.582260 master-2 kubenswrapper[4776]: I1011 10:55:28.582229 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e"} Oct 11 10:55:28.719822 master-0 
kubenswrapper[4790]: I1011 10:55:28.718854 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-d55d46749-qq6mv"] Oct 11 10:55:28.719822 master-0 kubenswrapper[4790]: E1011 10:55:28.719242 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f87bf9-c348-4149-a72c-99e49db4ec09" containerName="init" Oct 11 10:55:28.719822 master-0 kubenswrapper[4790]: I1011 10:55:28.719276 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f87bf9-c348-4149-a72c-99e49db4ec09" containerName="init" Oct 11 10:55:28.719822 master-0 kubenswrapper[4790]: I1011 10:55:28.719438 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f87bf9-c348-4149-a72c-99e49db4ec09" containerName="init" Oct 11 10:55:28.723726 master-0 kubenswrapper[4790]: I1011 10:55:28.720823 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.724848 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.726057 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.726160 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.726280 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.726464 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:55:28.728720 master-0 kubenswrapper[4790]: I1011 10:55:28.727990 4790 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-ironic-public-svc" Oct 11 10:55:28.743729 master-0 kubenswrapper[4790]: I1011 10:55:28.743454 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-d55d46749-qq6mv"] Oct 11 10:55:28.768709 master-1 kubenswrapper[4771]: I1011 10:55:28.768480 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-1" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-log" probeResult="failure" output="Get \"https://10.129.0.126:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 11 10:55:28.768709 master-1 kubenswrapper[4771]: I1011 10:55:28.768523 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-1" podUID="499f9e94-a738-484d-ae4b-0cc221750d1c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.129.0.126:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:55:28.889732 master-0 kubenswrapper[4790]: I1011 10:55:28.889540 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-logs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.889732 master-0 kubenswrapper[4790]: I1011 10:55:28.889649 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e4070c53-33f0-488e-80d1-f374f59c96cd-etc-podinfo\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.889732 master-0 kubenswrapper[4790]: I1011 10:55:28.889688 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-public-tls-certs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.890001 master-0 kubenswrapper[4790]: I1011 10:55:28.889756 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-internal-tls-certs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.890001 master-0 kubenswrapper[4790]: I1011 10:55:28.889793 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-scripts\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.893723 master-0 kubenswrapper[4790]: I1011 10:55:28.890111 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-custom\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.893723 master-0 kubenswrapper[4790]: I1011 10:55:28.890236 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.893723 master-0 kubenswrapper[4790]: I1011 10:55:28.890395 4790 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhc7\" (UniqueName: \"kubernetes.io/projected/e4070c53-33f0-488e-80d1-f374f59c96cd-kube-api-access-dwhc7\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.893723 master-0 kubenswrapper[4790]: I1011 10:55:28.890533 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-combined-ca-bundle\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.893723 master-0 kubenswrapper[4790]: I1011 10:55:28.890568 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-merged\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.967234 master-1 kubenswrapper[4771]: I1011 10:55:28.965621 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"] Oct 11 10:55:28.967962 master-1 kubenswrapper[4771]: I1011 10:55:28.967930 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:28.992492 master-1 kubenswrapper[4771]: I1011 10:55:28.980176 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"] Oct 11 10:55:28.994369 master-0 kubenswrapper[4790]: I1011 10:55:28.994133 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.994369 master-0 kubenswrapper[4790]: I1011 10:55:28.994238 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwhc7\" (UniqueName: \"kubernetes.io/projected/e4070c53-33f0-488e-80d1-f374f59c96cd-kube-api-access-dwhc7\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.994369 master-0 kubenswrapper[4790]: I1011 10:55:28.994282 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-combined-ca-bundle\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.994369 master-0 kubenswrapper[4790]: I1011 10:55:28.994302 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-merged\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.994369 master-0 kubenswrapper[4790]: I1011 10:55:28.994327 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-logs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996000 master-0 kubenswrapper[4790]: I1011 10:55:28.995969 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e4070c53-33f0-488e-80d1-f374f59c96cd-etc-podinfo\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996075 master-0 kubenswrapper[4790]: I1011 10:55:28.996005 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-public-tls-certs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996075 master-0 kubenswrapper[4790]: I1011 10:55:28.996037 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-internal-tls-certs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996075 master-0 kubenswrapper[4790]: I1011 10:55:28.996062 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-scripts\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996343 master-0 kubenswrapper[4790]: I1011 10:55:28.996311 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-merged\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:28.996430 master-0 kubenswrapper[4790]: I1011 10:55:28.996389 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4070c53-33f0-488e-80d1-f374f59c96cd-logs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.000481 master-0 kubenswrapper[4790]: I1011 10:55:28.997913 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-custom\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.000481 master-0 kubenswrapper[4790]: I1011 10:55:28.999438 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-combined-ca-bundle\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.003817 master-0 kubenswrapper[4790]: I1011 10:55:29.003123 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e4070c53-33f0-488e-80d1-f374f59c96cd-etc-podinfo\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.010757 master-0 kubenswrapper[4790]: I1011 10:55:29.005806 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-internal-tls-certs\") pod 
\"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.010757 master-0 kubenswrapper[4790]: I1011 10:55:29.007236 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data-custom\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.011668 master-0 kubenswrapper[4790]: I1011 10:55:29.011613 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-public-tls-certs\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.012135 master-0 kubenswrapper[4790]: I1011 10:55:29.012109 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-scripts\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.012906 master-0 kubenswrapper[4790]: I1011 10:55:29.012868 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4070c53-33f0-488e-80d1-f374f59c96cd-config-data\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.016002 master-0 kubenswrapper[4790]: I1011 10:55:29.015942 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwhc7\" (UniqueName: \"kubernetes.io/projected/e4070c53-33f0-488e-80d1-f374f59c96cd-kube-api-access-dwhc7\") pod \"ironic-d55d46749-qq6mv\" (UID: \"e4070c53-33f0-488e-80d1-f374f59c96cd\") " 
pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.058791 master-0 kubenswrapper[4790]: I1011 10:55:29.058691 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:29.140068 master-1 kubenswrapper[4771]: I1011 10:55:29.139757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.140396 master-1 kubenswrapper[4771]: I1011 10:55:29.140095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sl2p\" (UniqueName: \"kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.140396 master-1 kubenswrapper[4771]: I1011 10:55:29.140178 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.244402 master-1 kubenswrapper[4771]: I1011 10:55:29.242671 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.244402 master-1 kubenswrapper[4771]: I1011 10:55:29.242738 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sl2p\" (UniqueName: \"kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.244402 master-1 kubenswrapper[4771]: I1011 10:55:29.242792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.244402 master-1 kubenswrapper[4771]: I1011 10:55:29.243313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.244402 master-1 kubenswrapper[4771]: I1011 10:55:29.243608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.294388 master-1 kubenswrapper[4771]: I1011 10:55:29.293630 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sl2p\" (UniqueName: \"kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p\") pod \"redhat-operators-8vzsw\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.298377 master-1 kubenswrapper[4771]: I1011 10:55:29.298074 4771 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:29.590859 master-0 kubenswrapper[4790]: I1011 10:55:29.590804 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-d55d46749-qq6mv"] Oct 11 10:55:29.599493 master-0 kubenswrapper[4790]: I1011 10:55:29.599049 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:29.602153 master-2 kubenswrapper[4776]: I1011 10:55:29.602033 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"0702bc6f895affee86cc9e5d5ededd1bae1cb5a2466e13dd02b7ec597bef759d"} Oct 11 10:55:29.776436 master-0 kubenswrapper[4790]: I1011 10:55:29.776299 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:29.852800 master-0 kubenswrapper[4790]: I1011 10:55:29.852726 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:29.923394 master-0 kubenswrapper[4790]: I1011 10:55:29.923231 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m7jm\" (UniqueName: \"kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm\") pod \"7bb6dd47-2665-4a3f-8773-2a61034146a3\" (UID: \"7bb6dd47-2665-4a3f-8773-2a61034146a3\") " Oct 11 10:55:29.931126 master-0 kubenswrapper[4790]: I1011 10:55:29.928512 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm" (OuterVolumeSpecName: "kube-api-access-5m7jm") pod "7bb6dd47-2665-4a3f-8773-2a61034146a3" (UID: "7bb6dd47-2665-4a3f-8773-2a61034146a3"). 
InnerVolumeSpecName "kube-api-access-5m7jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:30.028817 master-0 kubenswrapper[4790]: I1011 10:55:30.028782 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m7jm\" (UniqueName: \"kubernetes.io/projected/7bb6dd47-2665-4a3f-8773-2a61034146a3-kube-api-access-5m7jm\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:30.396233 master-0 kubenswrapper[4790]: I1011 10:55:30.396128 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rgsq2" event={"ID":"7bb6dd47-2665-4a3f-8773-2a61034146a3","Type":"ContainerDied","Data":"159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3"} Oct 11 10:55:30.396233 master-0 kubenswrapper[4790]: I1011 10:55:30.396198 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159cf359ecee116df9e95cf785829a9f2d2d271b6a361ea0f6a5198dc8a786f3" Oct 11 10:55:30.396233 master-0 kubenswrapper[4790]: I1011 10:55:30.396203 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rgsq2" Oct 11 10:55:30.397227 master-0 kubenswrapper[4790]: I1011 10:55:30.397181 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-d55d46749-qq6mv" event={"ID":"e4070c53-33f0-488e-80d1-f374f59c96cd","Type":"ContainerStarted","Data":"e3a3e629dbcff296cf5826be81ab4b7bccd4a010c0e1d91ad34fc053597033a5"} Oct 11 10:55:30.539030 master-0 kubenswrapper[4790]: I1011 10:55:30.534289 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:31.415288 master-0 kubenswrapper[4790]: I1011 10:55:31.415212 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="cinder-volume" containerID="cri-o://a9ec38cb1d58ff96f557dd1efeffa25f09991c28ad10ce26a6daa40b2d7694a0" gracePeriod=30 Oct 11 10:55:31.415995 master-0 kubenswrapper[4790]: I1011 10:55:31.415698 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="probe" containerID="cri-o://e192e2becd115690c1d668bc7d78fc93266390a2fb95e6340eaf7c60f1f198e9" gracePeriod=30 Oct 11 10:55:31.462646 master-1 kubenswrapper[4771]: I1011 10:55:31.462595 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" Oct 11 10:55:31.512163 master-2 kubenswrapper[4776]: I1011 10:55:31.512126 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.512691 master-2 kubenswrapper[4776]: I1011 10:55:31.512222 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.550952 master-1 kubenswrapper[4771]: I1011 10:55:31.550878 4771 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:55:31.551268 master-1 kubenswrapper[4771]: I1011 10:55:31.551180 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-595686b98f-blmgp" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="dnsmasq-dns" containerID="cri-o://64768fb3aaa57fbf977b42bcf01d911517cd3d56cc20742d472651a90c1c3f06" gracePeriod=10 Oct 11 10:55:31.574796 master-2 kubenswrapper[4776]: I1011 10:55:31.569968 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.589530 master-2 kubenswrapper[4776]: I1011 10:55:31.589485 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.603082 master-0 kubenswrapper[4790]: I1011 10:55:31.602984 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:55:31.603428 master-0 kubenswrapper[4790]: E1011 10:55:31.603391 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb6dd47-2665-4a3f-8773-2a61034146a3" containerName="mariadb-database-create" Oct 11 10:55:31.603428 master-0 kubenswrapper[4790]: I1011 10:55:31.603412 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb6dd47-2665-4a3f-8773-2a61034146a3" containerName="mariadb-database-create" Oct 11 10:55:31.603581 master-0 kubenswrapper[4790]: I1011 10:55:31.603557 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb6dd47-2665-4a3f-8773-2a61034146a3" containerName="mariadb-database-create" Oct 11 10:55:31.604541 master-0 kubenswrapper[4790]: I1011 10:55:31.604495 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.607980 master-0 kubenswrapper[4790]: I1011 10:55:31.607927 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:55:31.608488 master-0 kubenswrapper[4790]: I1011 10:55:31.608447 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:55:31.608896 master-0 kubenswrapper[4790]: I1011 10:55:31.608837 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:55:31.609032 master-0 kubenswrapper[4790]: I1011 10:55:31.608984 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:55:31.609156 master-0 kubenswrapper[4790]: I1011 10:55:31.609118 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:55:31.612906 master-2 kubenswrapper[4776]: I1011 10:55:31.612837 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-backup-0" podStartSLOduration=5.612815097 podStartE2EDuration="5.612815097s" podCreationTimestamp="2025-10-11 10:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:29.779278517 +0000 UTC m=+1764.563705236" watchObservedRunningTime="2025-10-11 10:55:31.612815097 +0000 UTC m=+1766.397241806" Oct 11 10:55:31.626103 master-0 kubenswrapper[4790]: I1011 10:55:31.626049 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:55:31.646873 master-2 kubenswrapper[4776]: I1011 10:55:31.646527 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-db76b8b85-xpl75" 
event={"ID":"803272bc-1d03-4e1f-af8a-42b8d6e029d1","Type":"ContainerStarted","Data":"5796768e6f59a267b44c87a688513b751c2a2db01fdde74286cf46500fe7d585"} Oct 11 10:55:31.646873 master-2 kubenswrapper[4776]: I1011 10:55:31.646668 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:31.647093 master-2 kubenswrapper[4776]: I1011 10:55:31.647078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.647139 master-2 kubenswrapper[4776]: I1011 10:55:31.647098 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.697873 master-2 kubenswrapper[4776]: I1011 10:55:31.696597 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-db76b8b85-xpl75" podStartSLOduration=2.37286526 podStartE2EDuration="4.696575156s" podCreationTimestamp="2025-10-11 10:55:27 +0000 UTC" firstStartedPulling="2025-10-11 10:55:28.198379689 +0000 UTC m=+1762.982806398" lastFinishedPulling="2025-10-11 10:55:30.522089585 +0000 UTC m=+1765.306516294" observedRunningTime="2025-10-11 10:55:31.68678995 +0000 UTC m=+1766.471216669" watchObservedRunningTime="2025-10-11 10:55:31.696575156 +0000 UTC m=+1766.481001865" Oct 11 10:55:31.699100 master-0 kubenswrapper[4790]: I1011 10:55:31.698931 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.699100 master-0 kubenswrapper[4790]: I1011 10:55:31.699014 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.699362 master-0 kubenswrapper[4790]: I1011 10:55:31.699111 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9f4\" (UniqueName: \"kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.699362 master-0 kubenswrapper[4790]: I1011 10:55:31.699143 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.699362 master-0 kubenswrapper[4790]: I1011 10:55:31.699183 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.699362 master-0 kubenswrapper[4790]: I1011 10:55:31.699225 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801677 master-0 kubenswrapper[4790]: I1011 10:55:31.801613 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801677 master-0 kubenswrapper[4790]: I1011 10:55:31.801680 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801996 master-0 kubenswrapper[4790]: I1011 10:55:31.801771 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9f4\" (UniqueName: \"kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801996 master-0 kubenswrapper[4790]: I1011 10:55:31.801796 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801996 master-0 kubenswrapper[4790]: I1011 10:55:31.801831 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.801996 master-0 kubenswrapper[4790]: I1011 
10:55:31.801867 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.802853 master-0 kubenswrapper[4790]: I1011 10:55:31.802795 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.802979 master-0 kubenswrapper[4790]: I1011 10:55:31.802905 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.803906 master-0 kubenswrapper[4790]: I1011 10:55:31.803880 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.804819 master-0 kubenswrapper[4790]: I1011 10:55:31.804783 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.805163 master-0 kubenswrapper[4790]: I1011 10:55:31.805110 4790 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.854997 master-0 kubenswrapper[4790]: I1011 10:55:31.854802 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9f4\" (UniqueName: \"kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4\") pod \"dnsmasq-dns-768f954cfc-9xg22\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") " pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.945559 master-0 kubenswrapper[4790]: I1011 10:55:31.945464 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:31.978958 master-1 kubenswrapper[4771]: I1011 10:55:31.978896 4771 generic.go:334] "Generic (PLEG): container finished" podID="a50b2fec-a3b6-4245-9080-5987b411b581" containerID="64768fb3aaa57fbf977b42bcf01d911517cd3d56cc20742d472651a90c1c3f06" exitCode=0 Oct 11 10:55:31.979465 master-1 kubenswrapper[4771]: I1011 10:55:31.978986 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-blmgp" event={"ID":"a50b2fec-a3b6-4245-9080-5987b411b581","Type":"ContainerDied","Data":"64768fb3aaa57fbf977b42bcf01d911517cd3d56cc20742d472651a90c1c3f06"} Oct 11 10:55:32.295450 master-1 kubenswrapper[4771]: I1011 10:55:32.295233 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-0" Oct 11 10:55:32.321051 master-2 kubenswrapper[4776]: I1011 10:55:32.320978 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:32.424481 master-0 kubenswrapper[4790]: I1011 10:55:32.424416 4790 generic.go:334] "Generic (PLEG): container finished" 
podID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerID="a9ec38cb1d58ff96f557dd1efeffa25f09991c28ad10ce26a6daa40b2d7694a0" exitCode=0 Oct 11 10:55:32.424481 master-0 kubenswrapper[4790]: I1011 10:55:32.424474 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerDied","Data":"a9ec38cb1d58ff96f557dd1efeffa25f09991c28ad10ce26a6daa40b2d7694a0"} Oct 11 10:55:32.486745 master-1 kubenswrapper[4771]: I1011 10:55:32.486679 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:32.487342 master-1 kubenswrapper[4771]: I1011 10:55:32.486765 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:32.537275 master-1 kubenswrapper[4771]: I1011 10:55:32.537207 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:32.538419 master-1 kubenswrapper[4771]: I1011 10:55:32.538384 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:32.933087 master-1 kubenswrapper[4771]: I1011 10:55:32.932647 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-595686b98f-blmgp" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.129.0.124:5353: connect: connection refused" Oct 11 10:55:32.991118 master-1 kubenswrapper[4771]: I1011 10:55:32.990933 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:32.991118 master-1 kubenswrapper[4771]: I1011 10:55:32.991024 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0" 
Oct 11 10:55:33.442353 master-0 kubenswrapper[4790]: I1011 10:55:33.442279 4790 generic.go:334] "Generic (PLEG): container finished" podID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerID="e192e2becd115690c1d668bc7d78fc93266390a2fb95e6340eaf7c60f1f198e9" exitCode=0 Oct 11 10:55:33.442353 master-0 kubenswrapper[4790]: I1011 10:55:33.442347 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerDied","Data":"e192e2becd115690c1d668bc7d78fc93266390a2fb95e6340eaf7c60f1f198e9"} Oct 11 10:55:33.703139 master-2 kubenswrapper[4776]: I1011 10:55:33.703098 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:34.623235 master-2 kubenswrapper[4776]: I1011 10:55:34.623080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:34.781391 master-1 kubenswrapper[4771]: I1011 10:55:34.781232 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887b79bcd-stzg5" Oct 11 10:55:34.907381 master-2 kubenswrapper[4776]: I1011 10:55:34.905725 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:34.916350 master-0 kubenswrapper[4790]: I1011 10:55:34.916280 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887b79bcd-vk5xz" Oct 11 10:55:34.931685 master-1 kubenswrapper[4771]: I1011 10:55:34.931629 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:35.016712 master-1 kubenswrapper[4771]: I1011 10:55:35.016599 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:55:35.072657 master-1 kubenswrapper[4771]: I1011 10:55:35.072464 
4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:55:35.381670 master-0 kubenswrapper[4790]: I1011 10:55:35.381636 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:35.391906 master-0 kubenswrapper[4790]: I1011 10:55:35.389887 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:35.478689 master-0 kubenswrapper[4790]: I1011 10:55:35.478514 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nw6gg" event={"ID":"8b0929b8-354d-4de6-9e2d-ac6e11324b10","Type":"ContainerDied","Data":"99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f"} Oct 11 10:55:35.478689 master-0 kubenswrapper[4790]: I1011 10:55:35.478612 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f365cb76e3f76cb36494306ddb4f6a8b5e7332210b71ce165a6676119b559f" Oct 11 10:55:35.479010 master-0 kubenswrapper[4790]: I1011 10:55:35.478963 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nw6gg" Oct 11 10:55:35.480559 master-0 kubenswrapper[4790]: I1011 10:55:35.480495 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r9jnj" event={"ID":"5b7ae2a3-6802-400c-bbe7-5729052a2c1c","Type":"ContainerDied","Data":"4074522a220cfcc13675ec89ed6e6addf00a02c34d3e5d2f86e64b3f545d3cad"} Oct 11 10:55:35.480621 master-0 kubenswrapper[4790]: I1011 10:55:35.480563 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4074522a220cfcc13675ec89ed6e6addf00a02c34d3e5d2f86e64b3f545d3cad" Oct 11 10:55:35.480621 master-0 kubenswrapper[4790]: I1011 10:55:35.480579 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-r9jnj" Oct 11 10:55:35.495586 master-0 kubenswrapper[4790]: I1011 10:55:35.495531 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcmpk\" (UniqueName: \"kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk\") pod \"5b7ae2a3-6802-400c-bbe7-5729052a2c1c\" (UID: \"5b7ae2a3-6802-400c-bbe7-5729052a2c1c\") " Oct 11 10:55:35.495898 master-0 kubenswrapper[4790]: I1011 10:55:35.495735 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4cd\" (UniqueName: \"kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd\") pod \"8b0929b8-354d-4de6-9e2d-ac6e11324b10\" (UID: \"8b0929b8-354d-4de6-9e2d-ac6e11324b10\") " Oct 11 10:55:35.511631 master-0 kubenswrapper[4790]: I1011 10:55:35.508381 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk" (OuterVolumeSpecName: "kube-api-access-kcmpk") pod "5b7ae2a3-6802-400c-bbe7-5729052a2c1c" (UID: "5b7ae2a3-6802-400c-bbe7-5729052a2c1c"). InnerVolumeSpecName "kube-api-access-kcmpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:35.511631 master-0 kubenswrapper[4790]: I1011 10:55:35.509582 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd" (OuterVolumeSpecName: "kube-api-access-7w4cd") pod "8b0929b8-354d-4de6-9e2d-ac6e11324b10" (UID: "8b0929b8-354d-4de6-9e2d-ac6e11324b10"). InnerVolumeSpecName "kube-api-access-7w4cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:35.597664 master-0 kubenswrapper[4790]: I1011 10:55:35.597579 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcmpk\" (UniqueName: \"kubernetes.io/projected/5b7ae2a3-6802-400c-bbe7-5729052a2c1c-kube-api-access-kcmpk\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:35.597664 master-0 kubenswrapper[4790]: I1011 10:55:35.597620 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4cd\" (UniqueName: \"kubernetes.io/projected/8b0929b8-354d-4de6-9e2d-ac6e11324b10-kube-api-access-7w4cd\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:35.828761 master-0 kubenswrapper[4790]: I1011 10:55:35.828722 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.003988 master-0 kubenswrapper[4790]: I1011 10:55:36.003944 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004024 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004076 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004099 4790 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004130 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004178 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004179 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run" (OuterVolumeSpecName: "run") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004210 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004260 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004413 master-0 kubenswrapper[4790]: I1011 10:55:36.004391 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004454 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004487 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004520 4790 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004510 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys" (OuterVolumeSpecName: "sys") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004601 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004673 master-0 kubenswrapper[4790]: I1011 10:55:36.004639 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004545 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004569 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev" (OuterVolumeSpecName: "dev") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004591 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004628 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004651 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004688 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004823 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.004875 master-0 kubenswrapper[4790]: I1011 10:55:36.004870 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:36.005092 master-0 kubenswrapper[4790]: I1011 10:55:36.004932 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k54k8\" (UniqueName: \"kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8\") pod \"60d68c10-8e1c-4a92-86f6-e2925df0f714\" (UID: \"60d68c10-8e1c-4a92-86f6-e2925df0f714\") " Oct 11 10:55:36.006062 master-0 kubenswrapper[4790]: I1011 10:55:36.006021 4790 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-sys\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006062 master-0 kubenswrapper[4790]: I1011 10:55:36.006050 4790 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006070 4790 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-dev\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006086 4790 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-lib-modules\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006099 4790 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-nvme\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006113 4790 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006126 4790 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-run\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006140 4790 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006153 master-0 kubenswrapper[4790]: I1011 10:55:36.006153 4790 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.006357 master-0 kubenswrapper[4790]: I1011 10:55:36.006169 4790 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/60d68c10-8e1c-4a92-86f6-e2925df0f714-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.007947 master-0 kubenswrapper[4790]: I1011 10:55:36.007893 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts" (OuterVolumeSpecName: "scripts") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:36.011275 master-0 kubenswrapper[4790]: I1011 10:55:36.011237 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:36.015439 master-1 kubenswrapper[4771]: I1011 10:55:36.013805 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:55:36.015570 master-0 kubenswrapper[4790]: I1011 10:55:36.015500 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8" (OuterVolumeSpecName: "kube-api-access-k54k8") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "kube-api-access-k54k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:36.030793 master-1 kubenswrapper[4771]: I1011 10:55:36.030750 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-blmgp" Oct 11 10:55:36.030986 master-1 kubenswrapper[4771]: I1011 10:55:36.030838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-blmgp" event={"ID":"a50b2fec-a3b6-4245-9080-5987b411b581","Type":"ContainerDied","Data":"df3edeb105ed637b9d7fa0933dc5cae9f70ee8feff1cdbfb3585b9bc6889a72c"} Oct 11 10:55:36.030986 master-1 kubenswrapper[4771]: I1011 10:55:36.030960 4771 scope.go:117] "RemoveContainer" containerID="64768fb3aaa57fbf977b42bcf01d911517cd3d56cc20742d472651a90c1c3f06" Oct 11 10:55:36.086657 master-0 kubenswrapper[4790]: I1011 10:55:36.086386 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:36.108119 master-0 kubenswrapper[4790]: I1011 10:55:36.108069 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-scripts\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.108119 master-0 kubenswrapper[4790]: I1011 10:55:36.108110 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.108255 master-0 kubenswrapper[4790]: I1011 10:55:36.108147 4790 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data-custom\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.108255 master-0 kubenswrapper[4790]: I1011 10:55:36.108165 4790 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-k54k8\" (UniqueName: \"kubernetes.io/projected/60d68c10-8e1c-4a92-86f6-e2925df0f714-kube-api-access-k54k8\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.151732 master-0 kubenswrapper[4790]: I1011 10:55:36.145784 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data" (OuterVolumeSpecName: "config-data") pod "60d68c10-8e1c-4a92-86f6-e2925df0f714" (UID: "60d68c10-8e1c-4a92-86f6-e2925df0f714"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:36.158475 master-1 kubenswrapper[4771]: I1011 10:55:36.158265 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb\") pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.158745 master-1 kubenswrapper[4771]: I1011 10:55:36.158534 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0\") pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.158745 master-1 kubenswrapper[4771]: I1011 10:55:36.158683 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc\") pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.158745 master-1 kubenswrapper[4771]: I1011 10:55:36.158728 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb\") 
pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.158865 master-1 kubenswrapper[4771]: I1011 10:55:36.158776 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config\") pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.158865 master-1 kubenswrapper[4771]: I1011 10:55:36.158830 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtcx9\" (UniqueName: \"kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9\") pod \"a50b2fec-a3b6-4245-9080-5987b411b581\" (UID: \"a50b2fec-a3b6-4245-9080-5987b411b581\") " Oct 11 10:55:36.165135 master-1 kubenswrapper[4771]: I1011 10:55:36.165061 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9" (OuterVolumeSpecName: "kube-api-access-mtcx9") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "kube-api-access-mtcx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:36.212335 master-0 kubenswrapper[4790]: I1011 10:55:36.212269 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d68c10-8e1c-4a92-86f6-e2925df0f714-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:36.217041 master-1 kubenswrapper[4771]: I1011 10:55:36.216897 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:36.220856 master-1 kubenswrapper[4771]: I1011 10:55:36.220459 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:36.224725 master-1 kubenswrapper[4771]: I1011 10:55:36.224612 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:36.229991 master-1 kubenswrapper[4771]: I1011 10:55:36.229923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config" (OuterVolumeSpecName: "config") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:36.238989 master-1 kubenswrapper[4771]: I1011 10:55:36.238921 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a50b2fec-a3b6-4245-9080-5987b411b581" (UID: "a50b2fec-a3b6-4245-9080-5987b411b581"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:36.246947 master-0 kubenswrapper[4790]: I1011 10:55:36.246245 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:55:36.256413 master-0 kubenswrapper[4790]: W1011 10:55:36.251998 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff24705_c685_47d9_ad1b_9ec04c541bf7.slice/crio-2e505acf6ba0dd723d6c70412053c327d464e773f7d7a12371848c3428f7bbf6 WatchSource:0}: Error finding container 2e505acf6ba0dd723d6c70412053c327d464e773f7d7a12371848c3428f7bbf6: Status 404 returned error can't find the container with id 2e505acf6ba0dd723d6c70412053c327d464e773f7d7a12371848c3428f7bbf6 Oct 11 10:55:36.262526 master-1 kubenswrapper[4771]: I1011 10:55:36.262453 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.262526 master-1 kubenswrapper[4771]: I1011 10:55:36.262510 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.262526 master-1 kubenswrapper[4771]: I1011 10:55:36.262530 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.262526 master-1 kubenswrapper[4771]: I1011 10:55:36.262541 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.263012 master-1 kubenswrapper[4771]: I1011 10:55:36.262552 
4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a50b2fec-a3b6-4245-9080-5987b411b581-config\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.263012 master-1 kubenswrapper[4771]: I1011 10:55:36.262563 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtcx9\" (UniqueName: \"kubernetes.io/projected/a50b2fec-a3b6-4245-9080-5987b411b581-kube-api-access-mtcx9\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:36.340923 master-1 kubenswrapper[4771]: I1011 10:55:36.340867 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64fcdf7d54-8r455"] Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 10:55:36.347627 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8a39-account-create-clqqg"] Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: E1011 10:55:36.348645 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7ae2a3-6802-400c-bbe7-5729052a2c1c" containerName="mariadb-database-create" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 10:55:36.348676 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7ae2a3-6802-400c-bbe7-5729052a2c1c" containerName="mariadb-database-create" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: E1011 10:55:36.348724 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0929b8-354d-4de6-9e2d-ac6e11324b10" containerName="mariadb-database-create" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 10:55:36.348735 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0929b8-354d-4de6-9e2d-ac6e11324b10" containerName="mariadb-database-create" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: E1011 10:55:36.348753 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="cinder-volume" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 
10:55:36.348759 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="cinder-volume" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: E1011 10:55:36.349048 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="probe" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 10:55:36.349062 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="probe" Oct 11 10:55:36.349447 master-0 kubenswrapper[4790]: I1011 10:55:36.349448 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0929b8-354d-4de6-9e2d-ac6e11324b10" containerName="mariadb-database-create" Oct 11 10:55:36.349894 master-0 kubenswrapper[4790]: I1011 10:55:36.349481 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="cinder-volume" Oct 11 10:55:36.349894 master-0 kubenswrapper[4790]: I1011 10:55:36.349496 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7ae2a3-6802-400c-bbe7-5729052a2c1c" containerName="mariadb-database-create" Oct 11 10:55:36.349894 master-0 kubenswrapper[4790]: I1011 10:55:36.349509 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" containerName="probe" Oct 11 10:55:36.350561 master-0 kubenswrapper[4790]: I1011 10:55:36.350519 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:36.356499 master-0 kubenswrapper[4790]: I1011 10:55:36.356440 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a39-account-create-clqqg"] Oct 11 10:55:36.360327 master-0 kubenswrapper[4790]: I1011 10:55:36.359522 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 11 10:55:36.362773 master-2 kubenswrapper[4776]: I1011 10:55:36.362725 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:36.367981 master-1 kubenswrapper[4771]: W1011 10:55:36.367867 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc94855_37f8_4fa8_a3e1_72808b37f966.slice/crio-7aec9cb2b97e7d8a103a161accbb44e46472e2bd0c84ba74baf43bde7dc083ff WatchSource:0}: Error finding container 7aec9cb2b97e7d8a103a161accbb44e46472e2bd0c84ba74baf43bde7dc083ff: Status 404 returned error can't find the container with id 7aec9cb2b97e7d8a103a161accbb44e46472e2bd0c84ba74baf43bde7dc083ff Oct 11 10:55:36.390452 master-1 kubenswrapper[4771]: I1011 10:55:36.390338 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:55:36.402471 master-1 kubenswrapper[4771]: I1011 10:55:36.402374 4771 scope.go:117] "RemoveContainer" containerID="314a76b2857d795a4f3ebe7e8b09e8abca5d105e5ba862e3833d60a9a90b7cc3" Oct 11 10:55:36.403085 master-1 kubenswrapper[4771]: I1011 10:55:36.402948 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-blmgp"] Oct 11 10:55:36.491034 master-1 kubenswrapper[4771]: I1011 10:55:36.490925 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" path="/var/lib/kubelet/pods/a50b2fec-a3b6-4245-9080-5987b411b581/volumes" Oct 11 
10:55:36.504480 master-0 kubenswrapper[4790]: I1011 10:55:36.504431 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f5c98dcd5-c8xhk" event={"ID":"c41b60ff-457d-4c45-8b56-4523c5c0097f","Type":"ContainerStarted","Data":"d68ca291cf2b0da68d6a8d6c151be0aa939d7b802691ee00796f934bd8619918"} Oct 11 10:55:36.504883 master-0 kubenswrapper[4790]: I1011 10:55:36.504859 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5f5c98dcd5-c8xhk" podUID="c41b60ff-457d-4c45-8b56-4523c5c0097f" containerName="heat-api" containerID="cri-o://d68ca291cf2b0da68d6a8d6c151be0aa939d7b802691ee00796f934bd8619918" gracePeriod=60 Oct 11 10:55:36.505317 master-0 kubenswrapper[4790]: I1011 10:55:36.505302 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:36.512307 master-0 kubenswrapper[4790]: I1011 10:55:36.511834 4790 generic.go:334] "Generic (PLEG): container finished" podID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerID="3509395ffa91a1e45916358a3e4f7592cd6af5d4e7f1b22db61be9000daad5af" exitCode=1 Oct 11 10:55:36.513069 master-0 kubenswrapper[4790]: I1011 10:55:36.513045 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" event={"ID":"90ca8fc6-bc53-461b-8384-ca8344e8abb1","Type":"ContainerDied","Data":"3509395ffa91a1e45916358a3e4f7592cd6af5d4e7f1b22db61be9000daad5af"} Oct 11 10:55:36.516128 master-0 kubenswrapper[4790]: I1011 10:55:36.513679 4790 scope.go:117] "RemoveContainer" containerID="3509395ffa91a1e45916358a3e4f7592cd6af5d4e7f1b22db61be9000daad5af" Oct 11 10:55:36.524347 master-0 kubenswrapper[4790]: I1011 10:55:36.523797 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" event={"ID":"6ff24705-c685-47d9-ad1b-9ec04c541bf7","Type":"ContainerStarted","Data":"2e505acf6ba0dd723d6c70412053c327d464e773f7d7a12371848c3428f7bbf6"} Oct 11 10:55:36.558665 
master-0 kubenswrapper[4790]: I1011 10:55:36.557937 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"60d68c10-8e1c-4a92-86f6-e2925df0f714","Type":"ContainerDied","Data":"3242bfe4a3531200a62ec255f320d35782117671872a6959f988bac26f9f7d3f"} Oct 11 10:55:36.558665 master-0 kubenswrapper[4790]: I1011 10:55:36.558010 4790 scope.go:117] "RemoveContainer" containerID="e192e2becd115690c1d668bc7d78fc93266390a2fb95e6340eaf7c60f1f198e9" Oct 11 10:55:36.558665 master-0 kubenswrapper[4790]: I1011 10:55:36.558065 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.565645 master-0 kubenswrapper[4790]: I1011 10:55:36.565089 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptnz2\" (UniqueName: \"kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2\") pod \"nova-cell0-8a39-account-create-clqqg\" (UID: \"e37e1fe6-6e89-4407-a40f-cf494a35eccd\") " pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:36.587072 master-0 kubenswrapper[4790]: I1011 10:55:36.586959 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5f5c98dcd5-c8xhk" podStartSLOduration=9.944004454 podStartE2EDuration="20.586920368s" podCreationTimestamp="2025-10-11 10:55:16 +0000 UTC" firstStartedPulling="2025-10-11 10:55:25.217346588 +0000 UTC m=+1001.771806880" lastFinishedPulling="2025-10-11 10:55:35.860262512 +0000 UTC m=+1012.414722794" observedRunningTime="2025-10-11 10:55:36.55605203 +0000 UTC m=+1013.110512322" watchObservedRunningTime="2025-10-11 10:55:36.586920368 +0000 UTC m=+1013.141380660" Oct 11 10:55:36.603053 master-0 kubenswrapper[4790]: I1011 10:55:36.603002 4790 scope.go:117] "RemoveContainer" containerID="a9ec38cb1d58ff96f557dd1efeffa25f09991c28ad10ce26a6daa40b2d7694a0" Oct 11 10:55:36.630942 
master-0 kubenswrapper[4790]: I1011 10:55:36.630813 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:36.637016 master-0 kubenswrapper[4790]: I1011 10:55:36.636981 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:36.666956 master-0 kubenswrapper[4790]: I1011 10:55:36.666752 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:36.666956 master-0 kubenswrapper[4790]: I1011 10:55:36.666943 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptnz2\" (UniqueName: \"kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2\") pod \"nova-cell0-8a39-account-create-clqqg\" (UID: \"e37e1fe6-6e89-4407-a40f-cf494a35eccd\") " pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:36.668778 master-0 kubenswrapper[4790]: I1011 10:55:36.668739 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.674998 master-0 kubenswrapper[4790]: I1011 10:55:36.674953 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-volume-lvm-iscsi-config-data" Oct 11 10:55:36.682271 master-0 kubenswrapper[4790]: I1011 10:55:36.682201 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:36.694734 master-0 kubenswrapper[4790]: I1011 10:55:36.694297 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptnz2\" (UniqueName: \"kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2\") pod \"nova-cell0-8a39-account-create-clqqg\" (UID: \"e37e1fe6-6e89-4407-a40f-cf494a35eccd\") " pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:36.770958 master-0 kubenswrapper[4790]: I1011 10:55:36.770902 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.771334 master-0 kubenswrapper[4790]: I1011 10:55:36.771316 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.771465 master-0 kubenswrapper[4790]: I1011 10:55:36.771451 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-dev\") pod 
\"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.771697 master-0 kubenswrapper[4790]: I1011 10:55:36.771682 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772110 master-0 kubenswrapper[4790]: I1011 10:55:36.772078 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772173 master-0 kubenswrapper[4790]: I1011 10:55:36.772139 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772231 master-0 kubenswrapper[4790]: I1011 10:55:36.772208 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmgf8\" (UniqueName: \"kubernetes.io/projected/24482e3e-ba4c-4920-90d4-077df9a7b329-kube-api-access-vmgf8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772299 master-0 kubenswrapper[4790]: I1011 10:55:36.772272 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772477 master-0 kubenswrapper[4790]: I1011 10:55:36.772462 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772592 master-0 kubenswrapper[4790]: I1011 10:55:36.772574 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.772970 master-0 kubenswrapper[4790]: I1011 10:55:36.772908 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.773106 master-0 kubenswrapper[4790]: I1011 10:55:36.773092 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.773277 master-0 
kubenswrapper[4790]: I1011 10:55:36.773263 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.773395 master-0 kubenswrapper[4790]: I1011 10:55:36.773381 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.773585 master-0 kubenswrapper[4790]: I1011 10:55:36.773542 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.824953 master-1 kubenswrapper[4771]: I1011 10:55:36.824886 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"] Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.884996 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885061 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885096 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmgf8\" (UniqueName: \"kubernetes.io/projected/24482e3e-ba4c-4920-90d4-077df9a7b329-kube-api-access-vmgf8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885121 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885149 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885168 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885190 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885220 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885243 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885267 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885287 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885335 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885355 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885386 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-dev\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885410 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.885755 master-0 kubenswrapper[4790]: I1011 10:55:36.885506 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-sys\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.886664 master-0 kubenswrapper[4790]: I1011 10:55:36.886221 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-machine-id\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.886664 master-0 kubenswrapper[4790]: I1011 10:55:36.886385 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.886905 master-0 kubenswrapper[4790]: I1011 10:55:36.886875 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-lib-modules\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.887008 master-0 kubenswrapper[4790]: I1011 10:55:36.886930 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-iscsi\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.887084 master-0 kubenswrapper[4790]: I1011 10:55:36.886888 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-locks-brick\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.887408 master-0 kubenswrapper[4790]: I1011 10:55:36.887305 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-etc-nvme\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.887551 master-0 kubenswrapper[4790]: I1011 10:55:36.887532 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-run\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.890851 master-0 kubenswrapper[4790]: I1011 10:55:36.888529 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-dev\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.890851 master-0 kubenswrapper[4790]: I1011 10:55:36.889570 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data-custom\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.890851 master-0 kubenswrapper[4790]: I1011 10:55:36.889643 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24482e3e-ba4c-4920-90d4-077df9a7b329-var-lib-cinder\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.892738 master-0 kubenswrapper[4790]: I1011 10:55:36.892127 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-combined-ca-bundle\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.893052 master-0 kubenswrapper[4790]: I1011 10:55:36.892850 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:36.897131 master-0 kubenswrapper[4790]: I1011 10:55:36.894349 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-config-data\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.909427 master-0 kubenswrapper[4790]: I1011 10:55:36.909338 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24482e3e-ba4c-4920-90d4-077df9a7b329-scripts\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:36.918386 master-0 kubenswrapper[4790]: I1011 10:55:36.918214 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmgf8\" (UniqueName: \"kubernetes.io/projected/24482e3e-ba4c-4920-90d4-077df9a7b329-kube-api-access-vmgf8\") pod \"cinder-b5802-volume-lvm-iscsi-0\" (UID: \"24482e3e-ba4c-4920-90d4-077df9a7b329\") " pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:37.044189 master-1 kubenswrapper[4771]: I1011 10:55:37.043207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerStarted","Data":"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104"} Oct 11 10:55:37.047593 master-1 kubenswrapper[4771]: I1011 
10:55:37.044944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerStarted","Data":"e046736cf54a6f375a2d21055bc37323ff6d218a499c8b0059aa035f5e4d1a0c"} Oct 11 10:55:37.047593 master-1 kubenswrapper[4771]: I1011 10:55:37.045035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerStarted","Data":"2e03e915f6f95ecc8f0f52052466e21bd1b0bb1a12eb203399bd0345ac65bccf"} Oct 11 10:55:37.047593 master-1 kubenswrapper[4771]: I1011 10:55:37.047126 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerStarted","Data":"10573443fa9f81c261e267c2d4f01ad7d7cf7482785a8f4f22c2ccd3fa1fc631"} Oct 11 10:55:37.050486 master-1 kubenswrapper[4771]: I1011 10:55:37.049978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerStarted","Data":"18f55509d99d6df6e062c9f98f3f97b0989b5f829acb0a772e9a836bc344b833"} Oct 11 10:55:37.050486 master-1 kubenswrapper[4771]: I1011 10:55:37.050200 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:37.051892 master-1 kubenswrapper[4771]: I1011 10:55:37.051865 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" event={"ID":"1fe7833d-9251-4545-ba68-f58c146188f1","Type":"ContainerStarted","Data":"3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d"} Oct 11 10:55:37.052015 master-1 kubenswrapper[4771]: I1011 10:55:37.051989 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" 
podUID="1fe7833d-9251-4545-ba68-f58c146188f1" containerName="heat-cfnapi" containerID="cri-o://3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d" gracePeriod=60 Oct 11 10:55:37.052288 master-1 kubenswrapper[4771]: I1011 10:55:37.052269 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:37.054945 master-1 kubenswrapper[4771]: I1011 10:55:37.054916 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" event={"ID":"2dc94855-37f8-4fa8-a3e1-72808b37f966","Type":"ContainerStarted","Data":"3d839e5a2690f36d8e0304ba13a497d4c3537f02e2e7261be9b1a7abfcf45c44"} Oct 11 10:55:37.054945 master-1 kubenswrapper[4771]: I1011 10:55:37.054946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" event={"ID":"2dc94855-37f8-4fa8-a3e1-72808b37f966","Type":"ContainerStarted","Data":"7aec9cb2b97e7d8a103a161accbb44e46472e2bd0c84ba74baf43bde7dc083ff"} Oct 11 10:55:37.055589 master-1 kubenswrapper[4771]: I1011 10:55:37.055551 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:37.057154 master-1 kubenswrapper[4771]: I1011 10:55:37.057110 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d647f9c47-x7xc2" event={"ID":"e0657ee5-2e60-4a96-905e-814f46a72970","Type":"ContainerStarted","Data":"a5ba999f8e1551f739b0074644873dac43d11fc22bc8e7bb8107aced2b4ca581"} Oct 11 10:55:37.057496 master-1 kubenswrapper[4771]: I1011 10:55:37.057474 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:37.090349 master-1 kubenswrapper[4771]: I1011 10:55:37.090251 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" podStartSLOduration=11.098397867 podStartE2EDuration="23.090223872s" 
podCreationTimestamp="2025-10-11 10:55:14 +0000 UTC" firstStartedPulling="2025-10-11 10:55:23.923328249 +0000 UTC m=+1755.897554700" lastFinishedPulling="2025-10-11 10:55:35.915154264 +0000 UTC m=+1767.889380705" observedRunningTime="2025-10-11 10:55:37.09012523 +0000 UTC m=+1769.064351691" watchObservedRunningTime="2025-10-11 10:55:37.090223872 +0000 UTC m=+1769.064450313" Oct 11 10:55:37.100054 master-0 kubenswrapper[4790]: I1011 10:55:37.099952 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:37.120533 master-1 kubenswrapper[4771]: I1011 10:55:37.120433 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" podStartSLOduration=8.200960996 podStartE2EDuration="21.120406253s" podCreationTimestamp="2025-10-11 10:55:16 +0000 UTC" firstStartedPulling="2025-10-11 10:55:23.508151501 +0000 UTC m=+1755.482377942" lastFinishedPulling="2025-10-11 10:55:36.427596758 +0000 UTC m=+1768.401823199" observedRunningTime="2025-10-11 10:55:37.112823812 +0000 UTC m=+1769.087050243" watchObservedRunningTime="2025-10-11 10:55:37.120406253 +0000 UTC m=+1769.094632694" Oct 11 10:55:37.153108 master-1 kubenswrapper[4771]: I1011 10:55:37.153009 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-d647f9c47-x7xc2" podStartSLOduration=2.433012732 podStartE2EDuration="12.152975062s" podCreationTimestamp="2025-10-11 10:55:25 +0000 UTC" firstStartedPulling="2025-10-11 10:55:26.733145152 +0000 UTC m=+1758.707371593" lastFinishedPulling="2025-10-11 10:55:36.453107472 +0000 UTC m=+1768.427333923" observedRunningTime="2025-10-11 10:55:37.146910906 +0000 UTC m=+1769.121137357" watchObservedRunningTime="2025-10-11 10:55:37.152975062 +0000 UTC m=+1769.127201503" Oct 11 10:55:37.179023 master-1 kubenswrapper[4771]: I1011 10:55:37.178936 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-cfnapi-64fcdf7d54-8r455" podStartSLOduration=10.178915539 podStartE2EDuration="10.178915539s" podCreationTimestamp="2025-10-11 10:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:37.176364765 +0000 UTC m=+1769.150591216" watchObservedRunningTime="2025-10-11 10:55:37.178915539 +0000 UTC m=+1769.153141980" Oct 11 10:55:37.370590 master-0 kubenswrapper[4790]: I1011 10:55:37.370550 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a39-account-create-clqqg"] Oct 11 10:55:37.402603 master-0 kubenswrapper[4790]: I1011 10:55:37.402213 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:37.527120 master-2 kubenswrapper[4776]: I1011 10:55:37.525069 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:37.594732 master-0 kubenswrapper[4790]: I1011 10:55:37.590952 4790 generic.go:334] "Generic (PLEG): container finished" podID="c41b60ff-457d-4c45-8b56-4523c5c0097f" containerID="d68ca291cf2b0da68d6a8d6c151be0aa939d7b802691ee00796f934bd8619918" exitCode=0 Oct 11 10:55:37.594732 master-0 kubenswrapper[4790]: I1011 10:55:37.591051 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f5c98dcd5-c8xhk" event={"ID":"c41b60ff-457d-4c45-8b56-4523c5c0097f","Type":"ContainerDied","Data":"d68ca291cf2b0da68d6a8d6c151be0aa939d7b802691ee00796f934bd8619918"} Oct 11 10:55:37.594732 master-0 kubenswrapper[4790]: I1011 10:55:37.591136 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f5c98dcd5-c8xhk" event={"ID":"c41b60ff-457d-4c45-8b56-4523c5c0097f","Type":"ContainerDied","Data":"c3de8b5a223cd729b6c034e2228715eb85d26de982d08c60dc47229ac7d1b110"} Oct 11 10:55:37.594732 master-0 kubenswrapper[4790]: I1011 10:55:37.591152 4790 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3de8b5a223cd729b6c034e2228715eb85d26de982d08c60dc47229ac7d1b110" Oct 11 10:55:37.600743 master-0 kubenswrapper[4790]: I1011 10:55:37.597591 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a39-account-create-clqqg" event={"ID":"e37e1fe6-6e89-4407-a40f-cf494a35eccd","Type":"ContainerStarted","Data":"5bb2b37cb0135387058a9a5780b75d200e3d6755f1f8d0b825a04e68757b6f14"} Oct 11 10:55:37.600743 master-0 kubenswrapper[4790]: I1011 10:55:37.598119 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:37.600743 master-0 kubenswrapper[4790]: I1011 10:55:37.600128 4790 generic.go:334] "Generic (PLEG): container finished" podID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerID="471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf" exitCode=1 Oct 11 10:55:37.600743 master-0 kubenswrapper[4790]: I1011 10:55:37.600697 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" event={"ID":"90ca8fc6-bc53-461b-8384-ca8344e8abb1","Type":"ContainerDied","Data":"471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf"} Oct 11 10:55:37.600918 master-0 kubenswrapper[4790]: I1011 10:55:37.600773 4790 scope.go:117] "RemoveContainer" containerID="3509395ffa91a1e45916358a3e4f7592cd6af5d4e7f1b22db61be9000daad5af" Oct 11 10:55:37.600918 master-0 kubenswrapper[4790]: I1011 10:55:37.600818 4790 scope.go:117] "RemoveContainer" containerID="471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf" Oct 11 10:55:37.601778 master-0 kubenswrapper[4790]: E1011 10:55:37.601059 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi 
pod=heat-cfnapi-89f9b4488-8vvt9_openstack(90ca8fc6-bc53-461b-8384-ca8344e8abb1)\"" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" Oct 11 10:55:37.604035 master-0 kubenswrapper[4790]: I1011 10:55:37.603964 4790 generic.go:334] "Generic (PLEG): container finished" podID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerID="7210d87c28a292af798e5994b8f7c1185cbe0c9dd8ab3744872cfdcf6e01c602" exitCode=0 Oct 11 10:55:37.604112 master-0 kubenswrapper[4790]: I1011 10:55:37.604046 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" event={"ID":"6ff24705-c685-47d9-ad1b-9ec04c541bf7","Type":"ContainerDied","Data":"7210d87c28a292af798e5994b8f7c1185cbe0c9dd8ab3744872cfdcf6e01c602"} Oct 11 10:55:37.660587 master-0 kubenswrapper[4790]: I1011 10:55:37.660495 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-volume-lvm-iscsi-0"] Oct 11 10:55:37.703618 master-0 kubenswrapper[4790]: I1011 10:55:37.703559 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom\") pod \"c41b60ff-457d-4c45-8b56-4523c5c0097f\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " Oct 11 10:55:37.703738 master-0 kubenswrapper[4790]: I1011 10:55:37.703695 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle\") pod \"c41b60ff-457d-4c45-8b56-4523c5c0097f\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " Oct 11 10:55:37.703834 master-0 kubenswrapper[4790]: I1011 10:55:37.703808 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb8r8\" (UniqueName: \"kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8\") pod 
\"c41b60ff-457d-4c45-8b56-4523c5c0097f\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " Oct 11 10:55:37.703874 master-0 kubenswrapper[4790]: I1011 10:55:37.703865 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data\") pod \"c41b60ff-457d-4c45-8b56-4523c5c0097f\" (UID: \"c41b60ff-457d-4c45-8b56-4523c5c0097f\") " Oct 11 10:55:37.711919 master-0 kubenswrapper[4790]: I1011 10:55:37.711483 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8" (OuterVolumeSpecName: "kube-api-access-bb8r8") pod "c41b60ff-457d-4c45-8b56-4523c5c0097f" (UID: "c41b60ff-457d-4c45-8b56-4523c5c0097f"). InnerVolumeSpecName "kube-api-access-bb8r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:37.713449 master-0 kubenswrapper[4790]: I1011 10:55:37.713391 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c41b60ff-457d-4c45-8b56-4523c5c0097f" (UID: "c41b60ff-457d-4c45-8b56-4523c5c0097f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:37.742900 master-0 kubenswrapper[4790]: I1011 10:55:37.742700 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41b60ff-457d-4c45-8b56-4523c5c0097f" (UID: "c41b60ff-457d-4c45-8b56-4523c5c0097f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:37.755398 master-0 kubenswrapper[4790]: I1011 10:55:37.755337 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data" (OuterVolumeSpecName: "config-data") pod "c41b60ff-457d-4c45-8b56-4523c5c0097f" (UID: "c41b60ff-457d-4c45-8b56-4523c5c0097f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:37.806205 master-0 kubenswrapper[4790]: I1011 10:55:37.806161 4790 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data-custom\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:37.806205 master-0 kubenswrapper[4790]: I1011 10:55:37.806199 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:37.806205 master-0 kubenswrapper[4790]: I1011 10:55:37.806210 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb8r8\" (UniqueName: \"kubernetes.io/projected/c41b60ff-457d-4c45-8b56-4523c5c0097f-kube-api-access-bb8r8\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:37.806485 master-0 kubenswrapper[4790]: I1011 10:55:37.806220 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b60ff-457d-4c45-8b56-4523c5c0097f-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:38.068402 master-1 kubenswrapper[4771]: I1011 10:55:38.068206 4771 generic.go:334] "Generic (PLEG): container finished" podID="e0657ee5-2e60-4a96-905e-814f46a72970" containerID="a5ba999f8e1551f739b0074644873dac43d11fc22bc8e7bb8107aced2b4ca581" exitCode=1 Oct 11 10:55:38.068402 master-1 kubenswrapper[4771]: I1011 10:55:38.068274 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d647f9c47-x7xc2" event={"ID":"e0657ee5-2e60-4a96-905e-814f46a72970","Type":"ContainerDied","Data":"a5ba999f8e1551f739b0074644873dac43d11fc22bc8e7bb8107aced2b4ca581"} Oct 11 10:55:38.069459 master-1 kubenswrapper[4771]: I1011 10:55:38.069286 4771 scope.go:117] "RemoveContainer" containerID="a5ba999f8e1551f739b0074644873dac43d11fc22bc8e7bb8107aced2b4ca581" Oct 11 10:55:38.072735 master-1 kubenswrapper[4771]: I1011 10:55:38.072629 4771 generic.go:334] "Generic (PLEG): container finished" podID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerID="ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104" exitCode=0 Oct 11 10:55:38.072957 master-1 kubenswrapper[4771]: I1011 10:55:38.072732 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerDied","Data":"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104"} Oct 11 10:55:38.076961 master-1 kubenswrapper[4771]: I1011 10:55:38.076903 4771 generic.go:334] "Generic (PLEG): container finished" podID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerID="e046736cf54a6f375a2d21055bc37323ff6d218a499c8b0059aa035f5e4d1a0c" exitCode=0 Oct 11 10:55:38.077055 master-1 kubenswrapper[4771]: I1011 10:55:38.077013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerDied","Data":"e046736cf54a6f375a2d21055bc37323ff6d218a499c8b0059aa035f5e4d1a0c"} Oct 11 10:55:38.314996 master-0 kubenswrapper[4790]: I1011 10:55:38.314791 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d68c10-8e1c-4a92-86f6-e2925df0f714" path="/var/lib/kubelet/pods/60d68c10-8e1c-4a92-86f6-e2925df0f714/volumes" Oct 11 10:55:38.619287 master-0 kubenswrapper[4790]: I1011 10:55:38.619072 4790 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" event={"ID":"6ff24705-c685-47d9-ad1b-9ec04c541bf7","Type":"ContainerStarted","Data":"585d3075d62338a2a9a42f35ce1fe0086b7e13eaad601eec48eba4bae373241a"} Oct 11 10:55:38.620759 master-0 kubenswrapper[4790]: I1011 10:55:38.620681 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" Oct 11 10:55:38.625377 master-0 kubenswrapper[4790]: I1011 10:55:38.625327 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"24482e3e-ba4c-4920-90d4-077df9a7b329","Type":"ContainerStarted","Data":"4bbf3b03ee3ac7a6c2a580b58d789d9524ecea2f239e33911feb5ef644b2631e"} Oct 11 10:55:38.625377 master-0 kubenswrapper[4790]: I1011 10:55:38.625366 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"24482e3e-ba4c-4920-90d4-077df9a7b329","Type":"ContainerStarted","Data":"0413f5f4f9ae719fc0d00825e1ce60e01e59a9f8594c84b36f853ea751b147e2"} Oct 11 10:55:38.627822 master-0 kubenswrapper[4790]: I1011 10:55:38.627781 4790 generic.go:334] "Generic (PLEG): container finished" podID="e37e1fe6-6e89-4407-a40f-cf494a35eccd" containerID="ef89d0976a05facc749dfabb5416524787541999145463ce1f713dd9a9f315fb" exitCode=0 Oct 11 10:55:38.627962 master-0 kubenswrapper[4790]: I1011 10:55:38.627830 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a39-account-create-clqqg" event={"ID":"e37e1fe6-6e89-4407-a40f-cf494a35eccd","Type":"ContainerDied","Data":"ef89d0976a05facc749dfabb5416524787541999145463ce1f713dd9a9f315fb"} Oct 11 10:55:38.629590 master-0 kubenswrapper[4790]: I1011 10:55:38.629555 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5f5c98dcd5-c8xhk" Oct 11 10:55:38.632778 master-0 kubenswrapper[4790]: I1011 10:55:38.632403 4790 scope.go:117] "RemoveContainer" containerID="471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf" Oct 11 10:55:38.633826 master-0 kubenswrapper[4790]: E1011 10:55:38.633001 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-89f9b4488-8vvt9_openstack(90ca8fc6-bc53-461b-8384-ca8344e8abb1)\"" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" Oct 11 10:55:38.658566 master-0 kubenswrapper[4790]: I1011 10:55:38.658467 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" podStartSLOduration=7.658441317 podStartE2EDuration="7.658441317s" podCreationTimestamp="2025-10-11 10:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:38.651774644 +0000 UTC m=+1015.206234976" watchObservedRunningTime="2025-10-11 10:55:38.658441317 +0000 UTC m=+1015.212901609" Oct 11 10:55:38.678478 master-0 kubenswrapper[4790]: I1011 10:55:38.678399 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:38.692725 master-0 kubenswrapper[4790]: I1011 10:55:38.692654 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5f5c98dcd5-c8xhk"] Oct 11 10:55:38.706274 master-2 kubenswrapper[4776]: I1011 10:55:38.706215 4776 generic.go:334] "Generic (PLEG): container finished" podID="7e99b787-4e9b-4285-b175-63008b7e39de" containerID="d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" exitCode=137 Oct 11 10:55:38.706274 master-2 kubenswrapper[4776]: I1011 10:55:38.706266 4776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89"} Oct 11 10:55:38.813693 master-2 kubenswrapper[4776]: I1011 10:55:38.813139 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:38.917865 master-1 kubenswrapper[4771]: I1011 10:55:38.917229 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:39.091128 master-1 kubenswrapper[4771]: I1011 10:55:39.090884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerStarted","Data":"f4caf5c874767c7fd75c2e84ff37e1b5c988f50fb776ae2062994f2e951ecc23"} Oct 11 10:55:39.095895 master-1 kubenswrapper[4771]: I1011 10:55:39.095548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerStarted","Data":"7f100b006260b4ff812a662ca4646172d63077e3423c5b53974c9a4fc93bb108"} Oct 11 10:55:39.096093 master-1 kubenswrapper[4771]: I1011 10:55:39.095970 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 10:55:39.096093 master-1 kubenswrapper[4771]: I1011 10:55:39.096020 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="proxy-httpd" containerID="cri-o://7f100b006260b4ff812a662ca4646172d63077e3423c5b53974c9a4fc93bb108" gracePeriod=30 Oct 11 10:55:39.096434 master-1 kubenswrapper[4771]: I1011 10:55:39.095967 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" 
containerName="ceilometer-notification-agent" containerID="cri-o://1d0d93b3fc6393dcdc851e8c3921d7c5d5a44cf9e99d331f9e66f61b3c48f59d" gracePeriod=30 Oct 11 10:55:39.096434 master-1 kubenswrapper[4771]: I1011 10:55:39.095979 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="sg-core" containerID="cri-o://10573443fa9f81c261e267c2d4f01ad7d7cf7482785a8f4f22c2ccd3fa1fc631" gracePeriod=30 Oct 11 10:55:39.096434 master-1 kubenswrapper[4771]: I1011 10:55:39.095947 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-central-agent" containerID="cri-o://33e1159e64df7103066e5f7850051b2adc3d09e823478d0dc1137ddef2aee326" gracePeriod=30 Oct 11 10:55:39.100007 master-1 kubenswrapper[4771]: I1011 10:55:39.098777 4771 generic.go:334] "Generic (PLEG): container finished" podID="e0657ee5-2e60-4a96-905e-814f46a72970" containerID="359e13273e98466e823fa5c4d2aba3d9afd810ad691524f34525677325371beb" exitCode=1 Oct 11 10:55:39.100007 master-1 kubenswrapper[4771]: I1011 10:55:39.098853 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d647f9c47-x7xc2" event={"ID":"e0657ee5-2e60-4a96-905e-814f46a72970","Type":"ContainerDied","Data":"359e13273e98466e823fa5c4d2aba3d9afd810ad691524f34525677325371beb"} Oct 11 10:55:39.100007 master-1 kubenswrapper[4771]: I1011 10:55:39.098919 4771 scope.go:117] "RemoveContainer" containerID="a5ba999f8e1551f739b0074644873dac43d11fc22bc8e7bb8107aced2b4ca581" Oct 11 10:55:39.102479 master-1 kubenswrapper[4771]: I1011 10:55:39.102416 4771 scope.go:117] "RemoveContainer" containerID="359e13273e98466e823fa5c4d2aba3d9afd810ad691524f34525677325371beb" Oct 11 10:55:39.103968 master-1 kubenswrapper[4771]: I1011 10:55:39.103881 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerID="47d878804e10bb973f86c4fb2c2dafde704a472be7b960d95d71befafe9306e4" exitCode=1 Oct 11 10:55:39.103968 master-1 kubenswrapper[4771]: I1011 10:55:39.103929 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerDied","Data":"47d878804e10bb973f86c4fb2c2dafde704a472be7b960d95d71befafe9306e4"} Oct 11 10:55:39.103968 master-1 kubenswrapper[4771]: I1011 10:55:39.103952 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerStarted","Data":"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e"} Oct 11 10:55:39.105137 master-1 kubenswrapper[4771]: E1011 10:55:39.105067 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-d647f9c47-x7xc2_openstack(e0657ee5-2e60-4a96-905e-814f46a72970)\"" pod="openstack/heat-api-d647f9c47-x7xc2" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" Oct 11 10:55:39.105208 master-1 kubenswrapper[4771]: I1011 10:55:39.105128 4771 scope.go:117] "RemoveContainer" containerID="47d878804e10bb973f86c4fb2c2dafde704a472be7b960d95d71befafe9306e4" Oct 11 10:55:39.128658 master-2 kubenswrapper[4776]: I1011 10:55:39.128587 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.187341 master-1 kubenswrapper[4771]: I1011 10:55:39.186918 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.900443888 podStartE2EDuration="16.186896627s" podCreationTimestamp="2025-10-11 10:55:23 +0000 UTC" firstStartedPulling="2025-10-11 10:55:24.729574292 +0000 UTC m=+1756.703800733" lastFinishedPulling="2025-10-11 10:55:38.016027001 +0000 UTC m=+1769.990253472" observedRunningTime="2025-10-11 10:55:39.183158868 +0000 UTC m=+1771.157385319" watchObservedRunningTime="2025-10-11 10:55:39.186896627 +0000 UTC m=+1771.161123078" Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227315 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227449 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227992 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228032 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228145 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228259 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.233370 master-2 kubenswrapper[4776]: I1011 10:55:39.233269 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:39.234078 master-2 kubenswrapper[4776]: I1011 10:55:39.233998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs" (OuterVolumeSpecName: "logs") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:39.236150 master-2 kubenswrapper[4776]: I1011 10:55:39.236078 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.236228 master-2 kubenswrapper[4776]: I1011 10:55:39.236178 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts" (OuterVolumeSpecName: "scripts") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.237234 master-2 kubenswrapper[4776]: I1011 10:55:39.237201 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt" (OuterVolumeSpecName: "kube-api-access-cqgnt") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "kube-api-access-cqgnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:39.269705 master-2 kubenswrapper[4776]: I1011 10:55:39.269282 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.283475 master-2 kubenswrapper[4776]: I1011 10:55:39.283400 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data" (OuterVolumeSpecName: "config-data") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334308 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334356 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334368 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334379 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334392 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334406 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334417 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.432904 master-2 kubenswrapper[4776]: I1011 10:55:39.432725 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:39.433156 master-2 kubenswrapper[4776]: I1011 10:55:39.433028 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-2" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log" containerID="cri-o://2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" gracePeriod=30 Oct 11 10:55:39.433234 master-2 kubenswrapper[4776]: I1011 10:55:39.433177 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-2" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd" containerID="cri-o://5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" gracePeriod=30 Oct 11 10:55:39.717917 master-2 kubenswrapper[4776]: I1011 
10:55:39.717766 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.720714 master-2 kubenswrapper[4776]: I1011 10:55:39.717766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c"} Oct 11 10:55:39.720714 master-2 kubenswrapper[4776]: I1011 10:55:39.719967 4776 scope.go:117] "RemoveContainer" containerID="d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" Oct 11 10:55:39.729392 master-2 kubenswrapper[4776]: I1011 10:55:39.729349 4776 generic.go:334] "Generic (PLEG): container finished" podID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerID="2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" exitCode=143 Oct 11 10:55:39.729559 master-2 kubenswrapper[4776]: I1011 10:55:39.729399 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28"} Oct 11 10:55:39.744320 master-2 kubenswrapper[4776]: I1011 10:55:39.744288 4776 scope.go:117] "RemoveContainer" containerID="0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" Oct 11 10:55:39.760822 master-2 kubenswrapper[4776]: I1011 10:55:39.760746 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.770038 master-2 kubenswrapper[4776]: I1011 10:55:39.769954 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.810364 master-2 kubenswrapper[4776]: I1011 10:55:39.810297 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.811366 master-2 kubenswrapper[4776]: E1011 10:55:39.811323 
4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811366 master-2 kubenswrapper[4776]: I1011 10:55:39.811358 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811500 master-2 kubenswrapper[4776]: E1011 10:55:39.811389 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.811500 master-2 kubenswrapper[4776]: I1011 10:55:39.811398 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.811660 master-2 kubenswrapper[4776]: I1011 10:55:39.811604 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811748 master-2 kubenswrapper[4776]: I1011 10:55:39.811671 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.814187 master-2 kubenswrapper[4776]: I1011 10:55:39.813730 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.817775 master-2 kubenswrapper[4776]: I1011 10:55:39.817733 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 11 10:55:39.818036 master-2 kubenswrapper[4776]: I1011 10:55:39.817780 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data" Oct 11 10:55:39.818775 master-2 kubenswrapper[4776]: I1011 10:55:39.818745 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 11 10:55:39.821271 master-2 kubenswrapper[4776]: I1011 10:55:39.821233 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.949089 master-2 kubenswrapper[4776]: I1011 10:55:39.949027 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949153 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 
kubenswrapper[4776]: I1011 10:55:39.949182 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949296 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949320 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949594 master-2 kubenswrapper[4776]: I1011 10:55:39.949354 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949594 master-2 
kubenswrapper[4776]: I1011 10:55:39.949415 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051502 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051542 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051560 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051641 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.052104 master-2 kubenswrapper[4776]: I1011 10:55:40.051894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.052186 master-2 kubenswrapper[4776]: I1011 10:55:40.052144 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055049 master-2 kubenswrapper[4776]: I1011 10:55:40.055007 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055303 master-2 kubenswrapper[4776]: I1011 10:55:40.055247 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055549 master-2 kubenswrapper[4776]: I1011 10:55:40.055508 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055618 master-2 kubenswrapper[4776]: I1011 10:55:40.055599 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: 
\"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055846 master-2 kubenswrapper[4776]: I1011 10:55:40.055806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.056397 master-2 kubenswrapper[4776]: I1011 10:55:40.056369 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.073113 master-2 kubenswrapper[4776]: I1011 10:55:40.072970 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" path="/var/lib/kubelet/pods/7e99b787-4e9b-4285-b175-63008b7e39de/volumes" Oct 11 10:55:40.075686 master-2 kubenswrapper[4776]: I1011 10:55:40.075585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.117039 master-1 kubenswrapper[4771]: I1011 10:55:40.116984 4771 generic.go:334] "Generic (PLEG): container finished" podID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerID="f4caf5c874767c7fd75c2e84ff37e1b5c988f50fb776ae2062994f2e951ecc23" exitCode=0 Oct 11 10:55:40.117930 master-1 kubenswrapper[4771]: I1011 10:55:40.117060 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" 
event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerDied","Data":"f4caf5c874767c7fd75c2e84ff37e1b5c988f50fb776ae2062994f2e951ecc23"} Oct 11 10:55:40.121170 master-1 kubenswrapper[4771]: I1011 10:55:40.121113 4771 generic.go:334] "Generic (PLEG): container finished" podID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerID="7f100b006260b4ff812a662ca4646172d63077e3423c5b53974c9a4fc93bb108" exitCode=0 Oct 11 10:55:40.121170 master-1 kubenswrapper[4771]: I1011 10:55:40.121158 4771 generic.go:334] "Generic (PLEG): container finished" podID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerID="10573443fa9f81c261e267c2d4f01ad7d7cf7482785a8f4f22c2ccd3fa1fc631" exitCode=2 Oct 11 10:55:40.121170 master-1 kubenswrapper[4771]: I1011 10:55:40.121168 4771 generic.go:334] "Generic (PLEG): container finished" podID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerID="33e1159e64df7103066e5f7850051b2adc3d09e823478d0dc1137ddef2aee326" exitCode=0 Oct 11 10:55:40.121428 master-1 kubenswrapper[4771]: I1011 10:55:40.121216 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerDied","Data":"7f100b006260b4ff812a662ca4646172d63077e3423c5b53974c9a4fc93bb108"} Oct 11 10:55:40.121428 master-1 kubenswrapper[4771]: I1011 10:55:40.121309 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerDied","Data":"10573443fa9f81c261e267c2d4f01ad7d7cf7482785a8f4f22c2ccd3fa1fc631"} Oct 11 10:55:40.121428 master-1 kubenswrapper[4771]: I1011 10:55:40.121334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerDied","Data":"33e1159e64df7103066e5f7850051b2adc3d09e823478d0dc1137ddef2aee326"} Oct 11 10:55:40.123436 master-1 kubenswrapper[4771]: I1011 10:55:40.123376 4771 generic.go:334] "Generic (PLEG): container 
finished" podID="c6af8eba-f8bf-47f6-8313-7a902aeb170f" containerID="18f55509d99d6df6e062c9f98f3f97b0989b5f829acb0a772e9a836bc344b833" exitCode=1 Oct 11 10:55:40.123579 master-1 kubenswrapper[4771]: I1011 10:55:40.123474 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerDied","Data":"18f55509d99d6df6e062c9f98f3f97b0989b5f829acb0a772e9a836bc344b833"} Oct 11 10:55:40.123899 master-1 kubenswrapper[4771]: I1011 10:55:40.123856 4771 scope.go:117] "RemoveContainer" containerID="18f55509d99d6df6e062c9f98f3f97b0989b5f829acb0a772e9a836bc344b833" Oct 11 10:55:40.131602 master-2 kubenswrapper[4776]: I1011 10:55:40.131538 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.137902 master-1 kubenswrapper[4771]: I1011 10:55:40.137836 4771 generic.go:334] "Generic (PLEG): container finished" podID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" exitCode=1 Oct 11 10:55:40.137902 master-1 kubenswrapper[4771]: I1011 10:55:40.137890 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerDied","Data":"d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb"} Oct 11 10:55:40.138171 master-1 kubenswrapper[4771]: I1011 10:55:40.137933 4771 scope.go:117] "RemoveContainer" containerID="47d878804e10bb973f86c4fb2c2dafde704a472be7b960d95d71befafe9306e4" Oct 11 10:55:40.138947 master-1 kubenswrapper[4771]: I1011 10:55:40.138919 4771 scope.go:117] "RemoveContainer" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:40.139229 master-1 kubenswrapper[4771]: E1011 10:55:40.139201 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7cddc977f5-9ddgm_openstack(879970ca-6312-4aec-b8f4-a8a41a0e3797)\"" pod="openstack/ironic-7cddc977f5-9ddgm" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" Oct 11 10:55:40.277338 master-1 kubenswrapper[4771]: I1011 10:55:40.277294 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:40.280528 master-0 kubenswrapper[4790]: I1011 10:55:40.280314 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:55:40.281810 master-0 kubenswrapper[4790]: I1011 10:55:40.281677 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-2" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-log" containerID="cri-o://373e10b5f3797c10bf2f289215e447ba6ede87d322784f4e67d64c872023c089" gracePeriod=30 Oct 11 10:55:40.282132 master-0 kubenswrapper[4790]: I1011 10:55:40.282076 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-2" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-httpd" containerID="cri-o://c15b47f90fa6f6a74e4ba2cf37c1852384ef4ec1575e5ae98a981dfe2894d2d7" gracePeriod=30 Oct 11 10:55:40.317403 master-0 kubenswrapper[4790]: I1011 10:55:40.317334 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41b60ff-457d-4c45-8b56-4523c5c0097f" path="/var/lib/kubelet/pods/c41b60ff-457d-4c45-8b56-4523c5c0097f/volumes" Oct 11 10:55:40.480188 master-0 kubenswrapper[4790]: I1011 10:55:40.478054 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:40.539122 master-1 kubenswrapper[4771]: I1011 10:55:40.539053 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:40.568191 master-0 kubenswrapper[4790]: I1011 10:55:40.567403 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptnz2\" (UniqueName: \"kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2\") pod \"e37e1fe6-6e89-4407-a40f-cf494a35eccd\" (UID: \"e37e1fe6-6e89-4407-a40f-cf494a35eccd\") " Oct 11 10:55:40.585984 master-1 kubenswrapper[4771]: I1011 10:55:40.585905 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:40.586402 master-1 kubenswrapper[4771]: I1011 10:55:40.586019 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:40.592223 master-0 kubenswrapper[4790]: I1011 10:55:40.592118 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2" (OuterVolumeSpecName: "kube-api-access-ptnz2") pod "e37e1fe6-6e89-4407-a40f-cf494a35eccd" (UID: "e37e1fe6-6e89-4407-a40f-cf494a35eccd"). InnerVolumeSpecName "kube-api-access-ptnz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:40.595078 master-1 kubenswrapper[4771]: I1011 10:55:40.595020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle\") pod \"e0657ee5-2e60-4a96-905e-814f46a72970\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " Oct 11 10:55:40.595266 master-1 kubenswrapper[4771]: I1011 10:55:40.595167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom\") pod \"e0657ee5-2e60-4a96-905e-814f46a72970\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " Oct 11 10:55:40.595266 master-1 kubenswrapper[4771]: I1011 10:55:40.595225 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data\") pod \"e0657ee5-2e60-4a96-905e-814f46a72970\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " Oct 11 10:55:40.595437 master-1 kubenswrapper[4771]: I1011 10:55:40.595384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z885b\" (UniqueName: \"kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b\") pod \"e0657ee5-2e60-4a96-905e-814f46a72970\" (UID: \"e0657ee5-2e60-4a96-905e-814f46a72970\") " Oct 11 10:55:40.611252 master-1 kubenswrapper[4771]: I1011 10:55:40.603574 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e0657ee5-2e60-4a96-905e-814f46a72970" (UID: "e0657ee5-2e60-4a96-905e-814f46a72970"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:40.611252 master-1 kubenswrapper[4771]: I1011 10:55:40.603574 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b" (OuterVolumeSpecName: "kube-api-access-z885b") pod "e0657ee5-2e60-4a96-905e-814f46a72970" (UID: "e0657ee5-2e60-4a96-905e-814f46a72970"). InnerVolumeSpecName "kube-api-access-z885b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:40.620132 master-1 kubenswrapper[4771]: I1011 10:55:40.620075 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0657ee5-2e60-4a96-905e-814f46a72970" (UID: "e0657ee5-2e60-4a96-905e-814f46a72970"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:40.649478 master-1 kubenswrapper[4771]: I1011 10:55:40.648699 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data" (OuterVolumeSpecName: "config-data") pod "e0657ee5-2e60-4a96-905e-814f46a72970" (UID: "e0657ee5-2e60-4a96-905e-814f46a72970"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:40.654829 master-0 kubenswrapper[4790]: I1011 10:55:40.654763 4790 generic.go:334] "Generic (PLEG): container finished" podID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerID="373e10b5f3797c10bf2f289215e447ba6ede87d322784f4e67d64c872023c089" exitCode=143 Oct 11 10:55:40.655384 master-0 kubenswrapper[4790]: I1011 10:55:40.654848 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerDied","Data":"373e10b5f3797c10bf2f289215e447ba6ede87d322784f4e67d64c872023c089"} Oct 11 10:55:40.656951 master-2 kubenswrapper[4776]: I1011 10:55:40.656885 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:40.657190 master-0 kubenswrapper[4790]: I1011 10:55:40.657158 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a39-account-create-clqqg" Oct 11 10:55:40.665490 master-0 kubenswrapper[4790]: I1011 10:55:40.665448 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a39-account-create-clqqg" event={"ID":"e37e1fe6-6e89-4407-a40f-cf494a35eccd","Type":"ContainerDied","Data":"5bb2b37cb0135387058a9a5780b75d200e3d6755f1f8d0b825a04e68757b6f14"} Oct 11 10:55:40.665490 master-0 kubenswrapper[4790]: I1011 10:55:40.665474 4790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bb2b37cb0135387058a9a5780b75d200e3d6755f1f8d0b825a04e68757b6f14" Oct 11 10:55:40.678845 master-0 kubenswrapper[4790]: I1011 10:55:40.678186 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptnz2\" (UniqueName: \"kubernetes.io/projected/e37e1fe6-6e89-4407-a40f-cf494a35eccd-kube-api-access-ptnz2\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:40.698071 master-1 kubenswrapper[4771]: I1011 10:55:40.698006 4771 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-z885b\" (UniqueName: \"kubernetes.io/projected/e0657ee5-2e60-4a96-905e-814f46a72970-kube-api-access-z885b\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:40.698071 master-1 kubenswrapper[4771]: I1011 10:55:40.698064 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:40.698262 master-1 kubenswrapper[4771]: I1011 10:55:40.698082 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:40.698262 master-1 kubenswrapper[4771]: I1011 10:55:40.698104 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0657ee5-2e60-4a96-905e-814f46a72970-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:40.746184 master-2 kubenswrapper[4776]: I1011 10:55:40.746130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"1339fa35aee16501eb487fbb01f91540a50ece578bb3e75a1be09cc263132c60"} Oct 11 10:55:40.772863 master-0 kubenswrapper[4790]: I1011 10:55:40.772127 4790 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:40.772863 master-0 kubenswrapper[4790]: I1011 10:55:40.772192 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:40.774475 master-0 kubenswrapper[4790]: I1011 10:55:40.773496 4790 scope.go:117] "RemoveContainer" containerID="471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf" Oct 11 10:55:40.774745 master-0 kubenswrapper[4790]: E1011 
10:55:40.774660 4790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-89f9b4488-8vvt9_openstack(90ca8fc6-bc53-461b-8384-ca8344e8abb1)\"" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" Oct 11 10:55:41.152248 master-1 kubenswrapper[4771]: I1011 10:55:41.152043 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerStarted","Data":"d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204"} Oct 11 10:55:41.155345 master-1 kubenswrapper[4771]: I1011 10:55:41.155270 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d647f9c47-x7xc2" event={"ID":"e0657ee5-2e60-4a96-905e-814f46a72970","Type":"ContainerDied","Data":"4d5653f9de27a6b9172b5e020cb9d04e796130d0328e34ab12c5dd9a66c1452e"} Oct 11 10:55:41.155528 master-1 kubenswrapper[4771]: I1011 10:55:41.155375 4771 scope.go:117] "RemoveContainer" containerID="359e13273e98466e823fa5c4d2aba3d9afd810ad691524f34525677325371beb" Oct 11 10:55:41.155608 master-1 kubenswrapper[4771]: I1011 10:55:41.155529 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-d647f9c47-x7xc2" Oct 11 10:55:41.163397 master-1 kubenswrapper[4771]: I1011 10:55:41.163320 4771 scope.go:117] "RemoveContainer" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:41.163750 master-1 kubenswrapper[4771]: E1011 10:55:41.163707 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7cddc977f5-9ddgm_openstack(879970ca-6312-4aec-b8f4-a8a41a0e3797)\"" pod="openstack/ironic-7cddc977f5-9ddgm" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" Oct 11 10:55:41.168366 master-1 kubenswrapper[4771]: I1011 10:55:41.168316 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerStarted","Data":"00b0738c103a1f8936204d6f12df3d6fd67321868af021b54204d55c141f77ca"} Oct 11 10:55:41.216412 master-1 kubenswrapper[4771]: I1011 10:55:41.216130 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vzsw" podStartSLOduration=10.745220976 podStartE2EDuration="13.216107324s" podCreationTimestamp="2025-10-11 10:55:28 +0000 UTC" firstStartedPulling="2025-10-11 10:55:38.079524743 +0000 UTC m=+1770.053751224" lastFinishedPulling="2025-10-11 10:55:40.550411131 +0000 UTC m=+1772.524637572" observedRunningTime="2025-10-11 10:55:41.214205318 +0000 UTC m=+1773.188431769" watchObservedRunningTime="2025-10-11 10:55:41.216107324 +0000 UTC m=+1773.190333765" Oct 11 10:55:41.229194 master-1 kubenswrapper[4771]: I1011 10:55:41.229027 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-748bbfcf89-vpkvr" Oct 11 10:55:41.311379 master-1 kubenswrapper[4771]: I1011 10:55:41.311281 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:41.326710 master-1 kubenswrapper[4771]: I1011 10:55:41.326257 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-d647f9c47-x7xc2"] Oct 11 10:55:41.350550 master-1 kubenswrapper[4771]: I1011 10:55:41.350492 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"] Oct 11 10:55:41.350876 master-1 kubenswrapper[4771]: I1011 10:55:41.350837 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-stzg5" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-api" containerID="cri-o://b94dfe1997cbb3d378d19012a9b6401bc1cef35489c7ea7be575908bfe56b3a0" gracePeriod=30 Oct 11 10:55:41.351094 master-1 kubenswrapper[4771]: I1011 10:55:41.351043 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-stzg5" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-httpd" containerID="cri-o://99502f3eb6699cc67bcf11374ee8446bc01a1a157ce8024301c91ebed596f3f2" gracePeriod=30 Oct 11 10:55:41.418070 master-2 kubenswrapper[4776]: I1011 10:55:41.417888 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:41.419806 master-2 kubenswrapper[4776]: I1011 10:55:41.419763 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.427996 master-2 kubenswrapper[4776]: I1011 10:55:41.427945 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 11 10:55:41.428195 master-2 kubenswrapper[4776]: I1011 10:55:41.428098 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 11 10:55:41.449467 master-2 kubenswrapper[4776]: I1011 10:55:41.449420 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:41.483169 master-2 kubenswrapper[4776]: I1011 10:55:41.483096 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483189 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483261 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483309 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483368 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.484093 master-2 kubenswrapper[4776]: I1011 10:55:41.484057 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585738 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.589738 master-2 kubenswrapper[4776]: I1011 10:55:41.589490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.590130 master-2 kubenswrapper[4776]: I1011 10:55:41.589762 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.592798 master-2 kubenswrapper[4776]: I1011 10:55:41.592766 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.592923 master-2 kubenswrapper[4776]: I1011 10:55:41.592875 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.593005 master-2 kubenswrapper[4776]: I1011 10:55:41.592975 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.593103 master-2 kubenswrapper[4776]: I1011 10:55:41.593038 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.596455 master-2 kubenswrapper[4776]: I1011 10:55:41.596419 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.598118 master-2 kubenswrapper[4776]: I1011 10:55:41.598088 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.603373 master-2 kubenswrapper[4776]: I1011 10:55:41.603062 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.607947 master-2 kubenswrapper[4776]: I1011 10:55:41.607804 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.639101 master-2 kubenswrapper[4776]: I1011 10:55:41.632926 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.671923 master-0 kubenswrapper[4790]: I1011 10:55:41.670498 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-d55d46749-qq6mv" event={"ID":"e4070c53-33f0-488e-80d1-f374f59c96cd","Type":"ContainerStarted","Data":"9ccb660b18f89ed1e9ba902f7cad76c821973aa301bd218f890f40e09498e3e1"} Oct 11 10:55:41.673914 master-0 kubenswrapper[4790]: I1011 10:55:41.673184 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" event={"ID":"24482e3e-ba4c-4920-90d4-077df9a7b329","Type":"ContainerStarted","Data":"a82b900b3e700703b6be815ee2a727d550189498d95c2ec3c75393c953c8afe0"} Oct 11 10:55:41.731303 master-0 kubenswrapper[4790]: I1011 10:55:41.731218 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" podStartSLOduration=5.7311921869999995 podStartE2EDuration="5.731192187s" podCreationTimestamp="2025-10-11 10:55:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:41.73022681 +0000 UTC m=+1018.284687102" watchObservedRunningTime="2025-10-11 10:55:41.731192187 +0000 UTC m=+1018.285652479" Oct 11 10:55:41.758500 master-2 kubenswrapper[4776]: I1011 10:55:41.758325 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"93db0b09e54cde1d1adab74a0f17b88c125781b5a4724d40de0780d77dfb38bd"} Oct 11 10:55:41.762322 master-2 kubenswrapper[4776]: I1011 10:55:41.762272 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:42.101729 master-0 kubenswrapper[4790]: I1011 10:55:42.101257 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" Oct 11 10:55:42.104605 master-0 kubenswrapper[4790]: I1011 10:55:42.103726 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-b5802-volume-lvm-iscsi-0" podUID="24482e3e-ba4c-4920-90d4-077df9a7b329" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.130.0.106:8080/\": dial tcp 10.130.0.106:8080: connect: connection refused" Oct 11 10:55:42.190122 master-1 kubenswrapper[4771]: I1011 10:55:42.190045 4771 generic.go:334] "Generic (PLEG): container finished" podID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerID="99502f3eb6699cc67bcf11374ee8446bc01a1a157ce8024301c91ebed596f3f2" exitCode=0 Oct 11 10:55:42.190693 master-1 kubenswrapper[4771]: I1011 10:55:42.190165 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerDied","Data":"99502f3eb6699cc67bcf11374ee8446bc01a1a157ce8024301c91ebed596f3f2"} Oct 11 10:55:42.193417 master-1 kubenswrapper[4771]: I1011 10:55:42.193342 4771 
scope.go:117] "RemoveContainer" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:42.193820 master-1 kubenswrapper[4771]: E1011 10:55:42.193773 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7cddc977f5-9ddgm_openstack(879970ca-6312-4aec-b8f4-a8a41a0e3797)\"" pod="openstack/ironic-7cddc977f5-9ddgm" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" Oct 11 10:55:42.195665 master-1 kubenswrapper[4771]: I1011 10:55:42.195607 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:42.452533 master-1 kubenswrapper[4771]: I1011 10:55:42.452283 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" path="/var/lib/kubelet/pods/e0657ee5-2e60-4a96-905e-814f46a72970/volumes" Oct 11 10:55:42.613343 master-2 kubenswrapper[4776]: I1011 10:55:42.612726 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:42.665631 master-2 kubenswrapper[4776]: W1011 10:55:42.665543 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e2da74b_d7e3_45c1_8c4b_e01415113c95.slice/crio-28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d WatchSource:0}: Error finding container 28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d: Status 404 returned error can't find the container with id 28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d Oct 11 10:55:42.695737 master-0 kubenswrapper[4790]: I1011 10:55:42.694549 4790 generic.go:334] "Generic (PLEG): container finished" podID="e4070c53-33f0-488e-80d1-f374f59c96cd" containerID="9ccb660b18f89ed1e9ba902f7cad76c821973aa301bd218f890f40e09498e3e1" exitCode=0 Oct 11 
10:55:42.697373 master-0 kubenswrapper[4790]: I1011 10:55:42.696576 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-d55d46749-qq6mv" event={"ID":"e4070c53-33f0-488e-80d1-f374f59c96cd","Type":"ContainerDied","Data":"9ccb660b18f89ed1e9ba902f7cad76c821973aa301bd218f890f40e09498e3e1"} Oct 11 10:55:42.697373 master-0 kubenswrapper[4790]: I1011 10:55:42.696617 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-d55d46749-qq6mv" event={"ID":"e4070c53-33f0-488e-80d1-f374f59c96cd","Type":"ContainerStarted","Data":"3962fee214b48290f3b4d7d88b18ff2a4ea8e104b622cbb5acbf312b3eaf73e0"} Oct 11 10:55:42.697373 master-0 kubenswrapper[4790]: I1011 10:55:42.696631 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-d55d46749-qq6mv" event={"ID":"e4070c53-33f0-488e-80d1-f374f59c96cd","Type":"ContainerStarted","Data":"379173187d53388fb3d2bdca3c2f022929893aa2034fbc1ca1133cd2d76c6fc5"} Oct 11 10:55:42.697373 master-0 kubenswrapper[4790]: I1011 10:55:42.696668 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-d55d46749-qq6mv" Oct 11 10:55:42.734743 master-0 kubenswrapper[4790]: I1011 10:55:42.733933 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-d55d46749-qq6mv" podStartSLOduration=3.280811494 podStartE2EDuration="14.733886778s" podCreationTimestamp="2025-10-11 10:55:28 +0000 UTC" firstStartedPulling="2025-10-11 10:55:29.602099608 +0000 UTC m=+1006.156559900" lastFinishedPulling="2025-10-11 10:55:41.055174892 +0000 UTC m=+1017.609635184" observedRunningTime="2025-10-11 10:55:42.730343941 +0000 UTC m=+1019.284804233" watchObservedRunningTime="2025-10-11 10:55:42.733886778 +0000 UTC m=+1019.288347080" Oct 11 10:55:42.755214 master-1 kubenswrapper[4771]: I1011 10:55:42.755111 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:55:42.808402 master-2 
kubenswrapper[4776]: I1011 10:55:42.808343 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"3f9f9b89aea4daee0c9203facbf87eef99f5f5e9ba646dcd06979bc6a462f337"} Oct 11 10:55:42.808402 master-2 kubenswrapper[4776]: I1011 10:55:42.808415 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:42.815707 master-2 kubenswrapper[4776]: I1011 10:55:42.815629 4776 generic.go:334] "Generic (PLEG): container finished" podID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerID="5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" exitCode=0 Oct 11 10:55:42.815904 master-2 kubenswrapper[4776]: I1011 10:55:42.815734 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf"} Oct 11 10:55:42.817218 master-2 kubenswrapper[4776]: I1011 10:55:42.817182 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d"} Oct 11 10:55:42.847520 master-2 kubenswrapper[4776]: I1011 10:55:42.847375 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-2" podStartSLOduration=3.847352848 podStartE2EDuration="3.847352848s" podCreationTimestamp="2025-10-11 10:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:42.838434466 +0000 UTC m=+1777.622861175" watchObservedRunningTime="2025-10-11 10:55:42.847352848 +0000 UTC m=+1777.631779557" Oct 11 10:55:43.217376 master-1 
kubenswrapper[4771]: I1011 10:55:43.217299 4771 generic.go:334] "Generic (PLEG): container finished" podID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerID="1d0d93b3fc6393dcdc851e8c3921d7c5d5a44cf9e99d331f9e66f61b3c48f59d" exitCode=0 Oct 11 10:55:43.218129 master-1 kubenswrapper[4771]: I1011 10:55:43.217493 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerDied","Data":"1d0d93b3fc6393dcdc851e8c3921d7c5d5a44cf9e99d331f9e66f61b3c48f59d"} Oct 11 10:55:43.218129 master-1 kubenswrapper[4771]: I1011 10:55:43.217558 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"736a75e7-8c74-4862-9ac6-3b4c2d0d721d","Type":"ContainerDied","Data":"5c043b242e5c8ff65d3ea42d92bba671ec7f6446a265531a8e4be33feddbe4fa"} Oct 11 10:55:43.218129 master-1 kubenswrapper[4771]: I1011 10:55:43.217571 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c043b242e5c8ff65d3ea42d92bba671ec7f6446a265531a8e4be33feddbe4fa" Oct 11 10:55:43.256400 master-1 kubenswrapper[4771]: I1011 10:55:43.256227 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:43.299094 master-1 kubenswrapper[4771]: I1011 10:55:43.299046 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:43.492826 master-1 kubenswrapper[4771]: I1011 10:55:43.492650 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.492826 master-1 kubenswrapper[4771]: I1011 10:55:43.492763 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.492826 master-1 kubenswrapper[4771]: I1011 10:55:43.492793 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.492826 master-1 kubenswrapper[4771]: I1011 10:55:43.492815 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.493471 master-1 kubenswrapper[4771]: I1011 10:55:43.492938 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.493471 master-1 kubenswrapper[4771]: I1011 10:55:43.493084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.493743 master-1 kubenswrapper[4771]: I1011 10:55:43.493708 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7f7t\" (UniqueName: \"kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t\") pod \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\" (UID: \"736a75e7-8c74-4862-9ac6-3b4c2d0d721d\") " Oct 11 10:55:43.493809 master-1 kubenswrapper[4771]: I1011 10:55:43.493094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.493893 master-1 kubenswrapper[4771]: I1011 10:55:43.493693 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.494139 master-1 kubenswrapper[4771]: I1011 10:55:43.494109 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.494139 master-1 kubenswrapper[4771]: I1011 10:55:43.494136 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.512896 master-1 kubenswrapper[4771]: I1011 10:55:43.512684 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t" (OuterVolumeSpecName: "kube-api-access-v7f7t") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "kube-api-access-v7f7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:43.513399 master-1 kubenswrapper[4771]: I1011 10:55:43.512953 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts" (OuterVolumeSpecName: "scripts") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.529183 master-1 kubenswrapper[4771]: I1011 10:55:43.528714 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.575018 master-1 kubenswrapper[4771]: I1011 10:55:43.574920 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.591403 master-1 kubenswrapper[4771]: I1011 10:55:43.591311 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data" (OuterVolumeSpecName: "config-data") pod "736a75e7-8c74-4862-9ac6-3b4c2d0d721d" (UID: "736a75e7-8c74-4862-9ac6-3b4c2d0d721d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.596426 master-1 kubenswrapper[4771]: I1011 10:55:43.596373 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.596426 master-1 kubenswrapper[4771]: I1011 10:55:43.596411 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7f7t\" (UniqueName: \"kubernetes.io/projected/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-kube-api-access-v7f7t\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.596426 master-1 kubenswrapper[4771]: I1011 10:55:43.596425 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.596598 master-1 kubenswrapper[4771]: I1011 10:55:43.596435 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.596598 master-1 kubenswrapper[4771]: I1011 10:55:43.596446 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736a75e7-8c74-4862-9ac6-3b4c2d0d721d-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:43.708888 master-0 kubenswrapper[4790]: I1011 10:55:43.706865 4790 generic.go:334] "Generic (PLEG): container finished" podID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerID="c15b47f90fa6f6a74e4ba2cf37c1852384ef4ec1575e5ae98a981dfe2894d2d7" exitCode=0 Oct 11 10:55:43.708888 master-0 kubenswrapper[4790]: I1011 10:55:43.706968 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerDied","Data":"c15b47f90fa6f6a74e4ba2cf37c1852384ef4ec1575e5ae98a981dfe2894d2d7"} Oct 11 10:55:43.797253 master-2 kubenswrapper[4776]: I1011 10:55:43.796860 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:43.832935 master-2 kubenswrapper[4776]: I1011 10:55:43.832849 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"5a48d5bbfd49d56d4f32777007bb97dc3fb7108ea65533008682389d26fd8acc"} Oct 11 10:55:43.833488 master-2 kubenswrapper[4776]: I1011 10:55:43.832965 4776 scope.go:117] "RemoveContainer" containerID="5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" Oct 11 10:55:43.833488 master-2 kubenswrapper[4776]: I1011 10:55:43.833154 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:43.839073 master-2 kubenswrapper[4776]: I1011 10:55:43.839038 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"d87c91a1d2ccc6e15d38a410528e6d89dce602fb8e82c0e98233a0939f6cd7c3"} Oct 11 10:55:43.839567 master-2 kubenswrapper[4776]: I1011 10:55:43.839550 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"ec47c17fd343b5bb68562630fde40a9c43e8e9ad6d725893f890bf8aace1f28c"} Oct 11 10:55:43.839644 master-2 kubenswrapper[4776]: I1011 10:55:43.839633 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:43.865825 master-2 kubenswrapper[4776]: I1011 10:55:43.865787 4776 scope.go:117] "RemoveContainer" containerID="2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873184 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873265 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873356 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873476 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873615 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873650 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873706 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873743 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 
master-2 kubenswrapper[4776]: I1011 10:55:43.876069 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.880907 master-2 kubenswrapper[4776]: I1011 10:55:43.880584 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs" (OuterVolumeSpecName: "logs") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.881339 master-2 kubenswrapper[4776]: I1011 10:55:43.881308 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s" (OuterVolumeSpecName: "kube-api-access-f984s") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "kube-api-access-f984s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:43.911766 master-2 kubenswrapper[4776]: I1011 10:55:43.907403 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007" (OuterVolumeSpecName: "glance") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "pvc-96ecbc97-5be5-45e4-8942-00605756b89a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:55:43.911766 master-2 kubenswrapper[4776]: I1011 10:55:43.907791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts" (OuterVolumeSpecName: "scripts") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.919480 master-2 kubenswrapper[4776]: I1011 10:55:43.919320 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.937705 master-2 kubenswrapper[4776]: I1011 10:55:43.933637 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-748bbfcf89-tr8n2" podStartSLOduration=2.933618899 podStartE2EDuration="2.933618899s" podCreationTimestamp="2025-10-11 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:43.929522837 +0000 UTC m=+1778.713949546" watchObservedRunningTime="2025-10-11 10:55:43.933618899 +0000 UTC m=+1778.718045608" Oct 11 10:55:43.951702 master-2 kubenswrapper[4776]: I1011 10:55:43.948995 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data" (OuterVolumeSpecName: "config-data") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.955697 master-2 kubenswrapper[4776]: I1011 10:55:43.955538 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.978143 master-2 kubenswrapper[4776]: I1011 10:55:43.978090 4776 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978150 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") on node \"master-2\" " Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978168 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978182 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978225 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: 
I1011 10:55:43.978237 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978253 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978265 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:44.004621 master-2 kubenswrapper[4776]: I1011 10:55:44.004563 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 11 10:55:44.004800 master-1 kubenswrapper[4771]: I1011 10:55:44.004735 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-64fcdf7d54-8r455" Oct 11 10:55:44.004869 master-2 kubenswrapper[4776]: I1011 10:55:44.004762 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-96ecbc97-5be5-45e4-8942-00605756b89a" (UniqueName: "kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007") on node "master-2" Oct 11 10:55:44.080106 master-2 kubenswrapper[4776]: I1011 10:55:44.080039 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:44.089566 master-0 kubenswrapper[4790]: I1011 10:55:44.089519 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:44.123409 master-0 kubenswrapper[4790]: I1011 10:55:44.122487 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:44.182820 master-2 kubenswrapper[4776]: I1011 10:55:44.182761 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186166 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186326 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186394 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vczqt\" (UniqueName: \"kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186427 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186472 4790 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186786 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186881 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188423 master-0 kubenswrapper[4790]: I1011 10:55:44.186935 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts\") pod \"a0a5aa40-0146-4b81-83dd-761d514c557a\" (UID: \"a0a5aa40-0146-4b81-83dd-761d514c557a\") " Oct 11 10:55:44.188929 master-0 kubenswrapper[4790]: I1011 10:55:44.188651 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs" (OuterVolumeSpecName: "logs") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:55:44.191075 master-0 kubenswrapper[4790]: I1011 10:55:44.189260 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:55:44.191933 master-0 kubenswrapper[4790]: I1011 10:55:44.191795 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt" (OuterVolumeSpecName: "kube-api-access-vczqt") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "kube-api-access-vczqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:44.192441 master-0 kubenswrapper[4790]: I1011 10:55:44.192351 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts" (OuterVolumeSpecName: "scripts") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.193402 master-2 kubenswrapper[4776]: I1011 10:55:44.193351 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-2"]
Oct 11 10:55:44.215888 master-0 kubenswrapper[4790]: I1011 10:55:44.215824 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.238497 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-2"]
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: E1011 10:55:44.239230 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239248 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: E1011 10:55:44.239265 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239271 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239480 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239502 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log"
Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.240564 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.245678 master-0 kubenswrapper[4790]: I1011 10:55:44.245611 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.247804 master-2 kubenswrapper[4776]: I1011 10:55:44.244182 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data"
Oct 11 10:55:44.247804 master-2 kubenswrapper[4776]: I1011 10:55:44.244880 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 11 10:55:44.253029 master-1 kubenswrapper[4771]: I1011 10:55:44.252927 4771 generic.go:334] "Generic (PLEG): container finished" podID="c6af8eba-f8bf-47f6-8313-7a902aeb170f" containerID="d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204" exitCode=1
Oct 11 10:55:44.254146 master-1 kubenswrapper[4771]: I1011 10:55:44.253035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerDied","Data":"d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204"}
Oct 11 10:55:44.254146 master-1 kubenswrapper[4771]: I1011 10:55:44.253129 4771 scope.go:117] "RemoveContainer" containerID="18f55509d99d6df6e062c9f98f3f97b0989b5f829acb0a772e9a836bc344b833"
Oct 11 10:55:44.254146 master-1 kubenswrapper[4771]: I1011 10:55:44.253827 4771 scope.go:117] "RemoveContainer" containerID="d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204"
Oct 11 10:55:44.254146 master-1 kubenswrapper[4771]: E1011 10:55:44.254106 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-656ddc8b67-kfkzr_openstack(c6af8eba-f8bf-47f6-8313-7a902aeb170f)\"" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" podUID="c6af8eba-f8bf-47f6-8313-7a902aeb170f"
Oct 11 10:55:44.254146 master-1 kubenswrapper[4771]: I1011 10:55:44.254159 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:55:44.254751 master-0 kubenswrapper[4790]: I1011 10:55:44.252858 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data" (OuterVolumeSpecName: "config-data") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.256004 master-2 kubenswrapper[4776]: I1011 10:55:44.255947 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"]
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283087 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID:
\"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283281 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283376 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283424 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283509 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289274 4790 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289318 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-scripts\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289329 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-logs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289341 4790 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0a5aa40-0146-4b81-83dd-761d514c557a-httpd-run\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289353 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vczqt\" (UniqueName: \"kubernetes.io/projected/a0a5aa40-0146-4b81-83dd-761d514c557a-kube-api-access-vczqt\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289375 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.289390 master-0 kubenswrapper[4790]: I1011 10:55:44.289386 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a5aa40-0146-4b81-83dd-761d514c557a-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.309173 master-0 kubenswrapper[4790]: I1011 10:55:44.309126 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39" (OuterVolumeSpecName: "glance") pod "a0a5aa40-0146-4b81-83dd-761d514c557a" (UID: "a0a5aa40-0146-4b81-83dd-761d514c557a"). InnerVolumeSpecName "pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5".
PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 11 10:55:44.347001 master-1 kubenswrapper[4771]: I1011 10:55:44.346909 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:55:44.352830 master-1 kubenswrapper[4771]: I1011 10:55:44.352759 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:55:44.385356 master-2 kubenswrapper[4776]: I1011 10:55:44.385297 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385397 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385432 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385485 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385514 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385542 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385584 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.386123 master-2 kubenswrapper[4776]: I1011 10:55:44.386079 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.386206 master-2 kubenswrapper[4776]: I1011 10:55:44.386161 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.387200 master-1 kubenswrapper[4771]: I1011 10:55:44.387142 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:55:44.387562 master-1 kubenswrapper[4771]: E1011 10:55:44.387539 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.387622 master-1 kubenswrapper[4771]: I1011 10:55:44.387564 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.387622 master-1 kubenswrapper[4771]: E1011 10:55:44.387585 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-central-agent"
Oct 11 10:55:44.387622 master-1 kubenswrapper[4771]: I1011 10:55:44.387594 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-central-agent"
Oct 11 10:55:44.387622 master-1 kubenswrapper[4771]: E1011 10:55:44.387616 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="proxy-httpd"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: I1011 10:55:44.387625 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="proxy-httpd"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: E1011 10:55:44.387656 4771
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="sg-core"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: I1011 10:55:44.387665 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="sg-core"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: E1011 10:55:44.387684 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="init"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: I1011 10:55:44.387692 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="init"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: E1011 10:55:44.387712 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-notification-agent"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: I1011 10:55:44.387721 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-notification-agent"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: E1011 10:55:44.387740 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="dnsmasq-dns"
Oct 11 10:55:44.387823 master-1 kubenswrapper[4771]: I1011 10:55:44.387748 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="dnsmasq-dns"
Oct 11 10:55:44.387911 master-2 kubenswrapper[4776]: I1011 10:55:44.387870 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:55:44.387911 master-2 kubenswrapper[4776]: I1011 10:55:44.387897 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/977628254c2695ff17425dccc1fbe376fb7c4f4d8dfcfd87eb3a48ca9779afa1/globalmount\"" pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.387934 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="sg-core"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.387951 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.387968 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="proxy-httpd"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.387982 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-central-agent"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.388001 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" containerName="ceilometer-notification-agent"
Oct 11 10:55:44.388184 master-1 kubenswrapper[4771]: I1011 10:55:44.388014 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50b2fec-a3b6-4245-9080-5987b411b581" containerName="dnsmasq-dns"
Oct 11 10:55:44.388545 master-1 kubenswrapper[4771]: E1011 10:55:44.388201 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.388545 master-1 kubenswrapper[4771]: I1011 10:55:44.388213 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.388545 master-1 kubenswrapper[4771]: I1011 10:55:44.388530 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0657ee5-2e60-4a96-905e-814f46a72970" containerName="heat-api"
Oct 11 10:55:44.389407 master-2 kubenswrapper[4776]: I1011 10:55:44.389359 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.390791 master-1 kubenswrapper[4771]: I1011 10:55:44.390677 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:55:44.391636 master-2 kubenswrapper[4776]: I1011 10:55:44.391580 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.393095 master-2 kubenswrapper[4776]: I1011 10:55:44.393047 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.393387 master-0 kubenswrapper[4790]: I1011 10:55:44.393336 4790 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") on node \"master-0\" "
Oct 11 10:55:44.394542 master-2 kubenswrapper[4776]: I1011 10:55:44.394495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.406143 master-1 kubenswrapper[4771]: I1011 10:55:44.406081 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 10:55:44.406655 master-1 kubenswrapper[4771]: I1011 10:55:44.406265 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 10:55:44.427417 master-0 kubenswrapper[4790]: I1011 10:55:44.427374 4790 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 11 10:55:44.427679 master-0 kubenswrapper[4790]: I1011 10:55:44.427653 4790 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5" (UniqueName: "kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39") on node "master-0"
Oct 11 10:55:44.448059 master-1 kubenswrapper[4771]: I1011 10:55:44.447986 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736a75e7-8c74-4862-9ac6-3b4c2d0d721d" path="/var/lib/kubelet/pods/736a75e7-8c74-4862-9ac6-3b4c2d0d721d/volumes"
Oct 11 10:55:44.496644 master-0 kubenswrapper[4790]: I1011 10:55:44.496477 4790 reconciler_common.go:293] "Volume detached for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.519379 master-1 kubenswrapper[4771]: I1011 10:55:44.519158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl7xd\" (UniqueName: \"kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.519379 master-1 kubenswrapper[4771]: I1011 10:55:44.519288 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.519379 master-1 kubenswrapper[4771]: I1011 10:55:44.519321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.519736 master-1 kubenswrapper[4771]: I1011 10:55:44.519613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.519736 master-1 kubenswrapper[4771]: I1011 10:55:44.519668 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.520157 master-1 kubenswrapper[4771]: I1011 10:55:44.520081 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.520217 master-1 kubenswrapper[4771]: I1011 10:55:44.520171 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.544578 master-1 kubenswrapper[4771]: I1011 10:55:44.544510 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:55:44.556894 master-2 kubenswrapper[4776]: I1011 10:55:44.555314 4776 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2"
Oct 11 10:55:44.621433 master-1 kubenswrapper[4771]: I1011 10:55:44.621375 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621478 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621640 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl7xd\" (UniqueName: \"kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621659 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.621723 master-1 kubenswrapper[4771]: I1011 10:55:44.621703 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.622872 master-1 kubenswrapper[4771]: I1011 10:55:44.622848 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.623005 master-1 kubenswrapper[4771]: I1011 10:55:44.622938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.626235 master-1 kubenswrapper[4771]: I1011 10:55:44.626207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.626322 master-1 kubenswrapper[4771]: I1011 10:55:44.626259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.626913 master-1 kubenswrapper[4771]: I1011 10:55:44.626386 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.626913 master-1 kubenswrapper[4771]: I1011 10:55:44.626827 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.635113 master-0 kubenswrapper[4790]: I1011 10:55:44.635073 4790 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-cfnapi-89f9b4488-8vvt9"
Oct 11 10:55:44.654421 master-1 kubenswrapper[4771]: I1011 10:55:44.654332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl7xd\" (UniqueName: \"kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd\") pod \"ceilometer-0\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " pod="openstack/ceilometer-0"
Oct 11 10:55:44.700379 master-0 kubenswrapper[4790]: I1011 10:55:44.700303 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle\") pod \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") "
Oct 11 10:55:44.700645 master-0 kubenswrapper[4790]: I1011 10:55:44.700460 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxk4p\" (UniqueName: \"kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p\") pod \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") "
Oct 11 10:55:44.700645 master-0 kubenswrapper[4790]: I1011 10:55:44.700533 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data\") pod \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") "
Oct 11 10:55:44.700782 master-0 kubenswrapper[4790]: I1011 10:55:44.700698 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom\") pod \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\" (UID: \"90ca8fc6-bc53-461b-8384-ca8344e8abb1\") "
Oct 11 10:55:44.705327 master-0 kubenswrapper[4790]: I1011 10:55:44.705294 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90ca8fc6-bc53-461b-8384-ca8344e8abb1" (UID: "90ca8fc6-bc53-461b-8384-ca8344e8abb1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.713146 master-0 kubenswrapper[4790]: I1011 10:55:44.713067 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p" (OuterVolumeSpecName: "kube-api-access-lxk4p") pod "90ca8fc6-bc53-461b-8384-ca8344e8abb1" (UID: "90ca8fc6-bc53-461b-8384-ca8344e8abb1"). InnerVolumeSpecName "kube-api-access-lxk4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:44.728934 master-1 kubenswrapper[4771]: I1011 10:55:44.728871 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:55:44.763237 master-0 kubenswrapper[4790]: I1011 10:55:44.763041 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90ca8fc6-bc53-461b-8384-ca8344e8abb1" (UID: "90ca8fc6-bc53-461b-8384-ca8344e8abb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.788376 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"a0a5aa40-0146-4b81-83dd-761d514c557a","Type":"ContainerDied","Data":"a7f963bb2db0e5602164d87267bd8833838260ebb96c568d5c14b7ec412ff1e4"}
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.788450 4790 scope.go:117] "RemoveContainer" containerID="c15b47f90fa6f6a74e4ba2cf37c1852384ef4ec1575e5ae98a981dfe2894d2d7"
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.788606 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2"
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.802307 4790 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data-custom\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.802338 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.804754 master-0 kubenswrapper[4790]: I1011 10:55:44.802350 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxk4p\" (UniqueName: \"kubernetes.io/projected/90ca8fc6-bc53-461b-8384-ca8344e8abb1-kube-api-access-lxk4p\") on node \"master-0\" DevicePath \"\""
Oct 11 10:55:44.827628 master-0 kubenswrapper[4790]: I1011 10:55:44.818404 4790 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" Oct 11 10:55:44.827628 master-0 kubenswrapper[4790]: I1011 10:55:44.818872 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-89f9b4488-8vvt9" event={"ID":"90ca8fc6-bc53-461b-8384-ca8344e8abb1","Type":"ContainerDied","Data":"2889f39d4725531c10441fa9236d4ba817fb73083c92ada0288c6f7dfdb54987"} Oct 11 10:55:44.832906 master-0 kubenswrapper[4790]: I1011 10:55:44.828551 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data" (OuterVolumeSpecName: "config-data") pod "90ca8fc6-bc53-461b-8384-ca8344e8abb1" (UID: "90ca8fc6-bc53-461b-8384-ca8344e8abb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:44.870747 master-0 kubenswrapper[4790]: I1011 10:55:44.870076 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:55:44.880314 master-0 kubenswrapper[4790]: I1011 10:55:44.880243 4790 scope.go:117] "RemoveContainer" containerID="373e10b5f3797c10bf2f289215e447ba6ede87d322784f4e67d64c872023c089" Oct 11 10:55:44.889670 master-0 kubenswrapper[4790]: I1011 10:55:44.889623 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:55:44.905350 master-0 kubenswrapper[4790]: I1011 10:55:44.905315 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90ca8fc6-bc53-461b-8384-ca8344e8abb1-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:44.926291 master-0 kubenswrapper[4790]: I1011 10:55:44.925889 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:55:44.926291 master-0 kubenswrapper[4790]: E1011 10:55:44.926294 4790 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926309 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: E1011 10:55:44.926333 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37e1fe6-6e89-4407-a40f-cf494a35eccd" containerName="mariadb-account-create" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926340 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37e1fe6-6e89-4407-a40f-cf494a35eccd" containerName="mariadb-account-create" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: E1011 10:55:44.926347 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926354 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: E1011 10:55:44.926381 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41b60ff-457d-4c45-8b56-4523c5c0097f" containerName="heat-api" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926386 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41b60ff-457d-4c45-8b56-4523c5c0097f" containerName="heat-api" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: E1011 10:55:44.926393 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-log" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926399 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-log" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: E1011 
10:55:44.926412 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-httpd" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926418 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-httpd" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926548 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-log" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926557 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" containerName="glance-httpd" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926566 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926574 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41b60ff-457d-4c45-8b56-4523c5c0097f" containerName="heat-api" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926595 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" containerName="heat-cfnapi" Oct 11 10:55:44.926726 master-0 kubenswrapper[4790]: I1011 10:55:44.926611 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37e1fe6-6e89-4407-a40f-cf494a35eccd" containerName="mariadb-account-create" Oct 11 10:55:44.944159 master-0 kubenswrapper[4790]: I1011 10:55:44.944095 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:44.957973 master-0 kubenswrapper[4790]: I1011 10:55:44.953599 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:55:44.957973 master-0 kubenswrapper[4790]: I1011 10:55:44.954402 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:55:44.957973 master-0 kubenswrapper[4790]: I1011 10:55:44.954592 4790 scope.go:117] "RemoveContainer" containerID="471d5706c589d3886adf86b8614c32749579ddcf4ce9dfdf7d21941669c457bf" Oct 11 10:55:44.978735 master-0 kubenswrapper[4790]: I1011 10:55:44.972002 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"] Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012345 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012412 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012453 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-httpd-run\") pod 
\"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012492 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012534 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012572 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012597 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.013522 master-0 kubenswrapper[4790]: I1011 10:55:45.012626 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lgt\" (UniqueName: \"kubernetes.io/projected/243e93fe-e6cd-47af-95ed-3f141cb74deb-kube-api-access-n4lgt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.115048 master-0 kubenswrapper[4790]: I1011 10:55:45.114982 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.115195 master-0 kubenswrapper[4790]: I1011 10:55:45.115093 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lgt\" (UniqueName: \"kubernetes.io/projected/243e93fe-e6cd-47af-95ed-3f141cb74deb-kube-api-access-n4lgt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.115195 master-0 kubenswrapper[4790]: I1011 10:55:45.115138 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.115297 master-0 kubenswrapper[4790]: I1011 10:55:45.115213 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " 
pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116034 master-0 kubenswrapper[4790]: I1011 10:55:45.116000 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-httpd-run\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116093 master-0 kubenswrapper[4790]: I1011 10:55:45.116051 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116093 master-0 kubenswrapper[4790]: I1011 10:55:45.116081 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116184 master-0 kubenswrapper[4790]: I1011 10:55:45.116130 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116687 master-0 kubenswrapper[4790]: I1011 10:55:45.116635 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-httpd-run\") pod \"glance-b5802-default-internal-api-2\" (UID: 
\"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.116752 master-0 kubenswrapper[4790]: I1011 10:55:45.116655 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/243e93fe-e6cd-47af-95ed-3f141cb74deb-logs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.117602 master-0 kubenswrapper[4790]: I1011 10:55:45.117570 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:55:45.117668 master-0 kubenswrapper[4790]: I1011 10:55:45.117601 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b0c7c7eacbecbf6beec44181cd1a14327b215e622b505cc0fbc4653c9c57c6ce/globalmount\"" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.118658 master-0 kubenswrapper[4790]: I1011 10:55:45.118590 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.119701 master-0 kubenswrapper[4790]: I1011 10:55:45.119645 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-scripts\") pod \"glance-b5802-default-internal-api-2\" (UID: 
\"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.131949 master-0 kubenswrapper[4790]: I1011 10:55:45.131914 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-internal-tls-certs\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.134091 master-0 kubenswrapper[4790]: I1011 10:55:45.134058 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/243e93fe-e6cd-47af-95ed-3f141cb74deb-config-data\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.141668 master-0 kubenswrapper[4790]: I1011 10:55:45.141617 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lgt\" (UniqueName: \"kubernetes.io/projected/243e93fe-e6cd-47af-95ed-3f141cb74deb-kube-api-access-n4lgt\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:45.166236 master-0 kubenswrapper[4790]: I1011 10:55:45.166175 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:45.173009 master-0 kubenswrapper[4790]: I1011 10:55:45.172281 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-89f9b4488-8vvt9"] Oct 11 10:55:45.199074 master-1 kubenswrapper[4771]: I1011 10:55:45.198911 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:45.263825 master-1 kubenswrapper[4771]: I1011 10:55:45.263768 4771 scope.go:117] "RemoveContainer" 
containerID="d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204" Oct 11 10:55:45.264393 master-1 kubenswrapper[4771]: E1011 10:55:45.264076 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-656ddc8b67-kfkzr_openstack(c6af8eba-f8bf-47f6-8313-7a902aeb170f)\"" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" podUID="c6af8eba-f8bf-47f6-8313-7a902aeb170f" Oct 11 10:55:45.264447 master-1 kubenswrapper[4771]: I1011 10:55:45.264337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerStarted","Data":"84ce7b87c3fe4abde506e09dd4965353c30550d9eae5707b3cd6ecad602405a9"} Oct 11 10:55:45.271955 master-2 kubenswrapper[4776]: I1011 10:55:45.271864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:45.283558 master-1 kubenswrapper[4771]: I1011 10:55:45.283501 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:45.283558 master-1 kubenswrapper[4771]: I1011 10:55:45.283539 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:55:45.466457 master-2 kubenswrapper[4776]: I1011 10:55:45.466398 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:45.754426 master-2 kubenswrapper[4776]: I1011 10:55:45.754107 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:45.982732 master-0 kubenswrapper[4790]: I1011 10:55:45.982194 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-db4348ad-f9a3-4500-a9f3-efd0f3ffb9a5\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8217a7c0-7507-4763-8106-d527748a7b39\") pod \"glance-b5802-default-internal-api-2\" (UID: \"243e93fe-e6cd-47af-95ed-3f141cb74deb\") " pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:46.073076 master-2 kubenswrapper[4776]: I1011 10:55:46.072923 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" path="/var/lib/kubelet/pods/a0831eef-0c2e-4d09-a44b-7276f30bc1cf/volumes" Oct 11 10:55:46.275986 master-1 kubenswrapper[4771]: I1011 10:55:46.275828 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerStarted","Data":"927b471b1fa44ac0453124772c1c315e44a9bd395c3c5d07c01293a49d70a8ef"} Oct 11 10:55:46.277226 master-1 kubenswrapper[4771]: I1011 10:55:46.277172 4771 scope.go:117] "RemoveContainer" containerID="d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204" Oct 11 10:55:46.277605 master-1 kubenswrapper[4771]: E1011 10:55:46.277568 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-656ddc8b67-kfkzr_openstack(c6af8eba-f8bf-47f6-8313-7a902aeb170f)\"" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" podUID="c6af8eba-f8bf-47f6-8313-7a902aeb170f" Oct 11 10:55:46.302008 master-2 kubenswrapper[4776]: I1011 
10:55:46.301915 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:46.302848 master-2 kubenswrapper[4776]: I1011 10:55:46.302285 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-86bdd47775-gpz8z" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" containerID="cri-o://d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" gracePeriod=60 Oct 11 10:55:46.304470 master-0 kubenswrapper[4790]: I1011 10:55:46.304388 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ca8fc6-bc53-461b-8384-ca8344e8abb1" path="/var/lib/kubelet/pods/90ca8fc6-bc53-461b-8384-ca8344e8abb1/volumes" Oct 11 10:55:46.305061 master-0 kubenswrapper[4790]: I1011 10:55:46.305032 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a5aa40-0146-4b81-83dd-761d514c557a" path="/var/lib/kubelet/pods/a0a5aa40-0146-4b81-83dd-761d514c557a/volumes" Oct 11 10:55:46.335370 master-2 kubenswrapper[4776]: E1011 10:55:46.335097 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.345590 master-2 kubenswrapper[4776]: E1011 10:55:46.345511 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.348703 master-2 kubenswrapper[4776]: E1011 10:55:46.347414 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.348703 master-2 kubenswrapper[4776]: E1011 10:55:46.347456 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-86bdd47775-gpz8z" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:55:46.502952 master-0 kubenswrapper[4790]: I1011 10:55:46.502882 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:46.530318 master-1 kubenswrapper[4771]: I1011 10:55:46.529635 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a338-account-create-v4xzh"] Oct 11 10:55:46.531701 master-1 kubenswrapper[4771]: I1011 10:55:46.531668 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a338-account-create-v4xzh" Oct 11 10:55:46.534823 master-1 kubenswrapper[4771]: I1011 10:55:46.534750 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 11 10:55:46.563804 master-1 kubenswrapper[4771]: I1011 10:55:46.563665 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlsv\" (UniqueName: \"kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv\") pod \"nova-api-a338-account-create-v4xzh\" (UID: \"7ac9af7f-afc6-4d4d-9923-db14ac820459\") " pod="openstack/nova-api-a338-account-create-v4xzh" Oct 11 10:55:46.665937 master-1 kubenswrapper[4771]: I1011 10:55:46.665841 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlsv\" (UniqueName: \"kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv\") pod \"nova-api-a338-account-create-v4xzh\" (UID: \"7ac9af7f-afc6-4d4d-9923-db14ac820459\") " pod="openstack/nova-api-a338-account-create-v4xzh" Oct 11 10:55:46.689848 master-2 kubenswrapper[4776]: I1011 10:55:46.689790 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:46.694235 master-1 kubenswrapper[4771]: I1011 10:55:46.694164 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlsv\" (UniqueName: \"kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv\") pod \"nova-api-a338-account-create-v4xzh\" (UID: \"7ac9af7f-afc6-4d4d-9923-db14ac820459\") " pod="openstack/nova-api-a338-account-create-v4xzh" Oct 11 10:55:46.864467 master-1 kubenswrapper[4771]: I1011 10:55:46.864399 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-v4xzh"] Oct 11 10:55:46.907334 master-1 kubenswrapper[4771]: I1011 10:55:46.906521 4771 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-v4xzh"
Oct 11 10:55:46.909813 master-2 kubenswrapper[4776]: I1011 10:55:46.909750 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"216f96730d54b66928bbd66a1d8ed9534f53af6a012174f7434f249334d47fb5"}
Oct 11 10:55:46.949283 master-0 kubenswrapper[4790]: I1011 10:55:46.949170 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-768f954cfc-9xg22"
Oct 11 10:55:47.301389 master-1 kubenswrapper[4771]: I1011 10:55:47.301139 4771 generic.go:334] "Generic (PLEG): container finished" podID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerID="b94dfe1997cbb3d378d19012a9b6401bc1cef35489c7ea7be575908bfe56b3a0" exitCode=0
Oct 11 10:55:47.301389 master-1 kubenswrapper[4771]: I1011 10:55:47.301203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerDied","Data":"b94dfe1997cbb3d378d19012a9b6401bc1cef35489c7ea7be575908bfe56b3a0"}
Oct 11 10:55:47.305377 master-1 kubenswrapper[4771]: I1011 10:55:47.304109 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerStarted","Data":"e699798eb3d463b27b94ec4cd6a4bacb981768b5034628ba7f0e14e667c53445"}
Oct 11 10:55:47.313802 master-2 kubenswrapper[4776]: I1011 10:55:47.313701 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"]
Oct 11 10:55:47.315442 master-2 kubenswrapper[4776]: I1011 10:55:47.315397 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw"
Oct 11 10:55:47.333788 master-2 kubenswrapper[4776]: I1011 10:55:47.323498 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Oct 11 10:55:47.350368 master-2 kubenswrapper[4776]: I1011 10:55:47.350069 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:55:47.350572 master-2 kubenswrapper[4776]: I1011 10:55:47.350372 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" containerID="cri-o://459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d" gracePeriod=10
Oct 11 10:55:47.362744 master-2 kubenswrapper[4776]: I1011 10:55:47.362474 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"]
Oct 11 10:55:47.391735 master-0 kubenswrapper[4790]: I1011 10:55:47.391650 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-volume-lvm-iscsi-0"
Oct 11 10:55:47.451650 master-2 kubenswrapper[4776]: I1011 10:55:47.451600 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw"
Oct 11 10:55:47.466387 master-1 kubenswrapper[4771]: I1011 10:55:47.465718 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-v4xzh"]
Oct 11 10:55:47.470610 master-1 kubenswrapper[4771]: W1011 10:55:47.468425 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9af7f_afc6_4d4d_9923_db14ac820459.slice/crio-9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f WatchSource:0}: Error finding container 9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f: Status 404 returned error can't find the container with id 9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f
Oct 11 10:55:47.556049 master-2 kubenswrapper[4776]: I1011 10:55:47.554722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw"
Oct 11 10:55:47.699607 master-1 kubenswrapper[4771]: I1011 10:55:47.699507 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chcrd"]
Oct 11 10:55:47.701333 master-1 kubenswrapper[4771]: I1011 10:55:47.701279 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:47.708149 master-1 kubenswrapper[4771]: I1011 10:55:47.708073 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Oct 11 10:55:47.708616 master-1 kubenswrapper[4771]: I1011 10:55:47.708556 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Oct 11 10:55:47.784062 master-1 kubenswrapper[4771]: I1011 10:55:47.783970 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chcrd"]
Oct 11 10:55:47.784451 master-2 kubenswrapper[4776]: I1011 10:55:47.784404 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw"
Oct 11 10:55:47.885970 master-0 kubenswrapper[4790]: I1011 10:55:47.885903 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-2"]
Oct 11 10:55:47.907551 master-1 kubenswrapper[4771]: I1011 10:55:47.907487 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:47.907791 master-1 kubenswrapper[4771]: I1011 10:55:47.907585 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:47.907791 master-1 kubenswrapper[4771]: I1011 10:55:47.907636 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v6lx\" (UniqueName: \"kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:47.907791 master-1 kubenswrapper[4771]: I1011 10:55:47.907736 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:47.941004 master-2 kubenswrapper[4776]: I1011 10:55:47.940955 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"d0b6a6a7c2caccf14853e2a4c52cdb3f6c8df48048d7259941eaf69c074c04ff"}
Oct 11 10:55:47.944528 master-2 kubenswrapper[4776]: I1011 10:55:47.944494 4776 generic.go:334] "Generic (PLEG): container finished" podID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerID="459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d" exitCode=0
Oct 11 10:55:47.944897 master-2 kubenswrapper[4776]: I1011 10:55:47.944549 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d"}
Oct 11 10:55:48.009181 master-1 kubenswrapper[4771]: I1011 10:55:48.009057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.009181 master-1 kubenswrapper[4771]: I1011 10:55:48.009123 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.009181 master-1 kubenswrapper[4771]: I1011 10:55:48.009169 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.009556 master-1 kubenswrapper[4771]: I1011 10:55:48.009207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v6lx\" (UniqueName: \"kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.014310 master-1 kubenswrapper[4771]: I1011 10:55:48.014243 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.015291 master-1 kubenswrapper[4771]: I1011 10:55:48.015224 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.015817 master-1 kubenswrapper[4771]: I1011 10:55:48.015774 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.055324 master-2 kubenswrapper[4776]: I1011 10:55:48.055200 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw"
Oct 11 10:55:48.065851 master-1 kubenswrapper[4771]: I1011 10:55:48.065771 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v6lx\" (UniqueName: \"kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx\") pod \"nova-cell0-conductor-db-sync-chcrd\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.323320 master-1 kubenswrapper[4771]: I1011 10:55:48.323150 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerStarted","Data":"dea9e948550286a77692eb6be22395f88e3b28cb639cfe7f80a137916ddb5ac1"}
Oct 11 10:55:48.326075 master-1 kubenswrapper[4771]: I1011 10:55:48.325970 4771 generic.go:334] "Generic (PLEG): container finished" podID="7ac9af7f-afc6-4d4d-9923-db14ac820459" containerID="e1ee0992af169f3773493c300780fafe6521ac72bd4a220402d3338c4c92c6fb" exitCode=0
Oct 11 10:55:48.326075 master-1 kubenswrapper[4771]: I1011 10:55:48.326042 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-v4xzh" event={"ID":"7ac9af7f-afc6-4d4d-9923-db14ac820459","Type":"ContainerDied","Data":"e1ee0992af169f3773493c300780fafe6521ac72bd4a220402d3338c4c92c6fb"}
Oct 11 10:55:48.326335 master-1 kubenswrapper[4771]: I1011 10:55:48.326092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-v4xzh" event={"ID":"7ac9af7f-afc6-4d4d-9923-db14ac820459","Type":"ContainerStarted","Data":"9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f"}
Oct 11 10:55:48.327646 master-1 kubenswrapper[4771]: I1011 10:55:48.327553 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chcrd"
Oct 11 10:55:48.463197 master-2 kubenswrapper[4776]: I1011 10:55:48.463138 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj"
Oct 11 10:55:48.582958 master-2 kubenswrapper[4776]: I1011 10:55:48.582870 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583013 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583072 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583110 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583233 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583288 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") "
Oct 11 10:55:48.591792 master-2 kubenswrapper[4776]: I1011 10:55:48.591708 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj" (OuterVolumeSpecName: "kube-api-access-znxxj") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "kube-api-access-znxxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:48.628372 master-2 kubenswrapper[4776]: I1011 10:55:48.628310 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:55:48.628601 master-2 kubenswrapper[4776]: I1011 10:55:48.628482 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:55:48.629131 master-2 kubenswrapper[4776]: I1011 10:55:48.629105 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config" (OuterVolumeSpecName: "config") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:55:48.635214 master-2 kubenswrapper[4776]: I1011 10:55:48.635095 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:55:48.637150 master-2 kubenswrapper[4776]: I1011 10:55:48.637124 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699298 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699340 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699352 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699360 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699369 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699378 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\""
Oct 11 10:55:48.882857 master-0 kubenswrapper[4790]: I1011 10:55:48.881060 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"243e93fe-e6cd-47af-95ed-3f141cb74deb","Type":"ContainerStarted","Data":"533e9496b96514b5451c5a79e329830071f463573e7952e498684538988b8ca9"}
Oct 11 10:55:48.882857 master-0 kubenswrapper[4790]: I1011 10:55:48.881145 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"243e93fe-e6cd-47af-95ed-3f141cb74deb","Type":"ContainerStarted","Data":"b16754812669831b9d867df7082d7952853f0eafc7d573fa6d85c8c76b0a335d"}
Oct 11 10:55:48.944291 master-2 kubenswrapper[4776]: W1011 10:55:48.943866 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6ef0ad_fb25_4af2_a9fc_c89be4b1983b.slice/crio-6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd WatchSource:0}: Error finding container 6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd: Status 404 returned error can't find the container with id 6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd
Oct 11 10:55:48.958087 master-2 kubenswrapper[4776]: I1011 10:55:48.958049 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerStarted","Data":"6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd"}
Oct 11 10:55:48.959943 master-2 kubenswrapper[4776]: I1011 10:55:48.959868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"e706f6ed99469eec9e6d7866abb1c193cc8d11c82b1dff538542449bdc25d671"}
Oct 11 10:55:48.963929 master-2 kubenswrapper[4776]: I1011 10:55:48.963876 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7"}
Oct 11 10:55:48.963929 master-2 kubenswrapper[4776]: I1011 10:55:48.963922 4776 scope.go:117] "RemoveContainer" containerID="459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d"
Oct 11 10:55:48.964362 master-2 kubenswrapper[4776]: I1011 10:55:48.964022 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj"
Oct 11 10:55:49.020912 master-2 kubenswrapper[4776]: I1011 10:55:49.020879 4776 scope.go:117] "RemoveContainer" containerID="304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9"
Oct 11 10:55:49.155799 master-1 kubenswrapper[4771]: I1011 10:55:49.155719 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:49.158197 master-2 kubenswrapper[4776]: I1011 10:55:49.158141 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"]
Oct 11 10:55:49.261072 master-1 kubenswrapper[4771]: I1011 10:55:49.260867 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config\") pod \"362d815c-c6ec-48b0-9891-85d06ad00aed\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") "
Oct 11 10:55:49.261072 master-1 kubenswrapper[4771]: I1011 10:55:49.261011 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config\") pod \"362d815c-c6ec-48b0-9891-85d06ad00aed\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") "
Oct 11 10:55:49.261072 master-1 kubenswrapper[4771]: I1011 10:55:49.261064 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs\") pod \"362d815c-c6ec-48b0-9891-85d06ad00aed\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") "
Oct 11 10:55:49.261560 master-1 kubenswrapper[4771]: I1011 10:55:49.261198 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle\") pod \"362d815c-c6ec-48b0-9891-85d06ad00aed\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") "
Oct 11 10:55:49.261560 master-1 kubenswrapper[4771]: I1011 10:55:49.261283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4zt7\" (UniqueName: \"kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7\") pod \"362d815c-c6ec-48b0-9891-85d06ad00aed\" (UID: \"362d815c-c6ec-48b0-9891-85d06ad00aed\") "
Oct 11 10:55:49.264549 master-1 kubenswrapper[4771]: I1011 10:55:49.264499 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "362d815c-c6ec-48b0-9891-85d06ad00aed" (UID: "362d815c-c6ec-48b0-9891-85d06ad00aed"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:49.265160 master-1 kubenswrapper[4771]: I1011 10:55:49.265117 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7" (OuterVolumeSpecName: "kube-api-access-n4zt7") pod "362d815c-c6ec-48b0-9891-85d06ad00aed" (UID: "362d815c-c6ec-48b0-9891-85d06ad00aed"). InnerVolumeSpecName "kube-api-access-n4zt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:49.298948 master-1 kubenswrapper[4771]: I1011 10:55:49.298846 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vzsw"
Oct 11 10:55:49.298948 master-1 kubenswrapper[4771]: I1011 10:55:49.298958 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vzsw"
Oct 11 10:55:49.305432 master-1 kubenswrapper[4771]: I1011 10:55:49.305333 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "362d815c-c6ec-48b0-9891-85d06ad00aed" (UID: "362d815c-c6ec-48b0-9891-85d06ad00aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:49.306145 master-1 kubenswrapper[4771]: I1011 10:55:49.306066 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config" (OuterVolumeSpecName: "config") pod "362d815c-c6ec-48b0-9891-85d06ad00aed" (UID: "362d815c-c6ec-48b0-9891-85d06ad00aed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:49.324301 master-1 kubenswrapper[4771]: I1011 10:55:49.324241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "362d815c-c6ec-48b0-9891-85d06ad00aed" (UID: "362d815c-c6ec-48b0-9891-85d06ad00aed"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:55:49.338841 master-1 kubenswrapper[4771]: I1011 10:55:49.338761 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-stzg5" event={"ID":"362d815c-c6ec-48b0-9891-85d06ad00aed","Type":"ContainerDied","Data":"f93602e6ba46cd0010c3c32ac26a6e16a985dc38b7a78af0515d53b800f6c9e5"}
Oct 11 10:55:49.339658 master-1 kubenswrapper[4771]: I1011 10:55:49.338877 4771 scope.go:117] "RemoveContainer" containerID="99502f3eb6699cc67bcf11374ee8446bc01a1a157ce8024301c91ebed596f3f2"
Oct 11 10:55:49.340536 master-1 kubenswrapper[4771]: I1011 10:55:49.340496 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-stzg5"
Oct 11 10:55:49.363661 master-1 kubenswrapper[4771]: I1011 10:55:49.363593 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4zt7\" (UniqueName: \"kubernetes.io/projected/362d815c-c6ec-48b0-9891-85d06ad00aed-kube-api-access-n4zt7\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:49.364043 master-1 kubenswrapper[4771]: I1011 10:55:49.364030 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:49.364161 master-1 kubenswrapper[4771]: I1011 10:55:49.364148 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-httpd-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:49.364257 master-1 kubenswrapper[4771]: I1011 10:55:49.364245 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-ovndb-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:49.364340 master-1 kubenswrapper[4771]: I1011 10:55:49.364329 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d815c-c6ec-48b0-9891-85d06ad00aed-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:49.365552 master-1 kubenswrapper[4771]: I1011 10:55:49.364072 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vzsw"
Oct 11 10:55:49.404506 master-1 kubenswrapper[4771]: I1011 10:55:49.404445 4771 scope.go:117] "RemoveContainer" containerID="b94dfe1997cbb3d378d19012a9b6401bc1cef35489c7ea7be575908bfe56b3a0"
Oct 11 10:55:49.422333 master-1 kubenswrapper[4771]: I1011 10:55:49.422265 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vzsw"
Oct 11 10:55:49.423240 master-2 kubenswrapper[4776]: I1011 10:55:49.423033 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-2" podStartSLOduration=5.423017016 podStartE2EDuration="5.423017016s" podCreationTimestamp="2025-10-11 10:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:49.420374725 +0000 UTC m=+1784.204801424" watchObservedRunningTime="2025-10-11 10:55:49.423017016 +0000 UTC m=+1784.207443725"
Oct 11 10:55:49.481936 master-1 kubenswrapper[4771]: I1011 10:55:49.481824 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chcrd"]
Oct 11 10:55:49.487557 master-1 kubenswrapper[4771]: W1011 10:55:49.487495 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5 WatchSource:0}: Error finding container 0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5: Status 404 returned error can't find the container with id 0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5
Oct 11 10:55:49.671588 master-2 kubenswrapper[4776]: I1011 10:55:49.671528 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:55:49.741927 master-1 kubenswrapper[4771]: I1011 10:55:49.741846 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"]
Oct 11 10:55:49.745658 master-2 kubenswrapper[4776]: I1011 10:55:49.745596 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"]
Oct 11 10:55:49.762970 master-1 kubenswrapper[4771]: I1011 10:55:49.760604 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887b79bcd-stzg5"]
Oct 11 10:55:49.810161 master-1 kubenswrapper[4771]: I1011 10:55:49.810082 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-v4xzh"
Oct 11 10:55:49.884447 master-1 kubenswrapper[4771]: I1011 10:55:49.884346 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nlsv\" (UniqueName: \"kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv\") pod \"7ac9af7f-afc6-4d4d-9923-db14ac820459\" (UID: \"7ac9af7f-afc6-4d4d-9923-db14ac820459\") "
Oct 11 10:55:49.888064 master-1 kubenswrapper[4771]: I1011 10:55:49.887980 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv" (OuterVolumeSpecName: "kube-api-access-8nlsv") pod "7ac9af7f-afc6-4d4d-9923-db14ac820459" (UID: "7ac9af7f-afc6-4d4d-9923-db14ac820459"). InnerVolumeSpecName "kube-api-access-8nlsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:55:49.894653 master-0 kubenswrapper[4790]: I1011 10:55:49.894429 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-2" event={"ID":"243e93fe-e6cd-47af-95ed-3f141cb74deb","Type":"ContainerStarted","Data":"e7459295cac3cf27355164ee90104191dcd8d154f17d3647741ac0398a118224"}
Oct 11 10:55:49.982348 master-2 kubenswrapper[4776]: I1011 10:55:49.982220 4776 generic.go:334] "Generic (PLEG): container finished" podID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerID="b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369" exitCode=0
Oct 11 10:55:49.982348 master-2 kubenswrapper[4776]: I1011 10:55:49.982318 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerDied","Data":"b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369"}
Oct 11 10:55:49.994102 master-1 kubenswrapper[4771]: I1011 10:55:49.987991 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nlsv\" (UniqueName: \"kubernetes.io/projected/7ac9af7f-afc6-4d4d-9923-db14ac820459-kube-api-access-8nlsv\") on node \"master-1\" DevicePath \"\""
Oct 11 10:55:50.021111 master-0 kubenswrapper[4790]: I1011 10:55:50.020979 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-2" podStartSLOduration=6.020947333 podStartE2EDuration="6.020947333s" podCreationTimestamp="2025-10-11 10:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:50.014093775 +0000 UTC m=+1026.568554077" watchObservedRunningTime="2025-10-11 10:55:50.020947333 +0000 UTC m=+1026.575407635"
Oct 11 10:55:50.077192 master-2 kubenswrapper[4776]: I1011 10:55:50.077123 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" path="/var/lib/kubelet/pods/70447ad9-31f0-4f6a-8c40-19fbe8141ada/volumes"
Oct 11 10:55:50.123087 master-1 kubenswrapper[4771]: I1011 10:55:50.123011 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"]
Oct 11 10:55:50.351708 master-1 kubenswrapper[4771]: I1011 10:55:50.351631 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerStarted","Data":"b6b58d0d02154c8b742871143df315c38b23b17f3fced652bf59c6663b8ca178"}
Oct 11 10:55:50.352562 master-1 kubenswrapper[4771]: I1011 10:55:50.352281 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 11 10:55:50.353922 master-1 kubenswrapper[4771]: I1011 10:55:50.353884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chcrd" event={"ID":"38267a66-0ebd-44ab-bc7f-cd5703503b74","Type":"ContainerStarted","Data":"0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5"}
Oct 11 10:55:50.359287 master-1 kubenswrapper[4771]: I1011 10:55:50.359249 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-v4xzh"
Oct 11 10:55:50.359588 master-1 kubenswrapper[4771]: I1011 10:55:50.359555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-v4xzh" event={"ID":"7ac9af7f-afc6-4d4d-9923-db14ac820459","Type":"ContainerDied","Data":"9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f"}
Oct 11 10:55:50.359674 master-1 kubenswrapper[4771]: I1011 10:55:50.359597 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b12e8787780eb4c0e7ae7ec5652e12847e4d2cd1f0a158946584ba9e587475f"
Oct 11 10:55:50.461299 master-1 kubenswrapper[4771]: I1011 10:55:50.461122 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" path="/var/lib/kubelet/pods/362d815c-c6ec-48b0-9891-85d06ad00aed/volumes"
Oct 11 10:55:50.542413 master-0 kubenswrapper[4790]: I1011 10:55:50.542317 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-d55d46749-qq6mv"
Oct 11 10:55:50.605800 master-1 kubenswrapper[4771]: I1011 10:55:50.605672 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.934589968 podStartE2EDuration="6.605637139s" podCreationTimestamp="2025-10-11 10:55:44 +0000 UTC" firstStartedPulling="2025-10-11 10:55:45.211733768 +0000 UTC m=+1777.185960209" lastFinishedPulling="2025-10-11 10:55:49.882780939 +0000 UTC m=+1781.857007380" observedRunningTime="2025-10-11 10:55:50.598621075 +0000 UTC m=+1782.572847526" watchObservedRunningTime="2025-10-11 10:55:50.605637139 +0000 UTC m=+1782.579863610"
Oct 11 10:55:51.369655 master-1 kubenswrapper[4771]: I1011 10:55:51.369579 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vzsw" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="registry-server"
containerID="cri-o://00b0738c103a1f8936204d6f12df3d6fd67321868af021b54204d55c141f77ca" gracePeriod=2 Oct 11 10:55:51.701839 master-1 kubenswrapper[4771]: I1011 10:55:51.701649 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:51.702083 master-1 kubenswrapper[4771]: I1011 10:55:51.702041 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-7cddc977f5-9ddgm" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api-log" containerID="cri-o://a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e" gracePeriod=60 Oct 11 10:55:51.923904 master-2 kubenswrapper[4776]: I1011 10:55:51.923850 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:51.965029 master-2 kubenswrapper[4776]: I1011 10:55:51.964984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " Oct 11 10:55:51.968285 master-2 kubenswrapper[4776]: I1011 10:55:51.968240 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224" (OuterVolumeSpecName: "kube-api-access-5l224") pod "be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" (UID: "be6ef0ad-fb25-4af2-a9fc-c89be4b1983b"). InnerVolumeSpecName "kube-api-access-5l224". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:51.998508 master-2 kubenswrapper[4776]: I1011 10:55:51.998448 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerDied","Data":"6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd"} Oct 11 10:55:51.998508 master-2 kubenswrapper[4776]: I1011 10:55:51.998507 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd" Oct 11 10:55:51.998639 master-2 kubenswrapper[4776]: I1011 10:55:51.998514 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:52.041217 master-2 kubenswrapper[4776]: I1011 10:55:52.041164 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:52.069185 master-2 kubenswrapper[4776]: I1011 10:55:52.069097 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:52.318587 master-1 kubenswrapper[4771]: I1011 10:55:52.318071 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:52.344879 master-1 kubenswrapper[4771]: I1011 10:55:52.344782 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.344913 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4pzb\" (UniqueName: \"kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.344989 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.345037 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.345066 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.345095 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345230 master-1 kubenswrapper[4771]: I1011 10:55:52.345235 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.345506 master-1 kubenswrapper[4771]: I1011 10:55:52.345259 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo\") pod \"879970ca-6312-4aec-b8f4-a8a41a0e3797\" (UID: \"879970ca-6312-4aec-b8f4-a8a41a0e3797\") " Oct 11 10:55:52.349682 master-1 kubenswrapper[4771]: I1011 10:55:52.349562 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:52.349682 master-1 kubenswrapper[4771]: I1011 10:55:52.349603 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 11 10:55:52.349682 master-1 kubenswrapper[4771]: I1011 10:55:52.349575 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb" (OuterVolumeSpecName: "kube-api-access-k4pzb") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "kube-api-access-k4pzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:52.349968 master-1 kubenswrapper[4771]: I1011 10:55:52.349882 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs" (OuterVolumeSpecName: "logs") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:52.350578 master-1 kubenswrapper[4771]: I1011 10:55:52.350529 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts" (OuterVolumeSpecName: "scripts") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:52.358564 master-1 kubenswrapper[4771]: I1011 10:55:52.358380 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:52.368843 master-1 kubenswrapper[4771]: I1011 10:55:52.368763 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data" (OuterVolumeSpecName: "config-data") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:52.386196 master-1 kubenswrapper[4771]: I1011 10:55:52.386104 4771 generic.go:334] "Generic (PLEG): container finished" podID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerID="a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e" exitCode=143 Oct 11 10:55:52.386739 master-1 kubenswrapper[4771]: I1011 10:55:52.386237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerDied","Data":"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e"} Oct 11 10:55:52.386739 master-1 kubenswrapper[4771]: I1011 10:55:52.386279 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7cddc977f5-9ddgm" event={"ID":"879970ca-6312-4aec-b8f4-a8a41a0e3797","Type":"ContainerDied","Data":"3182b36cd0161342e4b24feff5f2f372022de500b18e279f9cd6a7f20bca373c"} Oct 11 10:55:52.386739 master-1 kubenswrapper[4771]: I1011 10:55:52.386308 4771 scope.go:117] "RemoveContainer" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:52.387348 master-1 kubenswrapper[4771]: I1011 10:55:52.386835 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7cddc977f5-9ddgm" Oct 11 10:55:52.398547 master-1 kubenswrapper[4771]: I1011 10:55:52.398464 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "879970ca-6312-4aec-b8f4-a8a41a0e3797" (UID: "879970ca-6312-4aec-b8f4-a8a41a0e3797"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:52.410636 master-1 kubenswrapper[4771]: I1011 10:55:52.410440 4771 generic.go:334] "Generic (PLEG): container finished" podID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerID="00b0738c103a1f8936204d6f12df3d6fd67321868af021b54204d55c141f77ca" exitCode=0 Oct 11 10:55:52.410636 master-1 kubenswrapper[4771]: I1011 10:55:52.410515 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerDied","Data":"00b0738c103a1f8936204d6f12df3d6fd67321868af021b54204d55c141f77ca"} Oct 11 10:55:52.422209 master-1 kubenswrapper[4771]: I1011 10:55:52.421998 4771 scope.go:117] "RemoveContainer" containerID="a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e" Oct 11 10:55:52.442764 master-1 kubenswrapper[4771]: I1011 10:55:52.442718 4771 scope.go:117] "RemoveContainer" containerID="ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104" Oct 11 10:55:52.447507 master-1 kubenswrapper[4771]: I1011 10:55:52.447454 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.447507 master-1 kubenswrapper[4771]: I1011 10:55:52.447502 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-logs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447518 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447534 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data-merged\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447551 4771 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/879970ca-6312-4aec-b8f4-a8a41a0e3797-etc-podinfo\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447564 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447578 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4pzb\" (UniqueName: \"kubernetes.io/projected/879970ca-6312-4aec-b8f4-a8a41a0e3797-kube-api-access-k4pzb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.448181 master-1 kubenswrapper[4771]: I1011 10:55:52.447590 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879970ca-6312-4aec-b8f4-a8a41a0e3797-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.459154 master-1 kubenswrapper[4771]: I1011 10:55:52.459101 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:52.465031 master-1 kubenswrapper[4771]: I1011 10:55:52.464984 4771 scope.go:117] "RemoveContainer" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:52.465460 master-1 kubenswrapper[4771]: E1011 10:55:52.465412 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb\": container with ID starting with d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb not found: ID does not exist" containerID="d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb" Oct 11 10:55:52.465529 master-1 kubenswrapper[4771]: I1011 10:55:52.465465 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb"} err="failed to get container status \"d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb\": rpc error: code = NotFound desc = could not find container \"d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb\": container with ID starting with d960a6ebc10578bac963a4210bd8cc373ed3887cebb614d8345d375a88c7faeb not found: ID does not exist" Oct 11 10:55:52.465529 master-1 kubenswrapper[4771]: I1011 10:55:52.465503 4771 scope.go:117] "RemoveContainer" containerID="a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e" Oct 11 10:55:52.465894 master-1 kubenswrapper[4771]: E1011 10:55:52.465860 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e\": container with ID starting with a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e not found: ID does not exist" 
containerID="a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e" Oct 11 10:55:52.465939 master-1 kubenswrapper[4771]: I1011 10:55:52.465891 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e"} err="failed to get container status \"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e\": rpc error: code = NotFound desc = could not find container \"a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e\": container with ID starting with a3bee7ce4b0c367a06f8dac0455b4363a2d71194a9474e87524e9ae41ec4264e not found: ID does not exist" Oct 11 10:55:52.465939 master-1 kubenswrapper[4771]: I1011 10:55:52.465909 4771 scope.go:117] "RemoveContainer" containerID="ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104" Oct 11 10:55:52.466219 master-1 kubenswrapper[4771]: E1011 10:55:52.466186 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104\": container with ID starting with ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104 not found: ID does not exist" containerID="ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104" Oct 11 10:55:52.466276 master-1 kubenswrapper[4771]: I1011 10:55:52.466232 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104"} err="failed to get container status \"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104\": rpc error: code = NotFound desc = could not find container \"ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104\": container with ID starting with ef95a48f33dffbcc5025cb68d51739a664704d5571f4c1f1a0bf787ab1898104 not found: ID does not exist" Oct 11 10:55:52.548950 master-1 
kubenswrapper[4771]: I1011 10:55:52.548768 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sl2p\" (UniqueName: \"kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p\") pod \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " Oct 11 10:55:52.548950 master-1 kubenswrapper[4771]: I1011 10:55:52.548866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content\") pod \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " Oct 11 10:55:52.549501 master-1 kubenswrapper[4771]: I1011 10:55:52.548961 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities\") pod \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\" (UID: \"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44\") " Oct 11 10:55:52.550310 master-1 kubenswrapper[4771]: I1011 10:55:52.550265 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities" (OuterVolumeSpecName: "utilities") pod "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" (UID: "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:52.553500 master-1 kubenswrapper[4771]: I1011 10:55:52.553428 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p" (OuterVolumeSpecName: "kube-api-access-8sl2p") pod "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" (UID: "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44"). InnerVolumeSpecName "kube-api-access-8sl2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:52.659392 master-1 kubenswrapper[4771]: I1011 10:55:52.659291 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" (UID: "1daf9046-c7b7-4c9c-a9b3-76ae17e47e44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:52.665827 master-1 kubenswrapper[4771]: I1011 10:55:52.665770 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.666156 master-1 kubenswrapper[4771]: I1011 10:55:52.666142 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sl2p\" (UniqueName: \"kubernetes.io/projected/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-kube-api-access-8sl2p\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.666235 master-1 kubenswrapper[4771]: I1011 10:55:52.666222 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:52.702062 master-0 kubenswrapper[4790]: I1011 10:55:52.701814 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:52.702600 master-0 kubenswrapper[4790]: I1011 10:55:52.702254 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-1" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-b5802-api-log" containerID="cri-o://60be7f28501f150423ff01b582dcc8cfc5bab82c1c195c3a36e327493870a191" gracePeriod=30 Oct 11 10:55:52.702600 master-0 kubenswrapper[4790]: I1011 10:55:52.702370 4790 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-1" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-api" containerID="cri-o://f0083e208a574369bee7ac5b619444ddb040559a7e6adc8699675b4802ed4ee1" gracePeriod=30 Oct 11 10:55:52.927915 master-0 kubenswrapper[4790]: I1011 10:55:52.927857 4790 generic.go:334] "Generic (PLEG): container finished" podID="30c351cb-246a-4343-a56d-c74fb4be119e" containerID="60be7f28501f150423ff01b582dcc8cfc5bab82c1c195c3a36e327493870a191" exitCode=143 Oct 11 10:55:52.927915 master-0 kubenswrapper[4790]: I1011 10:55:52.927925 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerDied","Data":"60be7f28501f150423ff01b582dcc8cfc5bab82c1c195c3a36e327493870a191"} Oct 11 10:55:53.263379 master-1 kubenswrapper[4771]: I1011 10:55:53.263152 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:53.423193 master-1 kubenswrapper[4771]: I1011 10:55:53.423110 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vzsw" event={"ID":"1daf9046-c7b7-4c9c-a9b3-76ae17e47e44","Type":"ContainerDied","Data":"2e03e915f6f95ecc8f0f52052466e21bd1b0bb1a12eb203399bd0345ac65bccf"} Oct 11 10:55:53.423193 master-1 kubenswrapper[4771]: I1011 10:55:53.423197 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vzsw" Oct 11 10:55:53.424006 master-1 kubenswrapper[4771]: I1011 10:55:53.423211 4771 scope.go:117] "RemoveContainer" containerID="00b0738c103a1f8936204d6f12df3d6fd67321868af021b54204d55c141f77ca" Oct 11 10:55:53.443859 master-2 kubenswrapper[4776]: I1011 10:55:53.443786 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.143:5353: i/o timeout" Oct 11 10:55:53.511778 master-1 kubenswrapper[4771]: E1011 10:55:53.511710 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1daf9046_c7b7_4c9c_a9b3_76ae17e47e44.slice/crio-2e03e915f6f95ecc8f0f52052466e21bd1b0bb1a12eb203399bd0345ac65bccf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1daf9046_c7b7_4c9c_a9b3_76ae17e47e44.slice\": RecentStats: unable to find data in memory cache]" Oct 11 10:55:53.512292 master-1 kubenswrapper[4771]: E1011 10:55:53.512248 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1daf9046_c7b7_4c9c_a9b3_76ae17e47e44.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1daf9046_c7b7_4c9c_a9b3_76ae17e47e44.slice/crio-2e03e915f6f95ecc8f0f52052466e21bd1b0bb1a12eb203399bd0345ac65bccf\": RecentStats: unable to find data in memory cache]" Oct 11 10:55:53.638783 master-1 kubenswrapper[4771]: I1011 10:55:53.638695 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:55:53.639131 master-1 kubenswrapper[4771]: E1011 
10:55:53.639110 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="extract-utilities" Oct 11 10:55:53.639131 master-1 kubenswrapper[4771]: I1011 10:55:53.639125 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="extract-utilities" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: E1011 10:55:53.639152 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="registry-server" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: I1011 10:55:53.639159 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="registry-server" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: E1011 10:55:53.639173 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="extract-content" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: I1011 10:55:53.639180 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="extract-content" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: E1011 10:55:53.639192 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: I1011 10:55:53.639199 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.639212 master-1 kubenswrapper[4771]: E1011 10:55:53.639215 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="init" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639222 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" 
containerName="init" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: E1011 10:55:53.639236 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-httpd" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639244 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-httpd" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: E1011 10:55:53.639254 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac9af7f-afc6-4d4d-9923-db14ac820459" containerName="mariadb-account-create" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639259 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac9af7f-afc6-4d4d-9923-db14ac820459" containerName="mariadb-account-create" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: E1011 10:55:53.639268 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-api" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639273 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-api" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: E1011 10:55:53.639284 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api-log" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639290 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api-log" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639440 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639456 4771 
memory_manager.go:354] "RemoveStaleState removing state" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-api" Oct 11 10:55:53.639491 master-1 kubenswrapper[4771]: I1011 10:55:53.639466 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.639978 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" containerName="registry-server" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.639990 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="362d815c-c6ec-48b0-9891-85d06ad00aed" containerName="neutron-httpd" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.640003 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api-log" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.640010 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac9af7f-afc6-4d4d-9923-db14ac820459" containerName="mariadb-account-create" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: E1011 10:55:53.640241 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.640250 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" containerName="ironic-api" Oct 11 10:55:53.655446 master-1 kubenswrapper[4771]: I1011 10:55:53.642245 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.689317 master-1 kubenswrapper[4771]: I1011 10:55:53.689250 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpqz\" (UniqueName: \"kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.689694 master-1 kubenswrapper[4771]: I1011 10:55:53.689450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.689897 master-1 kubenswrapper[4771]: I1011 10:55:53.689852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.793673 master-1 kubenswrapper[4771]: I1011 10:55:53.793581 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.794079 master-1 kubenswrapper[4771]: I1011 10:55:53.793784 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzpqz\" (UniqueName: 
\"kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.794079 master-1 kubenswrapper[4771]: I1011 10:55:53.793811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.794241 master-1 kubenswrapper[4771]: I1011 10:55:53.794216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.794241 master-1 kubenswrapper[4771]: I1011 10:55:53.794235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:53.978597 master-1 kubenswrapper[4771]: I1011 10:55:53.978435 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:55:53.998545 master-1 kubenswrapper[4771]: I1011 10:55:53.998464 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-7cddc977f5-9ddgm"] Oct 11 10:55:54.230860 master-1 kubenswrapper[4771]: I1011 10:55:54.228815 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"] Oct 11 10:55:54.246496 master-1 
kubenswrapper[4771]: I1011 10:55:54.241873 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzpqz\" (UniqueName: \"kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz\") pod \"community-operators-4bbqs\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:54.274793 master-1 kubenswrapper[4771]: I1011 10:55:54.274738 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:55:54.300684 master-1 kubenswrapper[4771]: I1011 10:55:54.299108 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vzsw"] Oct 11 10:55:54.453461 master-1 kubenswrapper[4771]: I1011 10:55:54.453379 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1daf9046-c7b7-4c9c-a9b3-76ae17e47e44" path="/var/lib/kubelet/pods/1daf9046-c7b7-4c9c-a9b3-76ae17e47e44/volumes" Oct 11 10:55:54.454438 master-1 kubenswrapper[4771]: I1011 10:55:54.454416 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879970ca-6312-4aec-b8f4-a8a41a0e3797" path="/var/lib/kubelet/pods/879970ca-6312-4aec-b8f4-a8a41a0e3797/volumes" Oct 11 10:55:55.468266 master-2 kubenswrapper[4776]: I1011 10:55:55.468204 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.468266 master-2 kubenswrapper[4776]: I1011 10:55:55.468261 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.504728 master-2 kubenswrapper[4776]: I1011 10:55:55.504347 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.509332 master-2 kubenswrapper[4776]: I1011 10:55:55.509276 4776 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.809528 master-0 kubenswrapper[4790]: I1011 10:55:55.809432 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-b5802-api-1" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.130.0.96:8776/healthcheck\": read tcp 10.130.0.2:37106->10.130.0.96:8776: read: connection reset by peer" Oct 11 10:55:55.979183 master-0 kubenswrapper[4790]: I1011 10:55:55.978045 4790 generic.go:334] "Generic (PLEG): container finished" podID="30c351cb-246a-4343-a56d-c74fb4be119e" containerID="f0083e208a574369bee7ac5b619444ddb040559a7e6adc8699675b4802ed4ee1" exitCode=0 Oct 11 10:55:55.979183 master-0 kubenswrapper[4790]: I1011 10:55:55.978109 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerDied","Data":"f0083e208a574369bee7ac5b619444ddb040559a7e6adc8699675b4802ed4ee1"} Oct 11 10:55:56.045072 master-2 kubenswrapper[4776]: I1011 10:55:56.044889 4776 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" exitCode=0 Oct 11 10:55:56.045072 master-2 kubenswrapper[4776]: I1011 10:55:56.044980 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerDied","Data":"d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c"} Oct 11 10:55:56.045378 master-2 kubenswrapper[4776]: I1011 10:55:56.045294 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:56.045378 master-2 kubenswrapper[4776]: I1011 10:55:56.045314 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:56.308097 master-0 kubenswrapper[4790]: I1011 10:55:56.307906 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:56.333793 master-2 kubenswrapper[4776]: E1011 10:55:56.333722 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339255 master-2 kubenswrapper[4776]: E1011 10:55:56.339205 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339758 master-2 kubenswrapper[4776]: E1011 10:55:56.339667 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339851 master-2 kubenswrapper[4776]: E1011 10:55:56.339759 4776 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" 
probeType="Readiness" pod="openstack/heat-engine-86bdd47775-gpz8z" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:55:56.372515 master-0 kubenswrapper[4790]: I1011 10:55:56.372442 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.372866 master-0 kubenswrapper[4790]: I1011 10:55:56.372617 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.372866 master-0 kubenswrapper[4790]: I1011 10:55:56.372647 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.372866 master-0 kubenswrapper[4790]: I1011 10:55:56.372651 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:56.373430 master-0 kubenswrapper[4790]: I1011 10:55:56.373351 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.373660 master-0 kubenswrapper[4790]: I1011 10:55:56.373634 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xghz\" (UniqueName: \"kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.375551 master-0 kubenswrapper[4790]: I1011 10:55:56.374007 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.375551 master-0 kubenswrapper[4790]: I1011 10:55:56.374121 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle\") pod \"30c351cb-246a-4343-a56d-c74fb4be119e\" (UID: \"30c351cb-246a-4343-a56d-c74fb4be119e\") " Oct 11 10:55:56.376662 master-0 kubenswrapper[4790]: I1011 10:55:56.376613 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs" (OuterVolumeSpecName: "logs") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:56.376800 master-0 kubenswrapper[4790]: I1011 10:55:56.376746 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts" (OuterVolumeSpecName: "scripts") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:56.378980 master-0 kubenswrapper[4790]: I1011 10:55:56.378947 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz" (OuterVolumeSpecName: "kube-api-access-5xghz") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "kube-api-access-5xghz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:56.382401 master-0 kubenswrapper[4790]: I1011 10:55:56.381818 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:56.393220 master-0 kubenswrapper[4790]: I1011 10:55:56.392993 4790 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30c351cb-246a-4343-a56d-c74fb4be119e-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.393220 master-0 kubenswrapper[4790]: I1011 10:55:56.393031 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-scripts\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.393220 master-0 kubenswrapper[4790]: I1011 10:55:56.393041 4790 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data-custom\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.393220 master-0 kubenswrapper[4790]: I1011 10:55:56.393053 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xghz\" (UniqueName: \"kubernetes.io/projected/30c351cb-246a-4343-a56d-c74fb4be119e-kube-api-access-5xghz\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.393220 master-0 kubenswrapper[4790]: I1011 10:55:56.393064 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30c351cb-246a-4343-a56d-c74fb4be119e-logs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.412354 master-0 kubenswrapper[4790]: I1011 10:55:56.412267 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:56.425728 master-0 kubenswrapper[4790]: I1011 10:55:56.425643 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data" (OuterVolumeSpecName: "config-data") pod "30c351cb-246a-4343-a56d-c74fb4be119e" (UID: "30c351cb-246a-4343-a56d-c74fb4be119e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:56.494492 master-0 kubenswrapper[4790]: I1011 10:55:56.494368 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.494492 master-0 kubenswrapper[4790]: I1011 10:55:56.494411 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30c351cb-246a-4343-a56d-c74fb4be119e-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:55:56.503916 master-0 kubenswrapper[4790]: I1011 10:55:56.503864 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:56.503916 master-0 kubenswrapper[4790]: I1011 10:55:56.503919 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:56.538045 master-0 kubenswrapper[4790]: I1011 10:55:56.537969 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:56.550306 master-0 kubenswrapper[4790]: I1011 10:55:56.550240 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:56.636909 master-1 kubenswrapper[4771]: I1011 10:55:56.636801 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Oct 11 10:55:56.638131 master-1 kubenswrapper[4771]: I1011 10:55:56.637339 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-central-agent" containerID="cri-o://927b471b1fa44ac0453124772c1c315e44a9bd395c3c5d07c01293a49d70a8ef" gracePeriod=30 Oct 11 10:55:56.638131 master-1 kubenswrapper[4771]: I1011 10:55:56.637544 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="proxy-httpd" containerID="cri-o://b6b58d0d02154c8b742871143df315c38b23b17f3fced652bf59c6663b8ca178" gracePeriod=30 Oct 11 10:55:56.638131 master-1 kubenswrapper[4771]: I1011 10:55:56.637588 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="sg-core" containerID="cri-o://dea9e948550286a77692eb6be22395f88e3b28cb639cfe7f80a137916ddb5ac1" gracePeriod=30 Oct 11 10:55:56.638131 master-1 kubenswrapper[4771]: I1011 10:55:56.637920 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-notification-agent" containerID="cri-o://e699798eb3d463b27b94ec4cd6a4bacb981768b5034628ba7f0e14e667c53445" gracePeriod=30 Oct 11 10:55:56.988405 master-0 kubenswrapper[4790]: I1011 10:55:56.988345 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:56.997502 master-0 kubenswrapper[4790]: I1011 10:55:56.997411 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"30c351cb-246a-4343-a56d-c74fb4be119e","Type":"ContainerDied","Data":"72c3b882472f96be141ddefa786998e8b0390cd596d77062abb0fcaa4a2d580f"} Oct 11 10:55:56.997502 master-0 kubenswrapper[4790]: I1011 10:55:56.997464 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:56.997502 master-0 kubenswrapper[4790]: I1011 10:55:56.997483 4790 scope.go:117] "RemoveContainer" containerID="f0083e208a574369bee7ac5b619444ddb040559a7e6adc8699675b4802ed4ee1" Oct 11 10:55:56.998303 master-0 kubenswrapper[4790]: I1011 10:55:56.998271 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:57.022828 master-0 kubenswrapper[4790]: I1011 10:55:57.022771 4790 scope.go:117] "RemoveContainer" containerID="60be7f28501f150423ff01b582dcc8cfc5bab82c1c195c3a36e327493870a191" Oct 11 10:55:57.036119 master-0 kubenswrapper[4790]: I1011 10:55:57.035979 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:57.057423 master-2 kubenswrapper[4776]: I1011 10:55:57.057325 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16"} Oct 11 10:55:57.066969 master-0 kubenswrapper[4790]: I1011 10:55:57.066885 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:57.086449 master-0 kubenswrapper[4790]: I1011 10:55:57.086333 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:57.087776 master-0 
kubenswrapper[4790]: E1011 10:55:57.086804 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-api" Oct 11 10:55:57.087776 master-0 kubenswrapper[4790]: I1011 10:55:57.086829 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-api" Oct 11 10:55:57.087776 master-0 kubenswrapper[4790]: E1011 10:55:57.086853 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-b5802-api-log" Oct 11 10:55:57.087776 master-0 kubenswrapper[4790]: I1011 10:55:57.087128 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-b5802-api-log" Oct 11 10:55:57.090019 master-0 kubenswrapper[4790]: I1011 10:55:57.088169 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-api" Oct 11 10:55:57.090019 master-0 kubenswrapper[4790]: I1011 10:55:57.088226 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" containerName="cinder-b5802-api-log" Oct 11 10:55:57.090152 master-0 kubenswrapper[4790]: I1011 10:55:57.090119 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.093731 master-0 kubenswrapper[4790]: I1011 10:55:57.093677 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data" Oct 11 10:55:57.093979 master-0 kubenswrapper[4790]: I1011 10:55:57.093939 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 11 10:55:57.094661 master-0 kubenswrapper[4790]: I1011 10:55:57.094294 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 11 10:55:57.096681 master-0 kubenswrapper[4790]: I1011 10:55:57.096611 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:57.201589 master-2 kubenswrapper[4776]: I1011 10:55:57.199987 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.207911 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-internal-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208107 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-scripts\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208209 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2b61e18b-6985-48e4-96d6-880b0c497e66-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208323 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-public-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208353 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vf2\" (UniqueName: \"kubernetes.io/projected/2b61e18b-6985-48e4-96d6-880b0c497e66-kube-api-access-z9vf2\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208434 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208493 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208525 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2b61e18b-6985-48e4-96d6-880b0c497e66-logs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.210289 master-0 kubenswrapper[4790]: I1011 10:55:57.208589 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.285931 master-2 kubenswrapper[4776]: I1011 10:55:57.285862 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286100 master-2 kubenswrapper[4776]: I1011 10:55:57.285950 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286100 master-2 kubenswrapper[4776]: I1011 10:55:57.286042 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286183 master-2 kubenswrapper[4776]: I1011 10:55:57.286153 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod 
\"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.289108 master-2 kubenswrapper[4776]: I1011 10:55:57.289049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj" (OuterVolumeSpecName: "kube-api-access-kschj") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "kube-api-access-kschj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:57.292065 master-2 kubenswrapper[4776]: I1011 10:55:57.292020 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.311231 master-0 kubenswrapper[4790]: I1011 10:55:57.311131 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-internal-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311247 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-scripts\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311304 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2b61e18b-6985-48e4-96d6-880b0c497e66-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311354 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-public-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311378 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vf2\" (UniqueName: \"kubernetes.io/projected/2b61e18b-6985-48e4-96d6-880b0c497e66-kube-api-access-z9vf2\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311410 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311447 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311464 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b61e18b-6985-48e4-96d6-880b0c497e66-logs\") pod 
\"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.311553 master-0 kubenswrapper[4790]: I1011 10:55:57.311485 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.314832 master-0 kubenswrapper[4790]: I1011 10:55:57.314504 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b61e18b-6985-48e4-96d6-880b0c497e66-etc-machine-id\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.315445 master-2 kubenswrapper[4776]: I1011 10:55:57.315328 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.317057 master-0 kubenswrapper[4790]: I1011 10:55:57.316947 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b61e18b-6985-48e4-96d6-880b0c497e66-logs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.317900 master-0 kubenswrapper[4790]: I1011 10:55:57.317853 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-combined-ca-bundle\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.318106 master-0 kubenswrapper[4790]: I1011 10:55:57.318057 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-internal-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.318243 master-0 kubenswrapper[4790]: I1011 10:55:57.318184 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.318950 master-0 kubenswrapper[4790]: I1011 10:55:57.318920 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-config-data-custom\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.319588 master-0 kubenswrapper[4790]: I1011 10:55:57.319547 4790 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-scripts\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.323055 master-0 kubenswrapper[4790]: I1011 10:55:57.322186 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b61e18b-6985-48e4-96d6-880b0c497e66-public-tls-certs\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.334276 master-2 kubenswrapper[4776]: I1011 10:55:57.334224 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data" (OuterVolumeSpecName: "config-data") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.334446 master-0 kubenswrapper[4790]: I1011 10:55:57.334028 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vf2\" (UniqueName: \"kubernetes.io/projected/2b61e18b-6985-48e4-96d6-880b0c497e66-kube-api-access-z9vf2\") pod \"cinder-b5802-api-1\" (UID: \"2b61e18b-6985-48e4-96d6-880b0c497e66\") " pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387769 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387810 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387820 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387828 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.465486 master-0 kubenswrapper[4790]: I1011 10:55:57.465407 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-1" Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466510 4771 generic.go:334] "Generic (PLEG): container finished" podID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerID="b6b58d0d02154c8b742871143df315c38b23b17f3fced652bf59c6663b8ca178" exitCode=0 Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466571 4771 generic.go:334] "Generic (PLEG): container finished" podID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerID="dea9e948550286a77692eb6be22395f88e3b28cb639cfe7f80a137916ddb5ac1" exitCode=2 Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466580 4771 generic.go:334] "Generic (PLEG): container finished" podID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerID="e699798eb3d463b27b94ec4cd6a4bacb981768b5034628ba7f0e14e667c53445" exitCode=0 Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466589 4771 generic.go:334] "Generic (PLEG): container finished" podID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerID="927b471b1fa44ac0453124772c1c315e44a9bd395c3c5d07c01293a49d70a8ef" exitCode=0 Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466628 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerDied","Data":"b6b58d0d02154c8b742871143df315c38b23b17f3fced652bf59c6663b8ca178"} Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466657 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerDied","Data":"dea9e948550286a77692eb6be22395f88e3b28cb639cfe7f80a137916ddb5ac1"} Oct 11 10:55:57.466677 master-1 kubenswrapper[4771]: I1011 10:55:57.466668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerDied","Data":"e699798eb3d463b27b94ec4cd6a4bacb981768b5034628ba7f0e14e667c53445"} Oct 11 10:55:57.467158 master-1 kubenswrapper[4771]: I1011 10:55:57.466700 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerDied","Data":"927b471b1fa44ac0453124772c1c315e44a9bd395c3c5d07c01293a49d70a8ef"} Oct 11 10:55:57.935511 master-0 kubenswrapper[4790]: W1011 10:55:57.935444 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b61e18b_6985_48e4_96d6_880b0c497e66.slice/crio-d4f8b9fb6cd0677010930ff89f8578c0b2a6fc98f1343e529253bcbfc7307e63 WatchSource:0}: Error finding container d4f8b9fb6cd0677010930ff89f8578c0b2a6fc98f1343e529253bcbfc7307e63: Status 404 returned error can't find the container with id d4f8b9fb6cd0677010930ff89f8578c0b2a6fc98f1343e529253bcbfc7307e63 Oct 11 10:55:57.941969 master-0 kubenswrapper[4790]: I1011 10:55:57.941888 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-1"] Oct 11 10:55:58.011644 master-0 kubenswrapper[4790]: I1011 10:55:58.011578 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"2b61e18b-6985-48e4-96d6-880b0c497e66","Type":"ContainerStarted","Data":"d4f8b9fb6cd0677010930ff89f8578c0b2a6fc98f1343e529253bcbfc7307e63"} Oct 11 10:55:58.014007 master-2 kubenswrapper[4776]: I1011 10:55:58.013954 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:58.015758 master-2 kubenswrapper[4776]: I1011 10:55:58.015738 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:58.069915 master-2 kubenswrapper[4776]: I1011 10:55:58.069859 4776 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:58.078615 master-2 kubenswrapper[4776]: I1011 10:55:58.078540 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerDied","Data":"85ac756e7a970e46ccdb81506b9b8549165f9c0b853da21e24277ff1af233582"} Oct 11 10:55:58.078615 master-2 kubenswrapper[4776]: I1011 10:55:58.078598 4776 scope.go:117] "RemoveContainer" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" Oct 11 10:55:58.235210 master-1 kubenswrapper[4771]: I1011 10:55:58.235133 4771 scope.go:117] "RemoveContainer" containerID="f4caf5c874767c7fd75c2e84ff37e1b5c988f50fb776ae2062994f2e951ecc23" Oct 11 10:55:58.308513 master-0 kubenswrapper[4790]: I1011 10:55:58.308457 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c351cb-246a-4343-a56d-c74fb4be119e" path="/var/lib/kubelet/pods/30c351cb-246a-4343-a56d-c74fb4be119e/volumes" Oct 11 10:55:58.321609 master-2 kubenswrapper[4776]: I1011 10:55:58.321477 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:58.377631 master-2 kubenswrapper[4776]: I1011 10:55:58.377544 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:58.377775 master-1 kubenswrapper[4771]: I1011 10:55:58.376597 4771 scope.go:117] "RemoveContainer" containerID="e046736cf54a6f375a2d21055bc37323ff6d218a499c8b0059aa035f5e4d1a0c" Oct 11 10:55:58.481095 master-0 kubenswrapper[4790]: I1011 10:55:58.481046 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:55:58.481410 master-0 kubenswrapper[4790]: I1011 10:55:58.481378 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-1" 
podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-log" containerID="cri-o://1980b844ffaf3f4ae914e07f575c0c012ae970754f7fcb5bc4e59940912228a6" gracePeriod=30 Oct 11 10:55:58.481579 master-0 kubenswrapper[4790]: I1011 10:55:58.481539 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-1" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-httpd" containerID="cri-o://b351fbb5c1ba1d71d14296c2d406aca09cc6016d89d1300e0ba6b0d00727939f" gracePeriod=30 Oct 11 10:55:58.659418 master-1 kubenswrapper[4771]: I1011 10:55:58.657481 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:58.734731 master-1 kubenswrapper[4771]: I1011 10:55:58.734669 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.734966 master-1 kubenswrapper[4771]: I1011 10:55:58.734951 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.735795 master-1 kubenswrapper[4771]: I1011 10:55:58.735761 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.735844 master-1 kubenswrapper[4771]: I1011 10:55:58.735829 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl7xd\" 
(UniqueName: \"kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.736947 master-1 kubenswrapper[4771]: I1011 10:55:58.736919 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:58.737048 master-1 kubenswrapper[4771]: I1011 10:55:58.736960 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.737048 master-1 kubenswrapper[4771]: I1011 10:55:58.737017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.738107 master-1 kubenswrapper[4771]: I1011 10:55:58.738083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:58.739461 master-1 kubenswrapper[4771]: I1011 10:55:58.739380 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts" (OuterVolumeSpecName: "scripts") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:58.739645 master-1 kubenswrapper[4771]: I1011 10:55:58.737177 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle\") pod \"0f6abca2-1aea-4da7-88aa-1d7651959165\" (UID: \"0f6abca2-1aea-4da7-88aa-1d7651959165\") " Oct 11 10:55:58.740136 master-1 kubenswrapper[4771]: I1011 10:55:58.740094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd" (OuterVolumeSpecName: "kube-api-access-pl7xd") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "kube-api-access-pl7xd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:58.743767 master-1 kubenswrapper[4771]: I1011 10:55:58.743700 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.743867 master-1 kubenswrapper[4771]: I1011 10:55:58.743794 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl7xd\" (UniqueName: \"kubernetes.io/projected/0f6abca2-1aea-4da7-88aa-1d7651959165-kube-api-access-pl7xd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.743867 master-1 kubenswrapper[4771]: I1011 10:55:58.743816 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.743867 master-1 kubenswrapper[4771]: I1011 10:55:58.743830 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6abca2-1aea-4da7-88aa-1d7651959165-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.763202 master-1 kubenswrapper[4771]: I1011 10:55:58.763096 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:58.783863 master-1 kubenswrapper[4771]: I1011 10:55:58.783120 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:55:58.792402 master-1 kubenswrapper[4771]: W1011 10:55:58.792326 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe WatchSource:0}: Error finding container 2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe: Status 404 returned error can't find the container with id 2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe Oct 11 10:55:58.822518 master-1 kubenswrapper[4771]: I1011 10:55:58.822461 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:58.828803 master-1 kubenswrapper[4771]: I1011 10:55:58.828732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data" (OuterVolumeSpecName: "config-data") pod "0f6abca2-1aea-4da7-88aa-1d7651959165" (UID: "0f6abca2-1aea-4da7-88aa-1d7651959165"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:58.846643 master-1 kubenswrapper[4771]: I1011 10:55:58.846592 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.846755 master-1 kubenswrapper[4771]: I1011 10:55:58.846650 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:58.846755 master-1 kubenswrapper[4771]: I1011 10:55:58.846665 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6abca2-1aea-4da7-88aa-1d7651959165-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:55:59.034068 master-0 kubenswrapper[4790]: I1011 10:55:59.034009 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"2b61e18b-6985-48e4-96d6-880b0c497e66","Type":"ContainerStarted","Data":"60172d0913fc2a88f9aab061db4aca93de54ff4b7e2c4c948541249d981cee3f"} Oct 11 10:55:59.040991 master-0 kubenswrapper[4790]: I1011 10:55:59.040842 4790 generic.go:334] "Generic (PLEG): container finished" podID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerID="1980b844ffaf3f4ae914e07f575c0c012ae970754f7fcb5bc4e59940912228a6" exitCode=143 Oct 11 10:55:59.040991 master-0 kubenswrapper[4790]: I1011 10:55:59.040935 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerDied","Data":"1980b844ffaf3f4ae914e07f575c0c012ae970754f7fcb5bc4e59940912228a6"} Oct 11 10:55:59.041166 master-0 kubenswrapper[4790]: I1011 10:55:59.041001 4790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 
10:55:59.041166 master-0 kubenswrapper[4790]: I1011 10:55:59.041013 4790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:55:59.048635 master-1 kubenswrapper[4771]: E1011 10:55:59.048011 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:55:59.236434 master-0 kubenswrapper[4790]: I1011 10:55:59.236377 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:59.236847 master-0 kubenswrapper[4790]: I1011 10:55:59.236809 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-2" Oct 11 10:55:59.376883 master-1 kubenswrapper[4771]: I1011 10:55:59.350499 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:55:59.376883 master-1 kubenswrapper[4771]: I1011 10:55:59.350838 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-1" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-log" containerID="cri-o://9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e" gracePeriod=30 Oct 11 10:55:59.376883 master-1 kubenswrapper[4771]: I1011 10:55:59.350968 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-1" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-httpd" containerID="cri-o://20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57" gracePeriod=30 Oct 11 10:55:59.505662 master-1 kubenswrapper[4771]: I1011 10:55:59.505108 4771 
generic.go:334] "Generic (PLEG): container finished" podID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerID="57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346" exitCode=0 Oct 11 10:55:59.505662 master-1 kubenswrapper[4771]: I1011 10:55:59.505212 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerDied","Data":"57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346"} Oct 11 10:55:59.505662 master-1 kubenswrapper[4771]: I1011 10:55:59.505259 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerStarted","Data":"2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe"} Oct 11 10:55:59.511109 master-1 kubenswrapper[4771]: I1011 10:55:59.511061 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6abca2-1aea-4da7-88aa-1d7651959165","Type":"ContainerDied","Data":"84ce7b87c3fe4abde506e09dd4965353c30550d9eae5707b3cd6ecad602405a9"} Oct 11 10:55:59.511220 master-1 kubenswrapper[4771]: I1011 10:55:59.511125 4771 scope.go:117] "RemoveContainer" containerID="b6b58d0d02154c8b742871143df315c38b23b17f3fced652bf59c6663b8ca178" Oct 11 10:55:59.511389 master-1 kubenswrapper[4771]: I1011 10:55:59.511312 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:59.526723 master-1 kubenswrapper[4771]: I1011 10:55:59.526668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chcrd" event={"ID":"38267a66-0ebd-44ab-bc7f-cd5703503b74","Type":"ContainerStarted","Data":"5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed"} Oct 11 10:55:59.550851 master-1 kubenswrapper[4771]: I1011 10:55:59.550821 4771 scope.go:117] "RemoveContainer" containerID="dea9e948550286a77692eb6be22395f88e3b28cb639cfe7f80a137916ddb5ac1" Oct 11 10:55:59.565712 master-1 kubenswrapper[4771]: I1011 10:55:59.565552 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:59.573573 master-1 kubenswrapper[4771]: I1011 10:55:59.573500 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:59.575442 master-1 kubenswrapper[4771]: I1011 10:55:59.575417 4771 scope.go:117] "RemoveContainer" containerID="e699798eb3d463b27b94ec4cd6a4bacb981768b5034628ba7f0e14e667c53445" Oct 11 10:55:59.593237 master-1 kubenswrapper[4771]: I1011 10:55:59.593108 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-chcrd" podStartSLOduration=3.954474251 podStartE2EDuration="12.593083317s" podCreationTimestamp="2025-10-11 10:55:47 +0000 UTC" firstStartedPulling="2025-10-11 10:55:49.632283683 +0000 UTC m=+1781.606510124" lastFinishedPulling="2025-10-11 10:55:58.270892749 +0000 UTC m=+1790.245119190" observedRunningTime="2025-10-11 10:55:59.590065229 +0000 UTC m=+1791.564291680" watchObservedRunningTime="2025-10-11 10:55:59.593083317 +0000 UTC m=+1791.567309768" Oct 11 10:55:59.606772 master-1 kubenswrapper[4771]: I1011 10:55:59.606713 4771 scope.go:117] "RemoveContainer" containerID="927b471b1fa44ac0453124772c1c315e44a9bd395c3c5d07c01293a49d70a8ef" Oct 11 10:55:59.610535 master-1 kubenswrapper[4771]: I1011 10:55:59.610497 
4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:59.610938 master-1 kubenswrapper[4771]: E1011 10:55:59.610909 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-notification-agent" Oct 11 10:55:59.610938 master-1 kubenswrapper[4771]: I1011 10:55:59.610934 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-notification-agent" Oct 11 10:55:59.611037 master-1 kubenswrapper[4771]: E1011 10:55:59.610988 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-central-agent" Oct 11 10:55:59.611037 master-1 kubenswrapper[4771]: I1011 10:55:59.611000 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-central-agent" Oct 11 10:55:59.611037 master-1 kubenswrapper[4771]: E1011 10:55:59.611028 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="proxy-httpd" Oct 11 10:55:59.611162 master-1 kubenswrapper[4771]: I1011 10:55:59.611039 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="proxy-httpd" Oct 11 10:55:59.611162 master-1 kubenswrapper[4771]: E1011 10:55:59.611068 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="sg-core" Oct 11 10:55:59.611162 master-1 kubenswrapper[4771]: I1011 10:55:59.611078 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="sg-core" Oct 11 10:55:59.611325 master-1 kubenswrapper[4771]: I1011 10:55:59.611300 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" 
containerName="ceilometer-central-agent" Oct 11 10:55:59.611410 master-1 kubenswrapper[4771]: I1011 10:55:59.611342 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="sg-core" Oct 11 10:55:59.611410 master-1 kubenswrapper[4771]: I1011 10:55:59.611374 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="proxy-httpd" Oct 11 10:55:59.611410 master-1 kubenswrapper[4771]: I1011 10:55:59.611393 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" containerName="ceilometer-notification-agent" Oct 11 10:55:59.613382 master-1 kubenswrapper[4771]: I1011 10:55:59.613344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:55:59.618517 master-1 kubenswrapper[4771]: I1011 10:55:59.618490 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:55:59.618592 master-1 kubenswrapper[4771]: I1011 10:55:59.618523 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:55:59.628152 master-1 kubenswrapper[4771]: I1011 10:55:59.628093 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:55:59.670698 master-1 kubenswrapper[4771]: I1011 10:55:59.670620 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671043 master-1 kubenswrapper[4771]: I1011 10:55:59.670741 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671043 master-1 kubenswrapper[4771]: I1011 10:55:59.670769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dsl\" (UniqueName: \"kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671043 master-1 kubenswrapper[4771]: I1011 10:55:59.670797 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671043 master-1 kubenswrapper[4771]: I1011 10:55:59.670848 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671236 master-1 kubenswrapper[4771]: I1011 10:55:59.671076 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.671341 master-1 kubenswrapper[4771]: I1011 10:55:59.671285 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.773704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.773819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.773855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.773895 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.773976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 
10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.774047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.774844 master-1 kubenswrapper[4771]: I1011 10:55:59.774075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dsl\" (UniqueName: \"kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.775375 master-1 kubenswrapper[4771]: I1011 10:55:59.774838 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.775375 master-1 kubenswrapper[4771]: I1011 10:55:59.775159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.777948 master-1 kubenswrapper[4771]: I1011 10:55:59.777902 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.778342 master-1 kubenswrapper[4771]: I1011 10:55:59.778311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.779387 master-1 kubenswrapper[4771]: I1011 10:55:59.779348 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.800190 master-1 kubenswrapper[4771]: I1011 10:55:59.800140 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dsl\" (UniqueName: \"kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.806220 master-1 kubenswrapper[4771]: I1011 10:55:59.806183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " pod="openstack/ceilometer-0" Oct 11 10:55:59.933669 master-1 kubenswrapper[4771]: I1011 10:55:59.933494 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:00.057982 master-0 kubenswrapper[4790]: I1011 10:56:00.057881 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-1" event={"ID":"2b61e18b-6985-48e4-96d6-880b0c497e66","Type":"ContainerStarted","Data":"87fe0bfeba1f22ca991252d7932d9baf470ee6ea4292baa531184e04779f043e"} Oct 11 10:56:00.059001 master-0 kubenswrapper[4790]: I1011 10:56:00.058024 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-1" Oct 11 10:56:00.068647 master-2 kubenswrapper[4776]: I1011 10:56:00.068575 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" path="/var/lib/kubelet/pods/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49/volumes" Oct 11 10:56:00.099841 master-0 kubenswrapper[4790]: I1011 10:56:00.099720 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-1" podStartSLOduration=3.099681135 podStartE2EDuration="3.099681135s" podCreationTimestamp="2025-10-11 10:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:00.09404314 +0000 UTC m=+1036.648503442" watchObservedRunningTime="2025-10-11 10:56:00.099681135 +0000 UTC m=+1036.654141427" Oct 11 10:56:00.409884 master-1 kubenswrapper[4771]: I1011 10:56:00.409831 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:00.415031 master-1 kubenswrapper[4771]: W1011 10:56:00.414945 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0 WatchSource:0}: Error finding container c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0: Status 404 returned error 
can't find the container with id c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0 Oct 11 10:56:00.450576 master-1 kubenswrapper[4771]: I1011 10:56:00.450505 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6abca2-1aea-4da7-88aa-1d7651959165" path="/var/lib/kubelet/pods/0f6abca2-1aea-4da7-88aa-1d7651959165/volumes" Oct 11 10:56:00.537709 master-1 kubenswrapper[4771]: I1011 10:56:00.537592 4771 generic.go:334] "Generic (PLEG): container finished" podID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerID="9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e" exitCode=143 Oct 11 10:56:00.537709 master-1 kubenswrapper[4771]: I1011 10:56:00.537656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerDied","Data":"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"} Oct 11 10:56:00.539637 master-1 kubenswrapper[4771]: I1011 10:56:00.539609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerStarted","Data":"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded"} Oct 11 10:56:00.558515 master-1 kubenswrapper[4771]: I1011 10:56:00.551555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerStarted","Data":"c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0"} Oct 11 10:56:01.101087 master-2 kubenswrapper[4776]: I1011 10:56:01.101026 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16" exitCode=0 Oct 11 10:56:01.101087 master-2 kubenswrapper[4776]: I1011 10:56:01.101070 4776 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16"} Oct 11 10:56:01.438393 master-1 kubenswrapper[4771]: I1011 10:56:01.438298 4771 scope.go:117] "RemoveContainer" containerID="d389be566649d008a3e09b2815ff87839b176862458587a6a08bb0703f09d204" Oct 11 10:56:01.565654 master-1 kubenswrapper[4771]: I1011 10:56:01.565584 4771 generic.go:334] "Generic (PLEG): container finished" podID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerID="f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded" exitCode=0 Oct 11 10:56:01.565940 master-1 kubenswrapper[4771]: I1011 10:56:01.565710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerDied","Data":"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded"} Oct 11 10:56:01.568582 master-1 kubenswrapper[4771]: I1011 10:56:01.568511 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerStarted","Data":"2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536"} Oct 11 10:56:02.079990 master-0 kubenswrapper[4790]: I1011 10:56:02.079915 4790 generic.go:334] "Generic (PLEG): container finished" podID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerID="b351fbb5c1ba1d71d14296c2d406aca09cc6016d89d1300e0ba6b0d00727939f" exitCode=0 Oct 11 10:56:02.080591 master-0 kubenswrapper[4790]: I1011 10:56:02.080010 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerDied","Data":"b351fbb5c1ba1d71d14296c2d406aca09cc6016d89d1300e0ba6b0d00727939f"} Oct 11 10:56:02.197513 master-0 kubenswrapper[4790]: I1011 10:56:02.197290 4790 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:02.323102 master-0 kubenswrapper[4790]: I1011 10:56:02.323024 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.323550 master-0 kubenswrapper[4790]: I1011 10:56:02.323525 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpbtf\" (UniqueName: \"kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.323804 master-0 kubenswrapper[4790]: I1011 10:56:02.323788 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.323938 master-0 kubenswrapper[4790]: I1011 10:56:02.323922 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.324028 master-0 kubenswrapper[4790]: I1011 10:56:02.324016 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.324151 master-0 kubenswrapper[4790]: I1011 
10:56:02.324137 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.324336 master-0 kubenswrapper[4790]: I1011 10:56:02.324320 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.324491 master-0 kubenswrapper[4790]: I1011 10:56:02.324433 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs" (OuterVolumeSpecName: "logs") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:02.324647 master-0 kubenswrapper[4790]: I1011 10:56:02.324633 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"3166d4fc-8488-46dc-9d63-87dc403f66bc\" (UID: \"3166d4fc-8488-46dc-9d63-87dc403f66bc\") " Oct 11 10:56:02.325074 master-0 kubenswrapper[4790]: I1011 10:56:02.325043 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:02.325375 master-0 kubenswrapper[4790]: I1011 10:56:02.325336 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-logs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.325455 master-0 kubenswrapper[4790]: I1011 10:56:02.325443 4790 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3166d4fc-8488-46dc-9d63-87dc403f66bc-httpd-run\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.329813 master-0 kubenswrapper[4790]: I1011 10:56:02.329779 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf" (OuterVolumeSpecName: "kube-api-access-vpbtf") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "kube-api-access-vpbtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:02.335538 master-0 kubenswrapper[4790]: I1011 10:56:02.335435 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts" (OuterVolumeSpecName: "scripts") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:02.355002 master-0 kubenswrapper[4790]: I1011 10:56:02.354915 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7" (OuterVolumeSpecName: "glance") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "pvc-c7212717-18be-4287-9071-f6f818672815". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:56:02.360611 master-0 kubenswrapper[4790]: I1011 10:56:02.360422 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:02.369938 master-0 kubenswrapper[4790]: I1011 10:56:02.369860 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data" (OuterVolumeSpecName: "config-data") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:02.404738 master-0 kubenswrapper[4790]: I1011 10:56:02.404618 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3166d4fc-8488-46dc-9d63-87dc403f66bc" (UID: "3166d4fc-8488-46dc-9d63-87dc403f66bc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:02.428042 master-0 kubenswrapper[4790]: I1011 10:56:02.427988 4790 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-scripts\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.428133 master-0 kubenswrapper[4790]: I1011 10:56:02.428075 4790 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") on node \"master-0\" " Oct 11 10:56:02.428133 master-0 kubenswrapper[4790]: I1011 10:56:02.428098 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.428133 master-0 kubenswrapper[4790]: I1011 10:56:02.428113 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpbtf\" (UniqueName: \"kubernetes.io/projected/3166d4fc-8488-46dc-9d63-87dc403f66bc-kube-api-access-vpbtf\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.428133 master-0 kubenswrapper[4790]: I1011 10:56:02.428130 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.428311 master-0 kubenswrapper[4790]: I1011 10:56:02.428143 4790 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3166d4fc-8488-46dc-9d63-87dc403f66bc-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.447102 master-0 kubenswrapper[4790]: I1011 10:56:02.447029 4790 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 11 10:56:02.447404 master-0 kubenswrapper[4790]: I1011 10:56:02.447284 4790 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c7212717-18be-4287-9071-f6f818672815" (UniqueName: "kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7") on node "master-0" Oct 11 10:56:02.529298 master-1 kubenswrapper[4771]: I1011 10:56:02.529210 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-1" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.129.0.130:9292/healthcheck\": read tcp 10.129.0.2:35066->10.129.0.130:9292: read: connection reset by peer" Oct 11 10:56:02.529298 master-1 kubenswrapper[4771]: I1011 10:56:02.529264 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-1" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-log" probeResult="failure" output="Get \"https://10.129.0.130:9292/healthcheck\": read tcp 10.129.0.2:35072->10.129.0.130:9292: read: connection reset by peer" Oct 11 10:56:02.530757 master-0 kubenswrapper[4790]: I1011 10:56:02.530544 4790 reconciler_common.go:293] "Volume detached for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") on node \"master-0\" DevicePath \"\"" Oct 11 10:56:02.588679 master-1 kubenswrapper[4771]: I1011 10:56:02.588621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerStarted","Data":"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53"} Oct 11 10:56:02.597329 master-1 kubenswrapper[4771]: I1011 10:56:02.597073 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" 
event={"ID":"c6af8eba-f8bf-47f6-8313-7a902aeb170f","Type":"ContainerStarted","Data":"706cb456d30744ac062a8bfb63467464baef222596394937d45b40e61aa97f06"} Oct 11 10:56:02.598528 master-1 kubenswrapper[4771]: I1011 10:56:02.598476 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr" Oct 11 10:56:02.611955 master-1 kubenswrapper[4771]: I1011 10:56:02.611812 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerStarted","Data":"6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111"} Oct 11 10:56:02.634864 master-1 kubenswrapper[4771]: I1011 10:56:02.634707 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4bbqs" podStartSLOduration=7.177732426 podStartE2EDuration="9.634671788s" podCreationTimestamp="2025-10-11 10:55:53 +0000 UTC" firstStartedPulling="2025-10-11 10:55:59.517524334 +0000 UTC m=+1791.491750775" lastFinishedPulling="2025-10-11 10:56:01.974463676 +0000 UTC m=+1793.948690137" observedRunningTime="2025-10-11 10:56:02.617268751 +0000 UTC m=+1794.591495192" watchObservedRunningTime="2025-10-11 10:56:02.634671788 +0000 UTC m=+1794.608898229" Oct 11 10:56:03.056981 master-1 kubenswrapper[4771]: I1011 10:56:03.056917 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:03.099980 master-0 kubenswrapper[4790]: I1011 10:56:03.099825 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"3166d4fc-8488-46dc-9d63-87dc403f66bc","Type":"ContainerDied","Data":"f242a619a098bee9251349acb03ad40745b6b14dcdda08d9b62f04ce2b3b042e"} Oct 11 10:56:03.099980 master-0 kubenswrapper[4790]: I1011 10:56:03.099945 4790 scope.go:117] "RemoveContainer" containerID="b351fbb5c1ba1d71d14296c2d406aca09cc6016d89d1300e0ba6b0d00727939f" Oct 11 10:56:03.099980 master-0 kubenswrapper[4790]: I1011 10:56:03.099965 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:03.131062 master-0 kubenswrapper[4790]: I1011 10:56:03.131002 4790 scope.go:117] "RemoveContainer" containerID="1980b844ffaf3f4ae914e07f575c0c012ae970754f7fcb5bc4e59940912228a6" Oct 11 10:56:03.152462 master-0 kubenswrapper[4790]: I1011 10:56:03.151764 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:56:03.158579 master-0 kubenswrapper[4790]: I1011 10:56:03.158515 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-1"] Oct 11 10:56:03.176596 master-1 kubenswrapper[4771]: I1011 10:56:03.176390 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.177970 master-1 kubenswrapper[4771]: I1011 10:56:03.177132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g2z9\" (UniqueName: 
\"kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.177970 master-1 kubenswrapper[4771]: I1011 10:56:03.177239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.177970 master-1 kubenswrapper[4771]: I1011 10:56:03.177910 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.178124 master-1 kubenswrapper[4771]: I1011 10:56:03.178052 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.178294 master-1 kubenswrapper[4771]: I1011 10:56:03.178210 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.179203 master-1 kubenswrapper[4771]: I1011 10:56:03.178345 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") " Oct 11 10:56:03.179203 master-1 
kubenswrapper[4771]: I1011 10:56:03.178414 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs\") pod \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\" (UID: \"2deabbe8-397d-495c-aef9-afe91b4e9eeb\") "
Oct 11 10:56:03.179203 master-1 kubenswrapper[4771]: I1011 10:56:03.179156 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs" (OuterVolumeSpecName: "logs") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:03.179488 master-1 kubenswrapper[4771]: I1011 10:56:03.179423 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:03.180109 master-1 kubenswrapper[4771]: I1011 10:56:03.179451 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-logs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.180519 master-1 kubenswrapper[4771]: I1011 10:56:03.180376 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9" (OuterVolumeSpecName: "kube-api-access-2g2z9") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "kube-api-access-2g2z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:03.182332 master-1 kubenswrapper[4771]: I1011 10:56:03.182247 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts" (OuterVolumeSpecName: "scripts") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:03.183851 master-0 kubenswrapper[4790]: I1011 10:56:03.183792 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:56:03.184538 master-0 kubenswrapper[4790]: E1011 10:56:03.184521 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-log"
Oct 11 10:56:03.184616 master-0 kubenswrapper[4790]: I1011 10:56:03.184606 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-log"
Oct 11 10:56:03.184680 master-0 kubenswrapper[4790]: E1011 10:56:03.184670 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-httpd"
Oct 11 10:56:03.184756 master-0 kubenswrapper[4790]: I1011 10:56:03.184744 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-httpd"
Oct 11 10:56:03.185015 master-0 kubenswrapper[4790]: I1011 10:56:03.184998 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-log"
Oct 11 10:56:03.185101 master-0 kubenswrapper[4790]: I1011 10:56:03.185091 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" containerName="glance-httpd"
Oct 11 10:56:03.186184 master-0 kubenswrapper[4790]: I1011 10:56:03.186167 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.189487 master-0 kubenswrapper[4790]: I1011 10:56:03.189362 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data"
Oct 11 10:56:03.190620 master-0 kubenswrapper[4790]: I1011 10:56:03.190220 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 11 10:56:03.201092 master-1 kubenswrapper[4771]: I1011 10:56:03.201001 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37" (OuterVolumeSpecName: "glance") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 11 10:56:03.207470 master-1 kubenswrapper[4771]: I1011 10:56:03.207396 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:03.234030 master-1 kubenswrapper[4771]: I1011 10:56:03.233951 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data" (OuterVolumeSpecName: "config-data") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:03.239410 master-0 kubenswrapper[4790]: I1011 10:56:03.239246 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:56:03.241496 master-1 kubenswrapper[4771]: I1011 10:56:03.241423 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2deabbe8-397d-495c-aef9-afe91b4e9eeb" (UID: "2deabbe8-397d-495c-aef9-afe91b4e9eeb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:03.254242 master-0 kubenswrapper[4790]: I1011 10:56:03.254127 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.254242 master-0 kubenswrapper[4790]: I1011 10:56:03.254203 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.254566 master-0 kubenswrapper[4790]: I1011 10:56:03.254257 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.254863 master-0 kubenswrapper[4790]: I1011 10:56:03.254306 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkccz\" (UniqueName: \"kubernetes.io/projected/76b7c4b6-c727-4201-9627-23a06e9ae7ea-kube-api-access-fkccz\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.254997 master-0 kubenswrapper[4790]: I1011 10:56:03.254946 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.254997 master-0 kubenswrapper[4790]: I1011 10:56:03.254991 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.255098 master-0 kubenswrapper[4790]: I1011 10:56:03.255037 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.255185 master-0 kubenswrapper[4790]: I1011 10:56:03.255163 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.285682 master-1 kubenswrapper[4771]: I1011 10:56:03.285588 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g2z9\" (UniqueName: \"kubernetes.io/projected/2deabbe8-397d-495c-aef9-afe91b4e9eeb-kube-api-access-2g2z9\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.285682 master-1 kubenswrapper[4771]: I1011 10:56:03.285685 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2deabbe8-397d-495c-aef9-afe91b4e9eeb-httpd-run\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.285682 master-1 kubenswrapper[4771]: I1011 10:56:03.285700 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.286115 master-1 kubenswrapper[4771]: I1011 10:56:03.285768 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") on node \"master-1\" "
Oct 11 10:56:03.286115 master-1 kubenswrapper[4771]: I1011 10:56:03.285782 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.286115 master-1 kubenswrapper[4771]: I1011 10:56:03.285792 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-internal-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.286115 master-1 kubenswrapper[4771]: I1011 10:56:03.285805 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deabbe8-397d-495c-aef9-afe91b4e9eeb-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.306668 master-1 kubenswrapper[4771]: I1011 10:56:03.306597 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 11 10:56:03.306966 master-1 kubenswrapper[4771]: I1011 10:56:03.306935 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76" (UniqueName: "kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37") on node "master-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356622 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356693 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356760 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356804 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356827 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356855 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.356860 master-0 kubenswrapper[4790]: I1011 10:56:03.356883 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkccz\" (UniqueName: \"kubernetes.io/projected/76b7c4b6-c727-4201-9627-23a06e9ae7ea-kube-api-access-fkccz\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.357382 master-0 kubenswrapper[4790]: I1011 10:56:03.356926 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.357382 master-0 kubenswrapper[4790]: I1011 10:56:03.357253 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-httpd-run\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.357624 master-0 kubenswrapper[4790]: I1011 10:56:03.357593 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b7c4b6-c727-4201-9627-23a06e9ae7ea-logs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.359894 master-0 kubenswrapper[4790]: I1011 10:56:03.359861 4790 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:56:03.359978 master-0 kubenswrapper[4790]: I1011 10:56:03.359896 4790 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/95ab3ea1c73b905e55aa0f0a1e574a5056ec96dde23978388ab58fbe89465472/globalmount\"" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.361290 master-0 kubenswrapper[4790]: I1011 10:56:03.361247 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-public-tls-certs\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.362278 master-0 kubenswrapper[4790]: I1011 10:56:03.362236 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-scripts\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.362665 master-0 kubenswrapper[4790]: I1011 10:56:03.362600 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-combined-ca-bundle\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.369186 master-0 kubenswrapper[4790]: I1011 10:56:03.369137 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b7c4b6-c727-4201-9627-23a06e9ae7ea-config-data\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.377335 master-0 kubenswrapper[4790]: I1011 10:56:03.377281 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkccz\" (UniqueName: \"kubernetes.io/projected/76b7c4b6-c727-4201-9627-23a06e9ae7ea-kube-api-access-fkccz\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:03.393440 master-1 kubenswrapper[4771]: I1011 10:56:03.388803 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:03.625691 master-1 kubenswrapper[4771]: I1011 10:56:03.625602 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerStarted","Data":"dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68"}
Oct 11 10:56:03.628441 master-1 kubenswrapper[4771]: I1011 10:56:03.628400 4771 generic.go:334] "Generic (PLEG): container finished" podID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerID="20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57" exitCode=0
Oct 11 10:56:03.629668 master-1 kubenswrapper[4771]: I1011 10:56:03.629646 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerDied","Data":"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"}
Oct 11 10:56:03.629774 master-1 kubenswrapper[4771]: I1011 10:56:03.629761 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2deabbe8-397d-495c-aef9-afe91b4e9eeb","Type":"ContainerDied","Data":"f7c97ec5b3e0ca2c1b2ecfb01745e5105213a6f53e9a75590979cbcd8d5e7e3f"}
Oct 11 10:56:03.629868 master-1 kubenswrapper[4771]: I1011 10:56:03.629818 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.630151 master-1 kubenswrapper[4771]: I1011 10:56:03.629844 4771 scope.go:117] "RemoveContainer" containerID="20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"
Oct 11 10:56:03.660271 master-1 kubenswrapper[4771]: I1011 10:56:03.660226 4771 scope.go:117] "RemoveContainer" containerID="9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"
Oct 11 10:56:03.680141 master-1 kubenswrapper[4771]: I1011 10:56:03.680075 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"]
Oct 11 10:56:03.689414 master-1 kubenswrapper[4771]: I1011 10:56:03.689365 4771 scope.go:117] "RemoveContainer" containerID="20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"
Oct 11 10:56:03.689698 master-1 kubenswrapper[4771]: I1011 10:56:03.689676 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"]
Oct 11 10:56:03.690110 master-1 kubenswrapper[4771]: E1011 10:56:03.690087 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57\": container with ID starting with 20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57 not found: ID does not exist" containerID="20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"
Oct 11 10:56:03.690223 master-1 kubenswrapper[4771]: I1011 10:56:03.690195 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57"} err="failed to get container status \"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57\": rpc error: code = NotFound desc = could not find container \"20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57\": container with ID starting with 20c69b75b312956ea0de71da2fddb27d441722583daf44724022755b0dd0fa57 not found: ID does not exist"
Oct 11 10:56:03.690320 master-1 kubenswrapper[4771]: I1011 10:56:03.690305 4771 scope.go:117] "RemoveContainer" containerID="9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"
Oct 11 10:56:03.691005 master-1 kubenswrapper[4771]: E1011 10:56:03.690963 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e\": container with ID starting with 9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e not found: ID does not exist" containerID="9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"
Oct 11 10:56:03.691064 master-1 kubenswrapper[4771]: I1011 10:56:03.691023 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e"} err="failed to get container status \"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e\": rpc error: code = NotFound desc = could not find container \"9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e\": container with ID starting with 9b109114c918b22d9cdecdecec8567888df427a39c7ae80e11cc66d73a78ae7e not found: ID does not exist"
Oct 11 10:56:03.722200 master-1 kubenswrapper[4771]: I1011 10:56:03.722081 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-1"]
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: E1011 10:56:03.722530 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-log"
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: I1011 10:56:03.722545 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-log"
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: E1011 10:56:03.722560 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-httpd"
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: I1011 10:56:03.722567 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-httpd"
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: I1011 10:56:03.722736 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-log"
Oct 11 10:56:03.722928 master-1 kubenswrapper[4771]: I1011 10:56:03.722763 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" containerName="glance-httpd"
Oct 11 10:56:03.723998 master-1 kubenswrapper[4771]: I1011 10:56:03.723975 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.727379 master-1 kubenswrapper[4771]: I1011 10:56:03.727310 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data"
Oct 11 10:56:03.727648 master-1 kubenswrapper[4771]: I1011 10:56:03.727629 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Oct 11 10:56:03.742573 master-1 kubenswrapper[4771]: I1011 10:56:03.742509 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"]
Oct 11 10:56:03.797779 master-1 kubenswrapper[4771]: I1011 10:56:03.797687 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.798144 master-1 kubenswrapper[4771]: I1011 10:56:03.798032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.798321 master-1 kubenswrapper[4771]: I1011 10:56:03.798288 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.798657 master-1 kubenswrapper[4771]: I1011 10:56:03.798627 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.798980 master-1 kubenswrapper[4771]: I1011 10:56:03.798954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.799040 master-1 kubenswrapper[4771]: I1011 10:56:03.799005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.799040 master-1 kubenswrapper[4771]: I1011 10:56:03.799032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.799110 master-1 kubenswrapper[4771]: I1011 10:56:03.799087 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xdn\" (UniqueName: \"kubernetes.io/projected/2cc63350-c31a-459f-b45b-73f465e53bc5-kube-api-access-x9xdn\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.900950 master-1 kubenswrapper[4771]: I1011 10:56:03.900774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.901398 master-1 kubenswrapper[4771]: I1011 10:56:03.901348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.901565 master-1 kubenswrapper[4771]: I1011 10:56:03.901548 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.901685 master-1 kubenswrapper[4771]: I1011 10:56:03.901665 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.901813 master-1 kubenswrapper[4771]: I1011 10:56:03.901789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.901956 master-1 kubenswrapper[4771]: I1011 10:56:03.901936 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xdn\" (UniqueName: \"kubernetes.io/projected/2cc63350-c31a-459f-b45b-73f465e53bc5-kube-api-access-x9xdn\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.902110 master-1 kubenswrapper[4771]: I1011 10:56:03.902089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.902332 master-1 kubenswrapper[4771]: I1011 10:56:03.902273 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-logs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.902441 master-1 kubenswrapper[4771]: I1011 10:56:03.902405 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc63350-c31a-459f-b45b-73f465e53bc5-httpd-run\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.902512 master-1 kubenswrapper[4771]: I1011 10:56:03.902495 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.904899 master-1 kubenswrapper[4771]: I1011 10:56:03.904848 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:56:03.905005 master-1 kubenswrapper[4771]: I1011 10:56:03.904916 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/319ddbbf14dc29e9dbd7eec9a997b70a9a11c6eca7f6496495d34ea4ac3ccad0/globalmount\"" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.905939 master-1 kubenswrapper[4771]: I1011 10:56:03.905885 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-scripts\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.907769 master-1 kubenswrapper[4771]: I1011 10:56:03.907735 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-config-data\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.908467 master-1 kubenswrapper[4771]: I1011 10:56:03.908417 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.909251 master-1 kubenswrapper[4771]: I1011 10:56:03.909224 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc63350-c31a-459f-b45b-73f465e53bc5-internal-tls-certs\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:03.933642 master-1 kubenswrapper[4771]: I1011 10:56:03.933560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xdn\" (UniqueName: \"kubernetes.io/projected/2cc63350-c31a-459f-b45b-73f465e53bc5-kube-api-access-x9xdn\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:04.275665 master-1 kubenswrapper[4771]: I1011 10:56:04.275600 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4bbqs"
Oct 11 10:56:04.275830 master-1 kubenswrapper[4771]: I1011 10:56:04.275801 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4bbqs"
Oct 11 10:56:04.306273 master-0 kubenswrapper[4790]: I1011 10:56:04.306205 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3166d4fc-8488-46dc-9d63-87dc403f66bc" path="/var/lib/kubelet/pods/3166d4fc-8488-46dc-9d63-87dc403f66bc/volumes"
Oct 11 10:56:04.451869 master-1 kubenswrapper[4771]: I1011 10:56:04.451709 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2deabbe8-397d-495c-aef9-afe91b4e9eeb" path="/var/lib/kubelet/pods/2deabbe8-397d-495c-aef9-afe91b4e9eeb/volumes"
Oct 11 10:56:04.642679 master-1 kubenswrapper[4771]: I1011 10:56:04.642623 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerStarted","Data":"505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe"}
Oct 11 10:56:04.643199 master-1 kubenswrapper[4771]: I1011 10:56:04.642930 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 11 10:56:04.687378 master-1 kubenswrapper[4771]: I1011 10:56:04.686309 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.076279791 podStartE2EDuration="5.686280709s" podCreationTimestamp="2025-10-11 10:55:59 +0000 UTC" firstStartedPulling="2025-10-11 10:56:00.417989474 +0000 UTC m=+1792.392215915" lastFinishedPulling="2025-10-11 10:56:04.027990392 +0000 UTC m=+1796.002216833" observedRunningTime="2025-10-11 10:56:04.675044252 +0000 UTC m=+1796.649270753" watchObservedRunningTime="2025-10-11 10:56:04.686280709 +0000 UTC m=+1796.660507190"
Oct 11 10:56:04.834420 master-0 kubenswrapper[4790]: I1011 10:56:04.834359 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7212717-18be-4287-9071-f6f818672815\" (UniqueName: \"kubernetes.io/csi/topolvm.io^adbf5277-58ef-49f6-8f5f-4a3a5350d8b7\") pod \"glance-b5802-default-external-api-1\" (UID: \"76b7c4b6-c727-4201-9627-23a06e9ae7ea\") " pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:05.041490 master-0 kubenswrapper[4790]: I1011 10:56:05.039998 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-1"
Oct 11 10:56:05.088286 master-1 kubenswrapper[4771]: I1011 10:56:05.088187 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6ba1d46-4dc8-46c7-b2c2-511158bb8b76\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed627bc4-642c-44b9-a346-0d8fa1903b37\") pod \"glance-b5802-default-internal-api-1\" (UID: \"2cc63350-c31a-459f-b45b-73f465e53bc5\") " pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:05.242331 master-1 kubenswrapper[4771]: I1011 10:56:05.242181 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-1"
Oct 11 10:56:05.321383 master-1 kubenswrapper[4771]: I1011 10:56:05.318912 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-656ddc8b67-kfkzr"
Oct 11 10:56:05.325380 master-1 kubenswrapper[4771]: I1011 10:56:05.322761 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4bbqs" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="registry-server" probeResult="failure" output=<
Oct 11 10:56:05.325380 master-1 kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Oct 11 10:56:05.325380 master-1 kubenswrapper[4771]: >
Oct 11 10:56:05.658322 master-0 kubenswrapper[4790]: I1011 10:56:05.658271 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-1"]
Oct 11 10:56:05.854822 master-1 kubenswrapper[4771]: W1011 10:56:05.851129 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc63350_c31a_459f_b45b_73f465e53bc5.slice/crio-ea4cb66a3c0705f4d1840f224e02cd6bc0111d2c882543d0a0d1e1ac22aefe85 WatchSource:0}: Error finding container ea4cb66a3c0705f4d1840f224e02cd6bc0111d2c882543d0a0d1e1ac22aefe85: Status 404 returned error
can't find the container with id ea4cb66a3c0705f4d1840f224e02cd6bc0111d2c882543d0a0d1e1ac22aefe85 Oct 11 10:56:05.862626 master-1 kubenswrapper[4771]: I1011 10:56:05.862411 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-1"] Oct 11 10:56:06.129598 master-0 kubenswrapper[4790]: I1011 10:56:06.129543 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"76b7c4b6-c727-4201-9627-23a06e9ae7ea","Type":"ContainerStarted","Data":"c85f73b43bcb60791e1609387e6f01887cda94a93c1b9c636cf715e1a1e6d520"} Oct 11 10:56:06.665862 master-1 kubenswrapper[4771]: I1011 10:56:06.665649 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2cc63350-c31a-459f-b45b-73f465e53bc5","Type":"ContainerStarted","Data":"f389296367a4d076b95e26ae48f492e36a8ecbfcb2d24a4e1b0c219f613e5043"} Oct 11 10:56:06.665862 master-1 kubenswrapper[4771]: I1011 10:56:06.665747 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2cc63350-c31a-459f-b45b-73f465e53bc5","Type":"ContainerStarted","Data":"ea4cb66a3c0705f4d1840f224e02cd6bc0111d2c882543d0a0d1e1ac22aefe85"} Oct 11 10:56:07.145348 master-0 kubenswrapper[4790]: I1011 10:56:07.145269 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"76b7c4b6-c727-4201-9627-23a06e9ae7ea","Type":"ContainerStarted","Data":"7cfad841733d6d01c552a9fe1d6ac9225e5aea7beee8d5703b4c33e0f1b8d4f1"} Oct 11 10:56:07.145348 master-0 kubenswrapper[4790]: I1011 10:56:07.145332 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-1" event={"ID":"76b7c4b6-c727-4201-9627-23a06e9ae7ea","Type":"ContainerStarted","Data":"1f3509d59c7f9f63906f892a9d00e75ff0e459cdb1a0f40da469a10a53a0d0c7"} Oct 11 10:56:07.172386 master-2 
kubenswrapper[4776]: I1011 10:56:07.172272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f"} Oct 11 10:56:07.192835 master-0 kubenswrapper[4790]: I1011 10:56:07.192652 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-1" podStartSLOduration=4.192624517 podStartE2EDuration="4.192624517s" podCreationTimestamp="2025-10-11 10:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:07.184099743 +0000 UTC m=+1043.738560075" watchObservedRunningTime="2025-10-11 10:56:07.192624517 +0000 UTC m=+1043.747084819" Oct 11 10:56:07.651885 master-1 kubenswrapper[4771]: I1011 10:56:07.651734 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-sz8dm"] Oct 11 10:56:07.654443 master-1 kubenswrapper[4771]: I1011 10:56:07.654390 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:07.682842 master-1 kubenswrapper[4771]: I1011 10:56:07.682762 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz8dm"] Oct 11 10:56:07.700317 master-1 kubenswrapper[4771]: I1011 10:56:07.700206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vch5\" (UniqueName: \"kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5\") pod \"aodh-db-create-sz8dm\" (UID: \"fa024267-404c-497a-a798-3a371608b678\") " pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:07.727110 master-1 kubenswrapper[4771]: I1011 10:56:07.727020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-1" event={"ID":"2cc63350-c31a-459f-b45b-73f465e53bc5","Type":"ContainerStarted","Data":"a0dc956d5f2ab55e2217e32912c7cbe0043299962352ddb238b696672680ac68"} Oct 11 10:56:07.802525 master-1 kubenswrapper[4771]: I1011 10:56:07.801964 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vch5\" (UniqueName: \"kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5\") pod \"aodh-db-create-sz8dm\" (UID: \"fa024267-404c-497a-a798-3a371608b678\") " pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:07.968927 master-1 kubenswrapper[4771]: I1011 10:56:07.968752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vch5\" (UniqueName: \"kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5\") pod \"aodh-db-create-sz8dm\" (UID: \"fa024267-404c-497a-a798-3a371608b678\") " pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:07.975682 master-1 kubenswrapper[4771]: I1011 10:56:07.975630 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:08.000216 master-1 kubenswrapper[4771]: I1011 10:56:07.999255 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-1" podStartSLOduration=4.999216864 podStartE2EDuration="4.999216864s" podCreationTimestamp="2025-10-11 10:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:07.997513464 +0000 UTC m=+1799.971739935" watchObservedRunningTime="2025-10-11 10:56:07.999216864 +0000 UTC m=+1799.973443345" Oct 11 10:56:08.464645 master-1 kubenswrapper[4771]: I1011 10:56:08.464567 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz8dm"] Oct 11 10:56:08.471820 master-1 kubenswrapper[4771]: W1011 10:56:08.471742 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa024267_404c_497a_a798_3a371608b678.slice/crio-abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950 WatchSource:0}: Error finding container abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950: Status 404 returned error can't find the container with id abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950 Oct 11 10:56:08.743109 master-1 kubenswrapper[4771]: I1011 10:56:08.740517 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz8dm" event={"ID":"fa024267-404c-497a-a798-3a371608b678","Type":"ContainerStarted","Data":"558ef049f7336b936f032c4d3e3115131e36703eb572e93323b57a5fd484ff9e"} Oct 11 10:56:08.743109 master-1 kubenswrapper[4771]: I1011 10:56:08.740611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz8dm" 
event={"ID":"fa024267-404c-497a-a798-3a371608b678","Type":"ContainerStarted","Data":"abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950"} Oct 11 10:56:08.781008 master-1 kubenswrapper[4771]: I1011 10:56:08.780767 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-sz8dm" podStartSLOduration=1.780739235 podStartE2EDuration="1.780739235s" podCreationTimestamp="2025-10-11 10:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:08.767294843 +0000 UTC m=+1800.741521294" watchObservedRunningTime="2025-10-11 10:56:08.780739235 +0000 UTC m=+1800.754965686" Oct 11 10:56:09.382805 master-0 kubenswrapper[4790]: I1011 10:56:09.382733 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-1" Oct 11 10:56:09.443106 master-1 kubenswrapper[4771]: I1011 10:56:09.441347 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:56:09.443106 master-1 kubenswrapper[4771]: I1011 10:56:09.441887 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-0" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-b5802-api-log" containerID="cri-o://fff01313b302342cb30f2201b57bb76a5615b3d6076b484b8fb9b7d061e529af" gracePeriod=30 Oct 11 10:56:09.443106 master-1 kubenswrapper[4771]: I1011 10:56:09.442163 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-0" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-api" containerID="cri-o://cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80" gracePeriod=30 Oct 11 10:56:09.749538 master-1 kubenswrapper[4771]: I1011 10:56:09.749350 4771 generic.go:334] "Generic (PLEG): container finished" podID="478147ef-a0d7-4c37-952c-3fc3a23775db" 
containerID="fff01313b302342cb30f2201b57bb76a5615b3d6076b484b8fb9b7d061e529af" exitCode=143 Oct 11 10:56:09.749538 master-1 kubenswrapper[4771]: I1011 10:56:09.749420 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerDied","Data":"fff01313b302342cb30f2201b57bb76a5615b3d6076b484b8fb9b7d061e529af"} Oct 11 10:56:09.751517 master-1 kubenswrapper[4771]: I1011 10:56:09.751449 4771 generic.go:334] "Generic (PLEG): container finished" podID="fa024267-404c-497a-a798-3a371608b678" containerID="558ef049f7336b936f032c4d3e3115131e36703eb572e93323b57a5fd484ff9e" exitCode=0 Oct 11 10:56:09.751618 master-1 kubenswrapper[4771]: I1011 10:56:09.751516 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz8dm" event={"ID":"fa024267-404c-497a-a798-3a371608b678","Type":"ContainerDied","Data":"558ef049f7336b936f032c4d3e3115131e36703eb572e93323b57a5fd484ff9e"} Oct 11 10:56:11.320495 master-1 kubenswrapper[4771]: I1011 10:56:11.320427 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:11.396073 master-1 kubenswrapper[4771]: I1011 10:56:11.395968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vch5\" (UniqueName: \"kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5\") pod \"fa024267-404c-497a-a798-3a371608b678\" (UID: \"fa024267-404c-497a-a798-3a371608b678\") " Oct 11 10:56:11.399723 master-1 kubenswrapper[4771]: I1011 10:56:11.399646 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5" (OuterVolumeSpecName: "kube-api-access-5vch5") pod "fa024267-404c-497a-a798-3a371608b678" (UID: "fa024267-404c-497a-a798-3a371608b678"). InnerVolumeSpecName "kube-api-access-5vch5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:11.498826 master-1 kubenswrapper[4771]: I1011 10:56:11.498770 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vch5\" (UniqueName: \"kubernetes.io/projected/fa024267-404c-497a-a798-3a371608b678-kube-api-access-5vch5\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:11.776588 master-2 kubenswrapper[4776]: I1011 10:56:11.776533 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:56:11.782980 master-1 kubenswrapper[4771]: I1011 10:56:11.782912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz8dm" event={"ID":"fa024267-404c-497a-a798-3a371608b678","Type":"ContainerDied","Data":"abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950"} Oct 11 10:56:11.783346 master-1 kubenswrapper[4771]: I1011 10:56:11.783326 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abbe609448c0eeb6fc32b8a0fd7ff7adaccda13bac04e7607b9d744d2c05e950" Oct 11 10:56:11.783480 master-1 kubenswrapper[4771]: I1011 10:56:11.783006 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz8dm" Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.862058 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.862642 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-4lcts" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" containerID="cri-o://a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" gracePeriod=30 Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.863132 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-4lcts" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" containerID="cri-o://2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" gracePeriod=30 Oct 11 10:56:11.910035 master-0 kubenswrapper[4790]: I1011 10:56:11.909966 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-748bbfcf89-9smpw"] Oct 11 10:56:11.912832 master-0 kubenswrapper[4790]: I1011 10:56:11.912752 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.916138 master-0 kubenswrapper[4790]: I1011 10:56:11.916111 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 11 10:56:11.916342 master-0 kubenswrapper[4790]: I1011 10:56:11.916165 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 11 10:56:11.932587 master-0 kubenswrapper[4790]: I1011 10:56:11.932531 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-9smpw"] Oct 11 10:56:11.946232 master-0 kubenswrapper[4790]: I1011 10:56:11.946159 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-internal-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946232 master-0 kubenswrapper[4790]: I1011 10:56:11.946218 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-combined-ca-bundle\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946620 master-0 kubenswrapper[4790]: I1011 10:56:11.946290 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-public-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946620 master-0 kubenswrapper[4790]: I1011 10:56:11.946517 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-ovndb-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946897 master-0 kubenswrapper[4790]: I1011 10:56:11.946842 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdpj\" (UniqueName: \"kubernetes.io/projected/bd214893-adb3-4a7f-b947-814410ab6375-kube-api-access-qjdpj\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946979 master-0 kubenswrapper[4790]: I1011 10:56:11.946907 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:11.946979 master-0 kubenswrapper[4790]: I1011 10:56:11.946957 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-httpd-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.048814 master-0 kubenswrapper[4790]: I1011 10:56:12.048740 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-internal-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.048814 master-0 kubenswrapper[4790]: I1011 
10:56:12.048803 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-combined-ca-bundle\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.049205 master-0 kubenswrapper[4790]: I1011 10:56:12.048857 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-public-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.049205 master-0 kubenswrapper[4790]: I1011 10:56:12.048889 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-ovndb-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.049205 master-0 kubenswrapper[4790]: I1011 10:56:12.048923 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdpj\" (UniqueName: \"kubernetes.io/projected/bd214893-adb3-4a7f-b947-814410ab6375-kube-api-access-qjdpj\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.049205 master-0 kubenswrapper[4790]: I1011 10:56:12.048948 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.049205 master-0 kubenswrapper[4790]: I1011 10:56:12.048976 4790 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-httpd-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.053912 master-0 kubenswrapper[4790]: I1011 10:56:12.053853 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.054467 master-0 kubenswrapper[4790]: I1011 10:56:12.054433 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-ovndb-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.055149 master-0 kubenswrapper[4790]: I1011 10:56:12.055086 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-httpd-config\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.055209 master-0 kubenswrapper[4790]: I1011 10:56:12.055151 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-public-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.056383 master-0 kubenswrapper[4790]: I1011 10:56:12.056347 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-combined-ca-bundle\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.056729 master-0 kubenswrapper[4790]: I1011 10:56:12.056660 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd214893-adb3-4a7f-b947-814410ab6375-internal-tls-certs\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.094598 master-0 kubenswrapper[4790]: I1011 10:56:12.094524 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjdpj\" (UniqueName: \"kubernetes.io/projected/bd214893-adb3-4a7f-b947-814410ab6375-kube-api-access-qjdpj\") pod \"neutron-748bbfcf89-9smpw\" (UID: \"bd214893-adb3-4a7f-b947-814410ab6375\") " pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.215926 master-2 kubenswrapper[4776]: I1011 10:56:12.215851 4776 generic.go:334] "Generic (PLEG): container finished" podID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerID="2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" exitCode=0 Oct 11 10:56:12.215926 master-2 kubenswrapper[4776]: I1011 10:56:12.215910 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea"} Oct 11 10:56:12.255232 master-0 kubenswrapper[4790]: I1011 10:56:12.255102 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:12.588995 master-1 kubenswrapper[4771]: I1011 10:56:12.588897 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-b5802-api-0" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-api" probeResult="failure" output="Get \"http://10.129.0.132:8776/healthcheck\": read tcp 10.129.0.2:35862->10.129.0.132:8776: read: connection reset by peer" Oct 11 10:56:12.798206 master-1 kubenswrapper[4771]: I1011 10:56:12.798124 4771 generic.go:334] "Generic (PLEG): container finished" podID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerID="cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80" exitCode=0 Oct 11 10:56:12.799675 master-1 kubenswrapper[4771]: I1011 10:56:12.798228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerDied","Data":"cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80"} Oct 11 10:56:12.860850 master-0 kubenswrapper[4790]: I1011 10:56:12.860761 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-9smpw"] Oct 11 10:56:12.870075 master-0 kubenswrapper[4790]: W1011 10:56:12.870013 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd214893_adb3_4a7f_b947_814410ab6375.slice/crio-1d40fed2c34b787c65133a4e33b9d39ed4fa1f38e899e85f25a46d38e7e9ca57 WatchSource:0}: Error finding container 1d40fed2c34b787c65133a4e33b9d39ed4fa1f38e899e85f25a46d38e7e9ca57: Status 404 returned error can't find the container with id 1d40fed2c34b787c65133a4e33b9d39ed4fa1f38e899e85f25a46d38e7e9ca57 Oct 11 10:56:13.123535 master-1 kubenswrapper[4771]: I1011 10:56:13.123242 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.219290 master-0 kubenswrapper[4790]: I1011 10:56:13.219211 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-9smpw" event={"ID":"bd214893-adb3-4a7f-b947-814410ab6375","Type":"ContainerStarted","Data":"a84e4701f8238a0a3ba3476b64e6b406711d9beb43f39f2ab6887bac602983d2"} Oct 11 10:56:13.219290 master-0 kubenswrapper[4790]: I1011 10:56:13.219279 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-9smpw" event={"ID":"bd214893-adb3-4a7f-b947-814410ab6375","Type":"ContainerStarted","Data":"1d40fed2c34b787c65133a4e33b9d39ed4fa1f38e899e85f25a46d38e7e9ca57"} Oct 11 10:56:13.232565 master-1 kubenswrapper[4771]: I1011 10:56:13.232510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.232565 master-1 kubenswrapper[4771]: I1011 10:56:13.232574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233019 master-1 kubenswrapper[4771]: I1011 10:56:13.232753 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvlr\" (UniqueName: \"kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233019 master-1 kubenswrapper[4771]: I1011 10:56:13.232793 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233019 master-1 kubenswrapper[4771]: I1011 10:56:13.232860 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233019 master-1 kubenswrapper[4771]: I1011 10:56:13.232895 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233019 master-1 kubenswrapper[4771]: I1011 10:56:13.232972 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle\") pod \"478147ef-a0d7-4c37-952c-3fc3a23775db\" (UID: \"478147ef-a0d7-4c37-952c-3fc3a23775db\") " Oct 11 10:56:13.233245 master-1 kubenswrapper[4771]: I1011 10:56:13.233148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:56:13.233245 master-1 kubenswrapper[4771]: I1011 10:56:13.233198 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs" (OuterVolumeSpecName: "logs") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:13.233666 master-1 kubenswrapper[4771]: I1011 10:56:13.233630 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478147ef-a0d7-4c37-952c-3fc3a23775db-logs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.233782 master-1 kubenswrapper[4771]: I1011 10:56:13.233675 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/478147ef-a0d7-4c37-952c-3fc3a23775db-etc-machine-id\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.236868 master-1 kubenswrapper[4771]: I1011 10:56:13.236819 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:13.237657 master-1 kubenswrapper[4771]: I1011 10:56:13.237614 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr" (OuterVolumeSpecName: "kube-api-access-svvlr") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "kube-api-access-svvlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:13.248786 master-1 kubenswrapper[4771]: I1011 10:56:13.248690 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts" (OuterVolumeSpecName: "scripts") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:13.257346 master-1 kubenswrapper[4771]: I1011 10:56:13.257251 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:13.277430 master-1 kubenswrapper[4771]: I1011 10:56:13.277331 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data" (OuterVolumeSpecName: "config-data") pod "478147ef-a0d7-4c37-952c-3fc3a23775db" (UID: "478147ef-a0d7-4c37-952c-3fc3a23775db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:13.335883 master-1 kubenswrapper[4771]: I1011 10:56:13.335783 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvlr\" (UniqueName: \"kubernetes.io/projected/478147ef-a0d7-4c37-952c-3fc3a23775db-kube-api-access-svvlr\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.335883 master-1 kubenswrapper[4771]: I1011 10:56:13.335848 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.335883 master-1 kubenswrapper[4771]: I1011 10:56:13.335859 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.335883 master-1 kubenswrapper[4771]: I1011 10:56:13.335872 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.335883 master-1 kubenswrapper[4771]: I1011 10:56:13.335882 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478147ef-a0d7-4c37-952c-3fc3a23775db-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:13.809497 master-1 kubenswrapper[4771]: I1011 10:56:13.809382 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"478147ef-a0d7-4c37-952c-3fc3a23775db","Type":"ContainerDied","Data":"48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276"} Oct 11 10:56:13.809497 master-1 kubenswrapper[4771]: I1011 10:56:13.809493 4771 scope.go:117] "RemoveContainer" containerID="cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80" Oct 11 
10:56:13.810701 master-1 kubenswrapper[4771]: I1011 10:56:13.810642 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.838945 master-1 kubenswrapper[4771]: I1011 10:56:13.838815 4771 scope.go:117] "RemoveContainer" containerID="fff01313b302342cb30f2201b57bb76a5615b3d6076b484b8fb9b7d061e529af" Oct 11 10:56:13.860120 master-1 kubenswrapper[4771]: I1011 10:56:13.860062 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:56:13.866233 master-1 kubenswrapper[4771]: I1011 10:56:13.866171 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:56:13.898958 master-1 kubenswrapper[4771]: I1011 10:56:13.898887 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:56:13.899303 master-1 kubenswrapper[4771]: E1011 10:56:13.899274 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa024267-404c-497a-a798-3a371608b678" containerName="mariadb-database-create" Oct 11 10:56:13.899303 master-1 kubenswrapper[4771]: I1011 10:56:13.899294 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa024267-404c-497a-a798-3a371608b678" containerName="mariadb-database-create" Oct 11 10:56:13.899398 master-1 kubenswrapper[4771]: E1011 10:56:13.899317 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-api" Oct 11 10:56:13.899398 master-1 kubenswrapper[4771]: I1011 10:56:13.899325 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-api" Oct 11 10:56:13.899398 master-1 kubenswrapper[4771]: E1011 10:56:13.899342 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-b5802-api-log" Oct 11 10:56:13.899398 master-1 kubenswrapper[4771]: I1011 
10:56:13.899365 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-b5802-api-log" Oct 11 10:56:13.899526 master-1 kubenswrapper[4771]: I1011 10:56:13.899491 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-api" Oct 11 10:56:13.899526 master-1 kubenswrapper[4771]: I1011 10:56:13.899512 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" containerName="cinder-b5802-api-log" Oct 11 10:56:13.899526 master-1 kubenswrapper[4771]: I1011 10:56:13.899524 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa024267-404c-497a-a798-3a371608b678" containerName="mariadb-database-create" Oct 11 10:56:13.900770 master-1 kubenswrapper[4771]: I1011 10:56:13.900739 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.904691 master-1 kubenswrapper[4771]: I1011 10:56:13.904659 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data" Oct 11 10:56:13.904774 master-1 kubenswrapper[4771]: I1011 10:56:13.904691 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts" Oct 11 10:56:13.904774 master-1 kubenswrapper[4771]: I1011 10:56:13.904696 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 11 10:56:13.904924 master-1 kubenswrapper[4771]: I1011 10:56:13.904895 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 11 10:56:13.905026 master-1 kubenswrapper[4771]: I1011 10:56:13.905003 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data" Oct 11 10:56:13.925589 master-1 kubenswrapper[4771]: I1011 10:56:13.925534 4771 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 11 10:56:13.950911 master-1 kubenswrapper[4771]: I1011 10:56:13.950824 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951110 master-1 kubenswrapper[4771]: I1011 10:56:13.950948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951110 master-1 kubenswrapper[4771]: I1011 10:56:13.951005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-internal-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951110 master-1 kubenswrapper[4771]: I1011 10:56:13.951049 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-public-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951211 master-1 kubenswrapper[4771]: I1011 10:56:13.951119 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sc9t\" (UniqueName: \"kubernetes.io/projected/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-kube-api-access-4sc9t\") pod 
\"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951249 master-1 kubenswrapper[4771]: I1011 10:56:13.951214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951289 master-1 kubenswrapper[4771]: I1011 10:56:13.951268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-logs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951333 master-1 kubenswrapper[4771]: I1011 10:56:13.951308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:13.951385 master-1 kubenswrapper[4771]: I1011 10:56:13.951347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-scripts\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053221 master-1 kubenswrapper[4771]: I1011 10:56:14.053147 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-scripts\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " 
pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053534 master-1 kubenswrapper[4771]: I1011 10:56:14.053239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053534 master-1 kubenswrapper[4771]: I1011 10:56:14.053307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053534 master-1 kubenswrapper[4771]: I1011 10:56:14.053397 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-internal-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053534 master-1 kubenswrapper[4771]: I1011 10:56:14.053457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-public-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053534 master-1 kubenswrapper[4771]: I1011 10:56:14.053495 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sc9t\" (UniqueName: \"kubernetes.io/projected/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-kube-api-access-4sc9t\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053739 
master-1 kubenswrapper[4771]: I1011 10:56:14.053592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053739 master-1 kubenswrapper[4771]: I1011 10:56:14.053654 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-logs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.053739 master-1 kubenswrapper[4771]: I1011 10:56:14.053698 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.054119 master-1 kubenswrapper[4771]: I1011 10:56:14.053845 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-etc-machine-id\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.054728 master-1 kubenswrapper[4771]: I1011 10:56:14.054492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-logs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.058156 master-1 kubenswrapper[4771]: I1011 10:56:14.057627 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.058156 master-1 kubenswrapper[4771]: I1011 10:56:14.058100 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-public-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.058531 master-1 kubenswrapper[4771]: I1011 10:56:14.058482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-internal-tls-certs\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.058604 master-1 kubenswrapper[4771]: I1011 10:56:14.058487 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-scripts\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.058894 master-1 kubenswrapper[4771]: I1011 10:56:14.058839 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-config-data-custom\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.060922 master-1 kubenswrapper[4771]: I1011 10:56:14.060852 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-combined-ca-bundle\") pod \"cinder-b5802-api-0\" (UID: 
\"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.094955 master-1 kubenswrapper[4771]: I1011 10:56:14.094868 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sc9t\" (UniqueName: \"kubernetes.io/projected/4a6945b7-6d08-4a4a-9627-5993eb5e0a7f-kube-api-access-4sc9t\") pod \"cinder-b5802-api-0\" (UID: \"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f\") " pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.225686 master-1 kubenswrapper[4771]: I1011 10:56:14.225617 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:14.234763 master-0 kubenswrapper[4790]: I1011 10:56:14.234577 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-9smpw" event={"ID":"bd214893-adb3-4a7f-b947-814410ab6375","Type":"ContainerStarted","Data":"d6c7ac18f7845ac5f376c39bebecfc59193f64791fdb3a9cdf0592dde370a55c"} Oct 11 10:56:14.235956 master-0 kubenswrapper[4790]: I1011 10:56:14.234962 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:14.327231 master-1 kubenswrapper[4771]: I1011 10:56:14.327157 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:56:14.414931 master-1 kubenswrapper[4771]: I1011 10:56:14.414499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:56:14.459718 master-1 kubenswrapper[4771]: I1011 10:56:14.459632 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478147ef-a0d7-4c37-952c-3fc3a23775db" path="/var/lib/kubelet/pods/478147ef-a0d7-4c37-952c-3fc3a23775db/volumes" Oct 11 10:56:14.953456 master-1 kubenswrapper[4771]: I1011 10:56:14.953299 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-0"] Oct 
11 10:56:14.990681 master-1 kubenswrapper[4771]: I1011 10:56:14.990612 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:56:15.040927 master-0 kubenswrapper[4790]: I1011 10:56:15.040841 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.040927 master-0 kubenswrapper[4790]: I1011 10:56:15.040925 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.081325 master-0 kubenswrapper[4790]: I1011 10:56:15.081033 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.096640 master-0 kubenswrapper[4790]: I1011 10:56:15.096578 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.116009 master-0 kubenswrapper[4790]: I1011 10:56:15.115916 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-748bbfcf89-9smpw" podStartSLOduration=4.115894484 podStartE2EDuration="4.115894484s" podCreationTimestamp="2025-10-11 10:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:14.498664494 +0000 UTC m=+1051.053124786" watchObservedRunningTime="2025-10-11 10:56:15.115894484 +0000 UTC m=+1051.670354766" Oct 11 10:56:15.243140 master-1 kubenswrapper[4771]: I1011 10:56:15.242973 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.243140 master-1 kubenswrapper[4771]: I1011 10:56:15.243073 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.252700 
master-0 kubenswrapper[4790]: I1011 10:56:15.252616 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.252700 master-0 kubenswrapper[4790]: I1011 10:56:15.252665 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:15.287801 master-1 kubenswrapper[4771]: I1011 10:56:15.287524 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.302469 master-1 kubenswrapper[4771]: I1011 10:56:15.301900 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.829464 master-1 kubenswrapper[4771]: I1011 10:56:15.829398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f","Type":"ContainerStarted","Data":"f42ad3585d89b67cdab529dc67ac31a8c2d471a973b6fcbb6740c8dd37b76a96"} Oct 11 10:56:15.829464 master-1 kubenswrapper[4771]: I1011 10:56:15.829459 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-0" event={"ID":"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f","Type":"ContainerStarted","Data":"c69d5d53470534d9c412bdab403cbe0881354ba1e16ffac19d620e059bd28afb"} Oct 11 10:56:15.829949 master-1 kubenswrapper[4771]: I1011 10:56:15.829837 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.829949 master-1 kubenswrapper[4771]: I1011 10:56:15.829877 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:15.829949 master-1 kubenswrapper[4771]: I1011 10:56:15.829834 4771 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-4bbqs" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="registry-server" containerID="cri-o://fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53" gracePeriod=2 Oct 11 10:56:16.418322 master-1 kubenswrapper[4771]: I1011 10:56:16.418251 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:56:16.516654 master-1 kubenswrapper[4771]: I1011 10:56:16.515374 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities\") pod \"5921e565-c581-42f4-8da8-df72fae9a3c0\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " Oct 11 10:56:16.516654 master-1 kubenswrapper[4771]: I1011 10:56:16.515702 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content\") pod \"5921e565-c581-42f4-8da8-df72fae9a3c0\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " Oct 11 10:56:16.516654 master-1 kubenswrapper[4771]: I1011 10:56:16.515808 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzpqz\" (UniqueName: \"kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz\") pod \"5921e565-c581-42f4-8da8-df72fae9a3c0\" (UID: \"5921e565-c581-42f4-8da8-df72fae9a3c0\") " Oct 11 10:56:16.518185 master-1 kubenswrapper[4771]: I1011 10:56:16.517204 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities" (OuterVolumeSpecName: "utilities") pod "5921e565-c581-42f4-8da8-df72fae9a3c0" (UID: "5921e565-c581-42f4-8da8-df72fae9a3c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:16.526466 master-1 kubenswrapper[4771]: I1011 10:56:16.525751 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz" (OuterVolumeSpecName: "kube-api-access-jzpqz") pod "5921e565-c581-42f4-8da8-df72fae9a3c0" (UID: "5921e565-c581-42f4-8da8-df72fae9a3c0"). InnerVolumeSpecName "kube-api-access-jzpqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:16.576231 master-1 kubenswrapper[4771]: I1011 10:56:16.576163 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5921e565-c581-42f4-8da8-df72fae9a3c0" (UID: "5921e565-c581-42f4-8da8-df72fae9a3c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:16.617766 master-1 kubenswrapper[4771]: I1011 10:56:16.617713 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzpqz\" (UniqueName: \"kubernetes.io/projected/5921e565-c581-42f4-8da8-df72fae9a3c0-kube-api-access-jzpqz\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:16.618019 master-1 kubenswrapper[4771]: I1011 10:56:16.618007 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:16.618099 master-1 kubenswrapper[4771]: I1011 10:56:16.618089 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5921e565-c581-42f4-8da8-df72fae9a3c0-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:16.841136 master-1 kubenswrapper[4771]: I1011 10:56:16.841067 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-b5802-api-0" event={"ID":"4a6945b7-6d08-4a4a-9627-5993eb5e0a7f","Type":"ContainerStarted","Data":"af433967ea39199deedf56b2f1f2283de9b047e4635f3608fd9fbf9f788ee162"} Oct 11 10:56:16.841506 master-1 kubenswrapper[4771]: I1011 10:56:16.841220 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:16.844979 master-1 kubenswrapper[4771]: I1011 10:56:16.844928 4771 generic.go:334] "Generic (PLEG): container finished" podID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerID="fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53" exitCode=0 Oct 11 10:56:16.845160 master-1 kubenswrapper[4771]: I1011 10:56:16.845008 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerDied","Data":"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53"} Oct 11 10:56:16.845160 master-1 kubenswrapper[4771]: I1011 10:56:16.845083 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bbqs" event={"ID":"5921e565-c581-42f4-8da8-df72fae9a3c0","Type":"ContainerDied","Data":"2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe"} Oct 11 10:56:16.845292 master-1 kubenswrapper[4771]: I1011 10:56:16.845163 4771 scope.go:117] "RemoveContainer" containerID="fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53" Oct 11 10:56:16.845571 master-1 kubenswrapper[4771]: I1011 10:56:16.845540 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4bbqs" Oct 11 10:56:16.868252 master-1 kubenswrapper[4771]: I1011 10:56:16.868209 4771 scope.go:117] "RemoveContainer" containerID="f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded" Oct 11 10:56:16.906471 master-1 kubenswrapper[4771]: I1011 10:56:16.906172 4771 scope.go:117] "RemoveContainer" containerID="57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346" Oct 11 10:56:16.934994 master-1 kubenswrapper[4771]: I1011 10:56:16.934800 4771 scope.go:117] "RemoveContainer" containerID="fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53" Oct 11 10:56:16.935644 master-1 kubenswrapper[4771]: E1011 10:56:16.935407 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53\": container with ID starting with fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53 not found: ID does not exist" containerID="fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53" Oct 11 10:56:16.935808 master-1 kubenswrapper[4771]: I1011 10:56:16.935776 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53"} err="failed to get container status \"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53\": rpc error: code = NotFound desc = could not find container \"fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53\": container with ID starting with fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53 not found: ID does not exist" Oct 11 10:56:16.935966 master-1 kubenswrapper[4771]: I1011 10:56:16.935946 4771 scope.go:117] "RemoveContainer" containerID="f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded" Oct 11 10:56:16.936988 master-1 kubenswrapper[4771]: E1011 10:56:16.936946 
4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded\": container with ID starting with f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded not found: ID does not exist" containerID="f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded" Oct 11 10:56:16.937090 master-1 kubenswrapper[4771]: I1011 10:56:16.936994 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded"} err="failed to get container status \"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded\": rpc error: code = NotFound desc = could not find container \"f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded\": container with ID starting with f0b97708a382409609fd043fe1fddb88387503e83ce47d62daed66c317f5fded not found: ID does not exist" Oct 11 10:56:16.937090 master-1 kubenswrapper[4771]: I1011 10:56:16.937045 4771 scope.go:117] "RemoveContainer" containerID="57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346" Oct 11 10:56:16.937635 master-1 kubenswrapper[4771]: E1011 10:56:16.937608 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346\": container with ID starting with 57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346 not found: ID does not exist" containerID="57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346" Oct 11 10:56:16.937773 master-1 kubenswrapper[4771]: I1011 10:56:16.937744 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346"} err="failed to get container status 
\"57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346\": rpc error: code = NotFound desc = could not find container \"57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346\": container with ID starting with 57cea5f10f5cb919bbd746a5cc770cdb2e4c8666c371e8c10ebe7f9b79fd5346 not found: ID does not exist" Oct 11 10:56:17.277949 master-0 kubenswrapper[4790]: I1011 10:56:17.274679 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:17.277949 master-0 kubenswrapper[4790]: I1011 10:56:17.275085 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-1" Oct 11 10:56:17.788423 master-1 kubenswrapper[4771]: I1011 10:56:17.788274 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-0" podStartSLOduration=4.7882227969999995 podStartE2EDuration="4.788222797s" podCreationTimestamp="2025-10-11 10:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:17.62064272 +0000 UTC m=+1809.594869201" watchObservedRunningTime="2025-10-11 10:56:17.788222797 +0000 UTC m=+1809.762449278" Oct 11 10:56:17.798418 master-1 kubenswrapper[4771]: I1011 10:56:17.798318 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:56:17.859642 master-1 kubenswrapper[4771]: I1011 10:56:17.859591 4771 generic.go:334] "Generic (PLEG): container finished" podID="38267a66-0ebd-44ab-bc7f-cd5703503b74" containerID="5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed" exitCode=0 Oct 11 10:56:17.859934 master-1 kubenswrapper[4771]: I1011 10:56:17.859665 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chcrd" 
event={"ID":"38267a66-0ebd-44ab-bc7f-cd5703503b74","Type":"ContainerDied","Data":"5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed"} Oct 11 10:56:17.957638 master-1 kubenswrapper[4771]: I1011 10:56:17.957583 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:17.958560 master-1 kubenswrapper[4771]: I1011 10:56:17.957737 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:56:17.970241 master-1 kubenswrapper[4771]: I1011 10:56:17.970153 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:56:17.971057 master-1 kubenswrapper[4771]: I1011 10:56:17.970920 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-0" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-log" containerID="cri-o://7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474" gracePeriod=30 Oct 11 10:56:17.971795 master-1 kubenswrapper[4771]: I1011 10:56:17.971531 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-0" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-httpd" containerID="cri-o://9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c" gracePeriod=30 Oct 11 10:56:18.045089 master-1 kubenswrapper[4771]: I1011 10:56:18.044930 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-1" Oct 11 10:56:18.502922 master-1 kubenswrapper[4771]: I1011 10:56:18.502827 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4bbqs"] Oct 11 10:56:18.875612 master-1 kubenswrapper[4771]: I1011 10:56:18.875493 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerID="7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474" exitCode=143 Oct 11 10:56:18.876647 master-1 kubenswrapper[4771]: I1011 10:56:18.876573 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerDied","Data":"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474"} Oct 11 10:56:19.414986 master-1 kubenswrapper[4771]: I1011 10:56:19.414916 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chcrd" Oct 11 10:56:19.494841 master-1 kubenswrapper[4771]: I1011 10:56:19.494753 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts\") pod \"38267a66-0ebd-44ab-bc7f-cd5703503b74\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " Oct 11 10:56:19.495161 master-1 kubenswrapper[4771]: I1011 10:56:19.494961 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle\") pod \"38267a66-0ebd-44ab-bc7f-cd5703503b74\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " Oct 11 10:56:19.495161 master-1 kubenswrapper[4771]: I1011 10:56:19.495096 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data\") pod \"38267a66-0ebd-44ab-bc7f-cd5703503b74\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " Oct 11 10:56:19.495317 master-1 kubenswrapper[4771]: I1011 10:56:19.495170 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v6lx\" (UniqueName: 
\"kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx\") pod \"38267a66-0ebd-44ab-bc7f-cd5703503b74\" (UID: \"38267a66-0ebd-44ab-bc7f-cd5703503b74\") " Oct 11 10:56:19.499251 master-1 kubenswrapper[4771]: I1011 10:56:19.499140 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx" (OuterVolumeSpecName: "kube-api-access-5v6lx") pod "38267a66-0ebd-44ab-bc7f-cd5703503b74" (UID: "38267a66-0ebd-44ab-bc7f-cd5703503b74"). InnerVolumeSpecName "kube-api-access-5v6lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:19.516043 master-1 kubenswrapper[4771]: I1011 10:56:19.515881 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts" (OuterVolumeSpecName: "scripts") pod "38267a66-0ebd-44ab-bc7f-cd5703503b74" (UID: "38267a66-0ebd-44ab-bc7f-cd5703503b74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:19.529228 master-1 kubenswrapper[4771]: I1011 10:56:19.528973 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38267a66-0ebd-44ab-bc7f-cd5703503b74" (UID: "38267a66-0ebd-44ab-bc7f-cd5703503b74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:19.536328 master-1 kubenswrapper[4771]: I1011 10:56:19.536277 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data" (OuterVolumeSpecName: "config-data") pod "38267a66-0ebd-44ab-bc7f-cd5703503b74" (UID: "38267a66-0ebd-44ab-bc7f-cd5703503b74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:19.598966 master-1 kubenswrapper[4771]: I1011 10:56:19.598893 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:19.598966 master-1 kubenswrapper[4771]: I1011 10:56:19.598958 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:19.599107 master-1 kubenswrapper[4771]: I1011 10:56:19.598975 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v6lx\" (UniqueName: \"kubernetes.io/projected/38267a66-0ebd-44ab-bc7f-cd5703503b74-kube-api-access-5v6lx\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:19.599107 master-1 kubenswrapper[4771]: I1011 10:56:19.598991 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38267a66-0ebd-44ab-bc7f-cd5703503b74-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:19.891446 master-1 kubenswrapper[4771]: I1011 10:56:19.891277 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chcrd" event={"ID":"38267a66-0ebd-44ab-bc7f-cd5703503b74","Type":"ContainerDied","Data":"0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5"} Oct 11 10:56:19.891446 master-1 kubenswrapper[4771]: I1011 10:56:19.891385 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5" Oct 11 10:56:19.892040 master-1 kubenswrapper[4771]: I1011 10:56:19.891435 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chcrd" Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.179744 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.180114 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" containerID="cri-o://d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63" gracePeriod=30 Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.180189 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" containerID="cri-o://3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e" gracePeriod=30 Oct 11 10:56:20.314088 master-2 kubenswrapper[4776]: I1011 10:56:20.314031 4776 generic.go:334] "Generic (PLEG): container finished" podID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerID="a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" exitCode=0 Oct 11 10:56:20.314088 master-2 kubenswrapper[4776]: I1011 10:56:20.314093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6"} Oct 11 10:56:20.449947 master-1 kubenswrapper[4771]: I1011 10:56:20.449851 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" path="/var/lib/kubelet/pods/5921e565-c581-42f4-8da8-df72fae9a3c0/volumes" Oct 11 10:56:20.728254 master-1 kubenswrapper[4771]: I1011 10:56:20.728011 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-1a24-account-create-pb6gd"] Oct 11 10:56:20.728820 master-1 kubenswrapper[4771]: E1011 10:56:20.728763 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="registry-server" Oct 11 10:56:20.728820 master-1 kubenswrapper[4771]: I1011 10:56:20.728808 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="registry-server" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: E1011 10:56:20.728845 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="extract-utilities" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: I1011 10:56:20.728862 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="extract-utilities" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: E1011 10:56:20.728892 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="extract-content" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: I1011 10:56:20.728908 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="extract-content" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: E1011 10:56:20.728961 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38267a66-0ebd-44ab-bc7f-cd5703503b74" containerName="nova-cell0-conductor-db-sync" Oct 11 10:56:20.729026 master-1 kubenswrapper[4771]: I1011 10:56:20.728976 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38267a66-0ebd-44ab-bc7f-cd5703503b74" containerName="nova-cell0-conductor-db-sync" Oct 11 10:56:20.729546 master-1 kubenswrapper[4771]: I1011 10:56:20.729239 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5921e565-c581-42f4-8da8-df72fae9a3c0" containerName="registry-server" Oct 11 
10:56:20.729546 master-1 kubenswrapper[4771]: I1011 10:56:20.729263 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="38267a66-0ebd-44ab-bc7f-cd5703503b74" containerName="nova-cell0-conductor-db-sync" Oct 11 10:56:20.730676 master-1 kubenswrapper[4771]: I1011 10:56:20.730624 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1a24-account-create-pb6gd" Oct 11 10:56:20.734656 master-1 kubenswrapper[4771]: I1011 10:56:20.734589 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 11 10:56:20.749441 master-1 kubenswrapper[4771]: I1011 10:56:20.749298 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1a24-account-create-pb6gd"] Oct 11 10:56:20.828701 master-1 kubenswrapper[4771]: I1011 10:56:20.828625 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2t9n\" (UniqueName: \"kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n\") pod \"aodh-1a24-account-create-pb6gd\" (UID: \"b670525b-9ca9-419c-858b-6bb2a2303cf6\") " pod="openstack/aodh-1a24-account-create-pb6gd" Oct 11 10:56:20.915884 master-1 kubenswrapper[4771]: I1011 10:56:20.915799 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 11 10:56:20.922106 master-1 kubenswrapper[4771]: I1011 10:56:20.922019 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:20.927788 master-1 kubenswrapper[4771]: I1011 10:56:20.927258 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 11 10:56:20.951503 master-1 kubenswrapper[4771]: I1011 10:56:20.930574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2t9n\" (UniqueName: \"kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n\") pod \"aodh-1a24-account-create-pb6gd\" (UID: \"b670525b-9ca9-419c-858b-6bb2a2303cf6\") " pod="openstack/aodh-1a24-account-create-pb6gd" Oct 11 10:56:20.951503 master-1 kubenswrapper[4771]: I1011 10:56:20.932696 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 11 10:56:20.960336 master-1 kubenswrapper[4771]: I1011 10:56:20.960268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2t9n\" (UniqueName: \"kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n\") pod \"aodh-1a24-account-create-pb6gd\" (UID: \"b670525b-9ca9-419c-858b-6bb2a2303cf6\") " pod="openstack/aodh-1a24-account-create-pb6gd" Oct 11 10:56:21.032765 master-1 kubenswrapper[4771]: I1011 10:56:21.032592 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.033056 master-1 kubenswrapper[4771]: I1011 10:56:21.032839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzwv\" (UniqueName: \"kubernetes.io/projected/c56a5472-3816-43a8-9a63-373c7893cd5c-kube-api-access-gmzwv\") pod \"nova-cell0-conductor-0\" (UID: 
\"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.033056 master-1 kubenswrapper[4771]: I1011 10:56:21.032909 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.078318 master-1 kubenswrapper[4771]: I1011 10:56:21.078216 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1a24-account-create-pb6gd" Oct 11 10:56:21.135517 master-1 kubenswrapper[4771]: I1011 10:56:21.135408 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmzwv\" (UniqueName: \"kubernetes.io/projected/c56a5472-3816-43a8-9a63-373c7893cd5c-kube-api-access-gmzwv\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.135653 master-1 kubenswrapper[4771]: I1011 10:56:21.135527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.135700 master-1 kubenswrapper[4771]: I1011 10:56:21.135673 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.140734 master-1 kubenswrapper[4771]: I1011 10:56:21.140675 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.141976 master-1 kubenswrapper[4771]: I1011 10:56:21.141923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56a5472-3816-43a8-9a63-373c7893cd5c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.156807 master-1 kubenswrapper[4771]: I1011 10:56:21.156730 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmzwv\" (UniqueName: \"kubernetes.io/projected/c56a5472-3816-43a8-9a63-373c7893cd5c-kube-api-access-gmzwv\") pod \"nova-cell0-conductor-0\" (UID: \"c56a5472-3816-43a8-9a63-373c7893cd5c\") " pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.241860 master-1 kubenswrapper[4771]: I1011 10:56:21.241800 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:21.327436 master-2 kubenswrapper[4776]: I1011 10:56:21.327267 4776 generic.go:334] "Generic (PLEG): container finished" podID="7353cefe-e495-4633-9472-93497ca94612" containerID="d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63" exitCode=143 Oct 11 10:56:21.327436 master-2 kubenswrapper[4776]: I1011 10:56:21.327316 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63"} Oct 11 10:56:21.566608 master-1 kubenswrapper[4771]: I1011 10:56:21.566554 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1a24-account-create-pb6gd"] Oct 11 10:56:21.740974 master-1 kubenswrapper[4771]: I1011 10:56:21.740926 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 11 10:56:21.777651 master-2 kubenswrapper[4776]: I1011 10:56:21.777605 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:56:21.824244 master-2 kubenswrapper[4776]: I1011 10:56:21.824179 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824260 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824327 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824457 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824482 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.827519 master-2 kubenswrapper[4776]: I1011 10:56:21.827464 4776 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck" (OuterVolumeSpecName: "kube-api-access-kfzck") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "kube-api-access-kfzck". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:21.828278 master-2 kubenswrapper[4776]: I1011 10:56:21.828212 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.832369 master-1 kubenswrapper[4771]: I1011 10:56:21.832312 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:21.874784 master-2 kubenswrapper[4776]: I1011 10:56:21.874725 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config" (OuterVolumeSpecName: "config") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.875282 master-2 kubenswrapper[4776]: I1011 10:56:21.875246 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.892229 master-2 kubenswrapper[4776]: I1011 10:56:21.892178 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.926624 master-2 kubenswrapper[4776]: I1011 10:56:21.926572 4776 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926624 master-2 kubenswrapper[4776]: I1011 10:56:21.926618 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926629 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926640 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926651 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.940586 master-1 kubenswrapper[4771]: I1011 10:56:21.940484 
4771 generic.go:334] "Generic (PLEG): container finished" podID="b670525b-9ca9-419c-858b-6bb2a2303cf6" containerID="34c012fefebf03c137c3d264726e9a32c974159496d4bf0d0a4dad6dcdf4c655" exitCode=0 Oct 11 10:56:21.940586 master-1 kubenswrapper[4771]: I1011 10:56:21.940598 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1a24-account-create-pb6gd" event={"ID":"b670525b-9ca9-419c-858b-6bb2a2303cf6","Type":"ContainerDied","Data":"34c012fefebf03c137c3d264726e9a32c974159496d4bf0d0a4dad6dcdf4c655"} Oct 11 10:56:21.941524 master-1 kubenswrapper[4771]: I1011 10:56:21.940640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1a24-account-create-pb6gd" event={"ID":"b670525b-9ca9-419c-858b-6bb2a2303cf6","Type":"ContainerStarted","Data":"3995d8b1cb4f3f3ee48eac28241686624d899abf7b682d7cf2e0300e24841328"} Oct 11 10:56:21.943267 master-1 kubenswrapper[4771]: I1011 10:56:21.943117 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c56a5472-3816-43a8-9a63-373c7893cd5c","Type":"ContainerStarted","Data":"bec81004ceea111cb650b5ad6d16be3710371031464fb1a09f17b94b9ee51ee5"} Oct 11 10:56:21.946522 master-1 kubenswrapper[4771]: I1011 10:56:21.946457 4771 generic.go:334] "Generic (PLEG): container finished" podID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerID="9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c" exitCode=0 Oct 11 10:56:21.946522 master-1 kubenswrapper[4771]: I1011 10:56:21.946504 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerDied","Data":"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c"} Oct 11 10:56:21.946668 master-1 kubenswrapper[4771]: I1011 10:56:21.946521 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:21.946668 master-1 kubenswrapper[4771]: I1011 10:56:21.946542 4771 scope.go:117] "RemoveContainer" containerID="9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c" Oct 11 10:56:21.946775 master-1 kubenswrapper[4771]: I1011 10:56:21.946526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d3028266-255a-43a3-8bdb-9695ad7cbb30","Type":"ContainerDied","Data":"a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183"} Oct 11 10:56:21.949718 master-1 kubenswrapper[4771]: I1011 10:56:21.949685 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.949801 master-1 kubenswrapper[4771]: I1011 10:56:21.949752 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.949841 master-1 kubenswrapper[4771]: I1011 10:56:21.949804 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jr4\" (UniqueName: \"kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.949908 master-1 kubenswrapper[4771]: I1011 10:56:21.949875 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: 
\"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.949998 master-1 kubenswrapper[4771]: I1011 10:56:21.949959 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.950274 master-1 kubenswrapper[4771]: I1011 10:56:21.950247 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.950342 master-1 kubenswrapper[4771]: I1011 10:56:21.950318 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.950413 master-1 kubenswrapper[4771]: I1011 10:56:21.950338 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs" (OuterVolumeSpecName: "logs") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:21.950461 master-1 kubenswrapper[4771]: I1011 10:56:21.950413 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts\") pod \"d3028266-255a-43a3-8bdb-9695ad7cbb30\" (UID: \"d3028266-255a-43a3-8bdb-9695ad7cbb30\") " Oct 11 10:56:21.951791 master-1 kubenswrapper[4771]: I1011 10:56:21.951465 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:21.952567 master-1 kubenswrapper[4771]: I1011 10:56:21.952016 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-logs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:21.952567 master-1 kubenswrapper[4771]: I1011 10:56:21.952050 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3028266-255a-43a3-8bdb-9695ad7cbb30-httpd-run\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:21.955321 master-1 kubenswrapper[4771]: I1011 10:56:21.955204 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4" (OuterVolumeSpecName: "kube-api-access-85jr4") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "kube-api-access-85jr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:21.958907 master-1 kubenswrapper[4771]: I1011 10:56:21.958849 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts" (OuterVolumeSpecName: "scripts") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.976586 master-1 kubenswrapper[4771]: I1011 10:56:21.976414 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b" (OuterVolumeSpecName: "glance") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:56:21.982024 master-1 kubenswrapper[4771]: I1011 10:56:21.981985 4771 scope.go:117] "RemoveContainer" containerID="7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474" Oct 11 10:56:21.989981 master-1 kubenswrapper[4771]: I1011 10:56:21.989892 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data" (OuterVolumeSpecName: "config-data") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.999738 master-1 kubenswrapper[4771]: I1011 10:56:21.999664 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:22.000110 master-1 kubenswrapper[4771]: I1011 10:56:22.000078 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d3028266-255a-43a3-8bdb-9695ad7cbb30" (UID: "d3028266-255a-43a3-8bdb-9695ad7cbb30"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:22.040535 master-2 kubenswrapper[4776]: I1011 10:56:22.040457 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:22.041000 master-2 kubenswrapper[4776]: E1011 10:56:22.040918 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041008 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 10:56:22.041026 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041034 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 10:56:22.041051 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041059 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 
10:56:22.041072 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="init" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041080 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="init" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: E1011 10:56:22.041101 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: I1011 10:56:22.041110 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: E1011 10:56:22.041135 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: I1011 10:56:22.041181 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041408 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041440 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041458 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041474 4776 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041485 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.044281 master-2 kubenswrapper[4776]: I1011 10:56:22.044240 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.053988 master-1 kubenswrapper[4771]: I1011 10:56:22.053926 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") on node \"master-1\" " Oct 11 10:56:22.053988 master-1 kubenswrapper[4771]: I1011 10:56:22.053975 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-public-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.053988 master-1 kubenswrapper[4771]: I1011 10:56:22.053990 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.053988 master-1 kubenswrapper[4771]: I1011 10:56:22.054000 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.053988 master-1 kubenswrapper[4771]: I1011 10:56:22.054014 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jr4\" (UniqueName: \"kubernetes.io/projected/d3028266-255a-43a3-8bdb-9695ad7cbb30-kube-api-access-85jr4\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.054279 master-1 kubenswrapper[4771]: I1011 
10:56:22.054029 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3028266-255a-43a3-8bdb-9695ad7cbb30-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.057123 master-2 kubenswrapper[4776]: I1011 10:56:22.057064 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:22.077056 master-1 kubenswrapper[4771]: I1011 10:56:22.077003 4771 scope.go:117] "RemoveContainer" containerID="9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c" Oct 11 10:56:22.077908 master-1 kubenswrapper[4771]: E1011 10:56:22.077872 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c\": container with ID starting with 9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c not found: ID does not exist" containerID="9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c" Oct 11 10:56:22.077987 master-1 kubenswrapper[4771]: I1011 10:56:22.077916 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c"} err="failed to get container status \"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c\": rpc error: code = NotFound desc = could not find container \"9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c\": container with ID starting with 9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c not found: ID does not exist" Oct 11 10:56:22.077987 master-1 kubenswrapper[4771]: I1011 10:56:22.077945 4771 scope.go:117] "RemoveContainer" containerID="7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474" Oct 11 10:56:22.078737 master-1 kubenswrapper[4771]: E1011 10:56:22.078626 4771 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474\": container with ID starting with 7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474 not found: ID does not exist" containerID="7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474" Oct 11 10:56:22.078737 master-1 kubenswrapper[4771]: I1011 10:56:22.078698 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474"} err="failed to get container status \"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474\": rpc error: code = NotFound desc = could not find container \"7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474\": container with ID starting with 7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474 not found: ID does not exist" Oct 11 10:56:22.081085 master-1 kubenswrapper[4771]: I1011 10:56:22.081052 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 11 10:56:22.081272 master-1 kubenswrapper[4771]: I1011 10:56:22.081251 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9" (UniqueName: "kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b") on node "master-1" Oct 11 10:56:22.134249 master-2 kubenswrapper[4776]: I1011 10:56:22.134199 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.134468 master-2 kubenswrapper[4776]: I1011 10:56:22.134277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.134468 master-2 kubenswrapper[4776]: I1011 10:56:22.134370 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.157608 master-1 kubenswrapper[4771]: I1011 10:56:22.157531 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:22.235741 master-2 kubenswrapper[4776]: I1011 10:56:22.235597 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.235741 master-2 kubenswrapper[4776]: I1011 10:56:22.235741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.235946 master-2 kubenswrapper[4776]: I1011 10:56:22.235764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.236519 master-2 kubenswrapper[4776]: I1011 10:56:22.236434 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.236628 master-2 kubenswrapper[4776]: I1011 10:56:22.236574 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.260834 master-2 kubenswrapper[4776]: I1011 10:56:22.260790 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.312757 master-1 kubenswrapper[4771]: I1011 10:56:22.312686 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:56:22.322299 master-1 kubenswrapper[4771]: I1011 10:56:22.322218 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:56:22.338198 master-2 kubenswrapper[4776]: I1011 10:56:22.338145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b"} Oct 11 10:56:22.338198 master-2 kubenswrapper[4776]: I1011 10:56:22.338203 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:56:22.339321 master-2 kubenswrapper[4776]: I1011 10:56:22.338221 4776 scope.go:117] "RemoveContainer" containerID="2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" Oct 11 10:56:22.354402 master-1 kubenswrapper[4771]: I1011 10:56:22.354307 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:56:22.354858 master-1 kubenswrapper[4771]: E1011 10:56:22.354823 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-httpd" Oct 11 10:56:22.354944 master-1 kubenswrapper[4771]: I1011 10:56:22.354856 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-httpd" Oct 11 10:56:22.354944 master-1 kubenswrapper[4771]: E1011 10:56:22.354895 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-log" Oct 11 10:56:22.354944 master-1 kubenswrapper[4771]: I1011 10:56:22.354910 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-log" Oct 11 10:56:22.355196 master-1 kubenswrapper[4771]: I1011 10:56:22.355167 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-log" Oct 11 10:56:22.355248 master-1 kubenswrapper[4771]: I1011 10:56:22.355231 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" containerName="glance-httpd" Oct 11 10:56:22.357162 master-1 kubenswrapper[4771]: I1011 10:56:22.357132 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.358348 master-2 kubenswrapper[4776]: I1011 10:56:22.358315 4776 scope.go:117] "RemoveContainer" containerID="a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" Oct 11 10:56:22.360801 master-1 kubenswrapper[4771]: I1011 10:56:22.360777 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:56:22.360988 master-1 kubenswrapper[4771]: I1011 10:56:22.360950 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:56:22.377144 master-2 kubenswrapper[4776]: I1011 10:56:22.377032 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:22.381236 master-2 kubenswrapper[4776]: I1011 10:56:22.381198 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.383961 master-2 kubenswrapper[4776]: I1011 10:56:22.383918 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:22.384496 master-1 kubenswrapper[4771]: I1011 10:56:22.384418 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"] Oct 11 10:56:22.505840 master-1 kubenswrapper[4771]: I1011 10:56:22.504532 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.505840 master-1 kubenswrapper[4771]: I1011 10:56:22.505073 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.505840 master-1 kubenswrapper[4771]: I1011 10:56:22.505205 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.505840 master-1 kubenswrapper[4771]: I1011 10:56:22.505440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwf9w\" (UniqueName: \"kubernetes.io/projected/d8e3e70e-31b7-4245-b138-fdc9401dd344-kube-api-access-vwf9w\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.505840 master-1 kubenswrapper[4771]: I1011 10:56:22.505604 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.506182 master-1 kubenswrapper[4771]: I1011 10:56:22.505873 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 
10:56:22.506182 master-1 kubenswrapper[4771]: I1011 10:56:22.505973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.506182 master-1 kubenswrapper[4771]: I1011 10:56:22.506095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.507143 master-1 kubenswrapper[4771]: I1011 10:56:22.507069 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3028266-255a-43a3-8bdb-9695ad7cbb30" path="/var/lib/kubelet/pods/d3028266-255a-43a3-8bdb-9695ad7cbb30/volumes" Oct 11 10:56:22.608101 master-1 kubenswrapper[4771]: I1011 10:56:22.608016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608498 master-1 kubenswrapper[4771]: I1011 10:56:22.608156 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608498 master-1 kubenswrapper[4771]: I1011 
10:56:22.608296 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608498 master-1 kubenswrapper[4771]: I1011 10:56:22.608348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608741 master-1 kubenswrapper[4771]: I1011 10:56:22.608711 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608813 master-1 kubenswrapper[4771]: I1011 10:56:22.608794 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608885 master-1 kubenswrapper[4771]: I1011 10:56:22.608838 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " 
pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.608976 master-1 kubenswrapper[4771]: I1011 10:56:22.608882 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwf9w\" (UniqueName: \"kubernetes.io/projected/d8e3e70e-31b7-4245-b138-fdc9401dd344-kube-api-access-vwf9w\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.609794 master-1 kubenswrapper[4771]: I1011 10:56:22.609683 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-httpd-run\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.609950 master-1 kubenswrapper[4771]: I1011 10:56:22.609800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8e3e70e-31b7-4245-b138-fdc9401dd344-logs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:22.613435 master-1 kubenswrapper[4771]: I1011 10:56:22.613396 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:56:22.613435 master-1 kubenswrapper[4771]: I1011 10:56:22.613435 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/643ba808821ea6db76a2042d255ba68bbc43444ed3cc7e332598424f5540da0c/globalmount\"" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.614278 master-1 kubenswrapper[4771]: I1011 10:56:22.614192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-combined-ca-bundle\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.615074 master-1 kubenswrapper[4771]: I1011 10:56:22.615016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-scripts\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.615185 master-1 kubenswrapper[4771]: I1011 10:56:22.615039 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-public-tls-certs\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.616595 master-1 kubenswrapper[4771]: I1011 10:56:22.616544 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e3e70e-31b7-4245-b138-fdc9401dd344-config-data\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.635598 master-1 kubenswrapper[4771]: I1011 10:56:22.635529 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwf9w\" (UniqueName: \"kubernetes.io/projected/d8e3e70e-31b7-4245-b138-fdc9401dd344-kube-api-access-vwf9w\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:22.868953 master-2 kubenswrapper[4776]: I1011 10:56:22.868909 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"]
Oct 11 10:56:22.966900 master-1 kubenswrapper[4771]: I1011 10:56:22.966837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c56a5472-3816-43a8-9a63-373c7893cd5c","Type":"ContainerStarted","Data":"2f137269af75815653291ff48643748db0ace9751acbd8894e3285de0f3bbbeb"}
Oct 11 10:56:22.967647 master-1 kubenswrapper[4771]: I1011 10:56:22.966930 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Oct 11 10:56:23.000754 master-1 kubenswrapper[4771]: I1011 10:56:23.000646 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.000626686 podStartE2EDuration="3.000626686s" podCreationTimestamp="2025-10-11 10:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:22.995402114 +0000 UTC m=+1814.969628605" watchObservedRunningTime="2025-10-11 10:56:23.000626686 +0000 UTC m=+1814.974853137"
Oct 11 10:56:23.262362 master-2 kubenswrapper[4776]: I1011 10:56:23.262307 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.160:9292/healthcheck\": read tcp 10.128.0.2:56058->10.128.0.160:9292: read: connection reset by peer"
Oct 11 10:56:23.262584 master-2 kubenswrapper[4776]: I1011 10:56:23.262339 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.160:9292/healthcheck\": read tcp 10.128.0.2:56070->10.128.0.160:9292: read: connection reset by peer"
Oct 11 10:56:23.352129 master-2 kubenswrapper[4776]: I1011 10:56:23.352061 4776 generic.go:334] "Generic (PLEG): container finished" podID="7353cefe-e495-4633-9472-93497ca94612" containerID="3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e" exitCode=0
Oct 11 10:56:23.352129 master-2 kubenswrapper[4776]: I1011 10:56:23.352138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e"}
Oct 11 10:56:23.355791 master-2 kubenswrapper[4776]: I1011 10:56:23.355744 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c" exitCode=0
Oct 11 10:56:23.355791 master-2 kubenswrapper[4776]: I1011 10:56:23.355781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c"}
Oct 11 10:56:23.355990 master-2 kubenswrapper[4776]: I1011 10:56:23.355806 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerStarted","Data":"1cb5b9d336455c89439755003242ff1ad85dfa104d62ac4092e9fd018ff8e5cd"}
Oct 11 10:56:23.421371 master-1 kubenswrapper[4771]: I1011 10:56:23.420678 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1a24-account-create-pb6gd"
Oct 11 10:56:23.527264 master-1 kubenswrapper[4771]: I1011 10:56:23.527195 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61680409-2444-4e5f-9b6b-1cb1b48ecfb9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e0cfc426-71d8-44a8-9d4b-7bb8001fb12b\") pod \"glance-b5802-default-external-api-0\" (UID: \"d8e3e70e-31b7-4245-b138-fdc9401dd344\") " pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:23.549179 master-1 kubenswrapper[4771]: I1011 10:56:23.549111 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2t9n\" (UniqueName: \"kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n\") pod \"b670525b-9ca9-419c-858b-6bb2a2303cf6\" (UID: \"b670525b-9ca9-419c-858b-6bb2a2303cf6\") "
Oct 11 10:56:23.552338 master-1 kubenswrapper[4771]: I1011 10:56:23.552295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n" (OuterVolumeSpecName: "kube-api-access-n2t9n") pod "b670525b-9ca9-419c-858b-6bb2a2303cf6" (UID: "b670525b-9ca9-419c-858b-6bb2a2303cf6"). InnerVolumeSpecName "kube-api-access-n2t9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:23.583775 master-1 kubenswrapper[4771]: I1011 10:56:23.583716 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:23.652545 master-1 kubenswrapper[4771]: I1011 10:56:23.652352 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2t9n\" (UniqueName: \"kubernetes.io/projected/b670525b-9ca9-419c-858b-6bb2a2303cf6-kube-api-access-n2t9n\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:23.976813 master-1 kubenswrapper[4771]: I1011 10:56:23.976595 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1a24-account-create-pb6gd" event={"ID":"b670525b-9ca9-419c-858b-6bb2a2303cf6","Type":"ContainerDied","Data":"3995d8b1cb4f3f3ee48eac28241686624d899abf7b682d7cf2e0300e24841328"}
Oct 11 10:56:23.976813 master-1 kubenswrapper[4771]: I1011 10:56:23.976682 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3995d8b1cb4f3f3ee48eac28241686624d899abf7b682d7cf2e0300e24841328"
Oct 11 10:56:23.976813 master-1 kubenswrapper[4771]: I1011 10:56:23.976620 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1a24-account-create-pb6gd"
Oct 11 10:56:24.005618 master-1 kubenswrapper[4771]: E1011 10:56:24.005555 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bdc960f594c7cf7b61069e0e0abb31efcf62303e52388cd0c49a5755236d9f74/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bdc960f594c7cf7b61069e0e0abb31efcf62303e52388cd0c49a5755236d9f74/diff: no such file or directory, extraDiskErr:
Oct 11 10:56:24.067033 master-2 kubenswrapper[4776]: I1011 10:56:24.066982 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" path="/var/lib/kubelet/pods/60f1a3e8-20d2-48e9-842c-9312ce07efe0/volumes"
Oct 11 10:56:24.168799 master-1 kubenswrapper[4771]: I1011 10:56:24.164759 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-0"]
Oct 11 10:56:24.210755 master-2 kubenswrapper[4776]: I1011 10:56:24.210712 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.279797 master-2 kubenswrapper[4776]: I1011 10:56:24.279750 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279827 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279849 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279957 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280183 master-2 kubenswrapper[4776]: I1011 10:56:24.280031 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280237 master-2 kubenswrapper[4776]: I1011 10:56:24.280189 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280366 master-2 kubenswrapper[4776]: I1011 10:56:24.280289 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.280439 master-2 kubenswrapper[4776]: I1011 10:56:24.280410 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") "
Oct 11 10:56:24.288474 master-2 kubenswrapper[4776]: I1011 10:56:24.288413 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs" (OuterVolumeSpecName: "logs") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:24.295790 master-1 kubenswrapper[4771]: E1011 10:56:24.295608 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bfd5c1f52f3f50fe03a7dc129b7fce9a308b01a368798b89ac9e8ed524a70b73/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bfd5c1f52f3f50fe03a7dc129b7fce9a308b01a368798b89ac9e8ed524a70b73/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_cinder-b5802-api-0_478147ef-a0d7-4c37-952c-3fc3a23775db/cinder-api/0.log" to get inode usage: stat /var/log/pods/openstack_cinder-b5802-api-0_478147ef-a0d7-4c37-952c-3fc3a23775db/cinder-api/0.log: no such file or directory
Oct 11 10:56:24.301741 master-2 kubenswrapper[4776]: I1011 10:56:24.300825 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts" (OuterVolumeSpecName: "scripts") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:24.301741 master-2 kubenswrapper[4776]: I1011 10:56:24.301046 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp" (OuterVolumeSpecName: "kube-api-access-nnqtp") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "kube-api-access-nnqtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:24.311910 master-2 kubenswrapper[4776]: I1011 10:56:24.311834 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:24.352805 master-2 kubenswrapper[4776]: I1011 10:56:24.344042 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574" (OuterVolumeSpecName: "glance") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 11 10:56:24.352805 master-2 kubenswrapper[4776]: I1011 10:56:24.346925 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:24.367448 master-2 kubenswrapper[4776]: I1011 10:56:24.367362 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372553 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"9a5837d1cb2c6c6bd2ee66332c6614d5f0898097af5aee94ed2b3ff54ca6ee42"}
Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372605 4776 scope.go:117] "RemoveContainer" containerID="3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e"
Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372637 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.382231 master-2 kubenswrapper[4776]: I1011 10:56:24.382182 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382231 master-2 kubenswrapper[4776]: I1011 10:56:24.382224 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382242 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382255 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382289 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" "
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382305 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382318 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382352 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data" (OuterVolumeSpecName: "config-data") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:24.408135 master-2 kubenswrapper[4776]: I1011 10:56:24.408094 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 11 10:56:24.408290 master-2 kubenswrapper[4776]: I1011 10:56:24.408266 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c" (UniqueName: "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574") on node "master-2"
Oct 11 10:56:24.423953 master-2 kubenswrapper[4776]: I1011 10:56:24.423930 4776 scope.go:117] "RemoveContainer" containerID="d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63"
Oct 11 10:56:24.485108 master-2 kubenswrapper[4776]: I1011 10:56:24.485037 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.485108 master-2 kubenswrapper[4776]: I1011 10:56:24.485082 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:24.710875 master-1 kubenswrapper[4771]: E1011 10:56:24.708719 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/52e44d3adbc247b2634220b9a00f35ef026c5019f60eab176f475b3a36c5dd65/diff" to get inode usage: stat /var/lib/containers/storage/overlay/52e44d3adbc247b2634220b9a00f35ef026c5019f60eab176f475b3a36c5dd65/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-b5802-default-external-api-0_d3028266-255a-43a3-8bdb-9695ad7cbb30/glance-log/0.log" to get inode usage: stat /var/log/pods/openstack_glance-b5802-default-external-api-0_d3028266-255a-43a3-8bdb-9695ad7cbb30/glance-log/0.log: no such file or directory
Oct 11 10:56:24.733126 master-2 kubenswrapper[4776]: I1011 10:56:24.733041 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"]
Oct 11 10:56:24.741920 master-2 kubenswrapper[4776]: I1011 10:56:24.741867 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"]
Oct 11 10:56:24.768808 master-2 kubenswrapper[4776]: I1011 10:56:24.768756 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"]
Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: E1011 10:56:24.769097 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log"
Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: I1011 10:56:24.769110 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log"
Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: E1011 10:56:24.769124 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd"
Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: I1011 10:56:24.769132 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd"
Oct 11 10:56:24.769559 master-2 kubenswrapper[4776]: I1011 10:56:24.769526 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd"
Oct 11 10:56:24.769559 master-2 kubenswrapper[4776]: I1011 10:56:24.769561 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log"
Oct 11 10:56:24.770733 master-2 kubenswrapper[4776]: I1011 10:56:24.770692 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.774418 master-2 kubenswrapper[4776]: I1011 10:56:24.773753 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data"
Oct 11 10:56:24.774418 master-2 kubenswrapper[4776]: I1011 10:56:24.774392 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Oct 11 10:56:24.801580 master-2 kubenswrapper[4776]: I1011 10:56:24.801520 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"]
Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893247 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893326 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893372 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893476 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893529 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893552 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893718 master-2 kubenswrapper[4776]: I1011 10:56:24.893643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.893791 master-2 kubenswrapper[4776]: I1011 10:56:24.893747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.994019 master-1 kubenswrapper[4771]: I1011 10:56:24.993825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d8e3e70e-31b7-4245-b138-fdc9401dd344","Type":"ContainerStarted","Data":"0e44421f3bf9e59656d4af212a8941201c6c81fac80f9388ebb84f64d53b59b9"}
Oct 11 10:56:24.994019 master-1 kubenswrapper[4771]: I1011 10:56:24.993912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d8e3e70e-31b7-4245-b138-fdc9401dd344","Type":"ContainerStarted","Data":"fa33ed8483db0624db533dce5d7a3e460dd7d608c7626bc3e38403fe08260f0e"}
Oct 11 10:56:24.995809 master-2 kubenswrapper[4776]: I1011 10:56:24.995764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996074 master-2 kubenswrapper[4776]: I1011 10:56:24.996058 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996175 master-2 kubenswrapper[4776]: I1011 10:56:24.996163 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996308 master-2 kubenswrapper[4776]: I1011 10:56:24.996294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996418 master-2 kubenswrapper[4776]: I1011 10:56:24.996406 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996496 master-2 kubenswrapper[4776]: I1011 10:56:24.996484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996568 master-2 kubenswrapper[4776]: I1011 10:56:24.996308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996637 master-2 kubenswrapper[4776]: I1011 10:56:24.996623 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.996738 master-2 kubenswrapper[4776]: I1011 10:56:24.996722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.997202 master-2 kubenswrapper[4776]: I1011 10:56:24.997165 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:24.998151 master-2 kubenswrapper[4776]: I1011 10:56:24.998135 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 11 10:56:24.998247 master-2 kubenswrapper[4776]: I1011 10:56:24.998231 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.000287 master-2 kubenswrapper[4776]: I1011 10:56:25.000244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.000371 master-2 kubenswrapper[4776]: I1011 10:56:25.000247 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.000371 master-2 kubenswrapper[4776]: I1011 10:56:25.000256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.000863 master-2 kubenswrapper[4776]: I1011 10:56:25.000814 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.027096 master-2 kubenswrapper[4776]: I1011 10:56:25.027024 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:25.383387 master-2 kubenswrapper[4776]: I1011 10:56:25.383292 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf" exitCode=0
Oct 11 10:56:25.383387 master-2 kubenswrapper[4776]: I1011 10:56:25.383375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf"}
Oct 11 10:56:25.607509 master-1 kubenswrapper[4771]: E1011 10:56:25.607441 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a339af6a375a54fe7e2efcfe927dc406a1de8e87afa50ea2a74db4db8b3b419d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a339af6a375a54fe7e2efcfe927dc406a1de8e87afa50ea2a74db4db8b3b419d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-b5802-default-external-api-0_d3028266-255a-43a3-8bdb-9695ad7cbb30/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-b5802-default-external-api-0_d3028266-255a-43a3-8bdb-9695ad7cbb30/glance-httpd/0.log: no such file or directory
Oct 11 10:56:26.011838 master-1 kubenswrapper[4771]: I1011 10:56:26.011696 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-0" event={"ID":"d8e3e70e-31b7-4245-b138-fdc9401dd344","Type":"ContainerStarted","Data":"1ddd60e696582c7a2790a2a426954d0b37bf6881aab474d30558dc2354cbccd2"}
Oct 11 10:56:26.059492 master-1 kubenswrapper[4771]: I1011 10:56:26.059424 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-0" podStartSLOduration=4.059400988 podStartE2EDuration="4.059400988s" podCreationTimestamp="2025-10-11 10:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:26.05159207 +0000 UTC m=+1818.025818531" watchObservedRunningTime="2025-10-11 10:56:26.059400988 +0000 UTC m=+1818.033627429"
Oct 11 10:56:26.071740 master-2 kubenswrapper[4776]: I1011 10:56:26.071622 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7353cefe-e495-4633-9472-93497ca94612" path="/var/lib/kubelet/pods/7353cefe-e495-4633-9472-93497ca94612/volumes"
Oct 11 10:56:26.116747 master-1 kubenswrapper[4771]: I1011 10:56:26.116674 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-tn8xz"]
Oct 11 10:56:26.117542 master-1 kubenswrapper[4771]: E1011 10:56:26.117521 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b670525b-9ca9-419c-858b-6bb2a2303cf6" containerName="mariadb-account-create"
Oct 11 10:56:26.117642 master-1 kubenswrapper[4771]: I1011 10:56:26.117630 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b670525b-9ca9-419c-858b-6bb2a2303cf6" containerName="mariadb-account-create"
Oct 11 10:56:26.117893 master-1 kubenswrapper[4771]: I1011 10:56:26.117876 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b670525b-9ca9-419c-858b-6bb2a2303cf6" containerName="mariadb-account-create"
Oct 11 10:56:26.118882 master-1 kubenswrapper[4771]: I1011 10:56:26.118861 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.123936 master-1 kubenswrapper[4771]: I1011 10:56:26.123828 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 11 10:56:26.124085 master-1 kubenswrapper[4771]: I1011 10:56:26.123839 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 11 10:56:26.129272 master-1 kubenswrapper[4771]: I1011 10:56:26.129220 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-tn8xz"] Oct 11 10:56:26.214332 master-1 kubenswrapper[4771]: I1011 10:56:26.214270 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgq2\" (UniqueName: \"kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.214756 master-1 kubenswrapper[4771]: I1011 10:56:26.214402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.214756 master-1 kubenswrapper[4771]: I1011 10:56:26.214463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.214756 master-1 kubenswrapper[4771]: I1011 10:56:26.214557 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.268881 master-2 kubenswrapper[4776]: I1011 10:56:26.268829 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:26.270019 master-1 kubenswrapper[4771]: I1011 10:56:26.269870 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 11 10:56:26.285905 master-2 kubenswrapper[4776]: I1011 10:56:26.285668 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:26.316304 master-1 kubenswrapper[4771]: I1011 10:56:26.316246 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mgq2\" (UniqueName: \"kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.316548 master-1 kubenswrapper[4771]: I1011 10:56:26.316327 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.316548 master-1 kubenswrapper[4771]: I1011 10:56:26.316389 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.316548 master-1 kubenswrapper[4771]: I1011 10:56:26.316442 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.320690 master-1 kubenswrapper[4771]: I1011 10:56:26.320647 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.320921 
master-1 kubenswrapper[4771]: I1011 10:56:26.320891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.332644 master-1 kubenswrapper[4771]: I1011 10:56:26.331460 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-0" Oct 11 10:56:26.333696 master-1 kubenswrapper[4771]: I1011 10:56:26.333475 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.336777 master-1 kubenswrapper[4771]: I1011 10:56:26.336740 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgq2\" (UniqueName: \"kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2\") pod \"aodh-db-sync-tn8xz\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.424925 master-2 kubenswrapper[4776]: I1011 10:56:26.424887 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerStarted","Data":"efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265"} Oct 11 10:56:26.447323 master-1 kubenswrapper[4771]: I1011 10:56:26.447292 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:26.479325 master-2 kubenswrapper[4776]: I1011 10:56:26.479256 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pr8j" podStartSLOduration=2.061721685 podStartE2EDuration="4.479239232s" podCreationTimestamp="2025-10-11 10:56:22 +0000 UTC" firstStartedPulling="2025-10-11 10:56:23.358018096 +0000 UTC m=+1818.142444805" lastFinishedPulling="2025-10-11 10:56:25.775535643 +0000 UTC m=+1820.559962352" observedRunningTime="2025-10-11 10:56:26.466623851 +0000 UTC m=+1821.251050570" watchObservedRunningTime="2025-10-11 10:56:26.479239232 +0000 UTC m=+1821.263665941" Oct 11 10:56:26.896600 master-1 kubenswrapper[4771]: I1011 10:56:26.896544 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-tn8xz"] Oct 11 10:56:26.909370 master-1 kubenswrapper[4771]: I1011 10:56:26.909305 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2kt7k"] Oct 11 10:56:26.911006 master-1 kubenswrapper[4771]: I1011 10:56:26.910964 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:26.914876 master-1 kubenswrapper[4771]: I1011 10:56:26.914812 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 11 10:56:26.914969 master-1 kubenswrapper[4771]: I1011 10:56:26.914848 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 11 10:56:26.933808 master-1 kubenswrapper[4771]: I1011 10:56:26.933730 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2kt7k"] Oct 11 10:56:26.948728 master-2 kubenswrapper[4776]: I1011 10:56:26.948652 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:26.949878 master-2 kubenswrapper[4776]: W1011 10:56:26.949818 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc645cf2_0900_4b7d_8001_91098664c4cd.slice/crio-355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69 WatchSource:0}: Error finding container 355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69: Status 404 returned error can't find the container with id 355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69 Oct 11 10:56:27.033389 master-1 kubenswrapper[4771]: I1011 10:56:27.032759 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tn8xz" event={"ID":"3de492fb-5249-49e2-a327-756234aa92bd","Type":"ContainerStarted","Data":"4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344"} Oct 11 10:56:27.044149 master-1 kubenswrapper[4771]: I1011 10:56:27.044069 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 11 10:56:27.047130 master-1 kubenswrapper[4771]: I1011 10:56:27.047061 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.051978 master-1 kubenswrapper[4771]: I1011 10:56:27.051920 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Oct 11 10:56:27.066809 master-1 kubenswrapper[4771]: I1011 10:56:27.063218 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 11 10:56:27.080343 master-1 kubenswrapper[4771]: I1011 10:56:27.076671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clg6t\" (UniqueName: \"kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.080343 master-1 kubenswrapper[4771]: I1011 10:56:27.076965 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.080343 master-1 kubenswrapper[4771]: I1011 10:56:27.077136 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.080343 master-1 kubenswrapper[4771]: I1011 10:56:27.077224 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.179431 master-1 kubenswrapper[4771]: I1011 10:56:27.179347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6bt5\" (UniqueName: \"kubernetes.io/projected/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-kube-api-access-c6bt5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.179617 master-1 kubenswrapper[4771]: I1011 10:56:27.179502 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.179656 master-1 kubenswrapper[4771]: I1011 10:56:27.179591 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.179843 master-1 kubenswrapper[4771]: I1011 10:56:27.179806 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.179898 master-1 kubenswrapper[4771]: I1011 10:56:27.179863 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.182430 master-1 kubenswrapper[4771]: I1011 10:56:27.179989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.182430 master-1 kubenswrapper[4771]: I1011 10:56:27.180212 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clg6t\" (UniqueName: \"kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.186226 master-1 kubenswrapper[4771]: I1011 10:56:27.186168 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.187020 master-1 kubenswrapper[4771]: I1011 10:56:27.186968 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.189175 master-1 kubenswrapper[4771]: I1011 10:56:27.189116 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.204476 master-2 kubenswrapper[4776]: I1011 10:56:27.204223 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:27.205645 master-2 kubenswrapper[4776]: I1011 10:56:27.205566 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:56:27.209181 master-2 kubenswrapper[4776]: I1011 10:56:27.209132 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:56:27.212399 master-1 kubenswrapper[4771]: I1011 10:56:27.212309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clg6t\" (UniqueName: \"kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t\") pod \"nova-cell0-cell-mapping-2kt7k\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.213034 master-1 kubenswrapper[4771]: I1011 10:56:27.212980 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 11 10:56:27.219781 master-1 kubenswrapper[4771]: I1011 10:56:27.219111 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 11 10:56:27.225403 master-1 kubenswrapper[4771]: I1011 10:56:27.225342 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:27.229245 master-2 kubenswrapper[4776]: I1011 10:56:27.229162 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:27.230236 master-1 kubenswrapper[4771]: I1011 10:56:27.230152 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 10:56:27.237095 master-0 kubenswrapper[4790]: I1011 10:56:27.235445 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:56:27.249320 master-0 kubenswrapper[4790]: I1011 10:56:27.249246 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:56:27.252643 master-0 kubenswrapper[4790]: I1011 10:56:27.252593 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:56:27.252887 master-0 kubenswrapper[4790]: I1011 10:56:27.252851 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1"] Oct 11 10:56:27.255172 master-0 kubenswrapper[4790]: I1011 10:56:27.255133 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1" Oct 11 10:56:27.256578 master-1 kubenswrapper[4771]: I1011 10:56:27.256512 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254332 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254383 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254492 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.257950 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:27.258783 master-0 kubenswrapper[4790]: I1011 10:56:27.258265 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:27.259426 master-1 kubenswrapper[4771]: I1011 10:56:27.259385 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:27.261567 master-1 kubenswrapper[4771]: I1011 10:56:27.261538 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:56:27.264340 master-1 kubenswrapper[4771]: I1011 10:56:27.264300 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:56:27.265758 master-1 kubenswrapper[4771]: I1011 10:56:27.265720 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:56:27.271815 master-0 kubenswrapper[4790]: I1011 10:56:27.267387 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:56:27.277107 master-0 kubenswrapper[4790]: I1011 10:56:27.276672 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:27.277107 master-0 kubenswrapper[4790]: I1011 10:56:27.276881 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6w4\" (UniqueName: \"kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:27.277107 master-0 kubenswrapper[4790]: I1011 10:56:27.276985 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1" Oct 11 10:56:27.277274 master-0 kubenswrapper[4790]: I1011 10:56:27.277073 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhcj7\" (UniqueName: 
\"kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1" Oct 11 10:56:27.277274 master-0 kubenswrapper[4790]: I1011 10:56:27.277157 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1" Oct 11 10:56:27.277274 master-0 kubenswrapper[4790]: I1011 10:56:27.277187 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:27.277361 master-0 kubenswrapper[4790]: I1011 10:56:27.277331 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1" Oct 11 10:56:27.279175 master-0 kubenswrapper[4790]: I1011 10:56:27.279116 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:56:27.284076 master-1 kubenswrapper[4771]: I1011 10:56:27.282578 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6bt5\" (UniqueName: \"kubernetes.io/projected/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-kube-api-access-c6bt5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.284347 master-1 kubenswrapper[4771]: 
I1011 10:56:27.284307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.284597 master-1 kubenswrapper[4771]: I1011 10:56:27.284575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.290837 master-1 kubenswrapper[4771]: I1011 10:56:27.287392 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.292645 master-1 kubenswrapper[4771]: I1011 10:56:27.292607 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:27.309261 master-1 kubenswrapper[4771]: I1011 10:56:27.309210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6bt5\" (UniqueName: \"kubernetes.io/projected/3a8c3af6-0b6a-486d-83e3-18bf00346dbc-kube-api-access-c6bt5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3a8c3af6-0b6a-486d-83e3-18bf00346dbc\") " 
pod="openstack/nova-cell1-compute-ironic-compute-0"
Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309829 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309926 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310090 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310174 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310237 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.351931 master-1 kubenswrapper[4771]: I1011 10:56:27.351106 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 11 10:56:27.356779 master-1 kubenswrapper[4771]: I1011 10:56:27.353898 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.357659 master-1 kubenswrapper[4771]: I1011 10:56:27.357622 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Oct 11 10:56:27.366053 master-1 kubenswrapper[4771]: I1011 10:56:27.366016 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 11 10:56:27.379491 master-0 kubenswrapper[4790]: I1011 10:56:27.379410 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.379788 master-0 kubenswrapper[4790]: I1011 10:56:27.379533 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6w4\" (UniqueName: \"kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.379788 master-0 kubenswrapper[4790]: I1011 10:56:27.379577 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.379788 master-0 kubenswrapper[4790]: I1011 10:56:27.379620 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhcj7\" (UniqueName: \"kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.379788 master-0 kubenswrapper[4790]: I1011 10:56:27.379643 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.379788 master-0 kubenswrapper[4790]: I1011 10:56:27.379665 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.380069 master-0 kubenswrapper[4790]: I1011 10:56:27.379801 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.380433 master-0 kubenswrapper[4790]: I1011 10:56:27.380384 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.381042 master-1 kubenswrapper[4771]: I1011 10:56:27.379565 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Oct 11 10:56:27.384078 master-0 kubenswrapper[4790]: I1011 10:56:27.384013 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.384664 master-0 kubenswrapper[4790]: I1011 10:56:27.384405 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.388624 master-0 kubenswrapper[4790]: I1011 10:56:27.388565 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.392272 master-0 kubenswrapper[4790]: I1011 10:56:27.392211 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.392904 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.392969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.393039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.393069 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.393117 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjhk\" (UniqueName: \"kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.393162 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.393431 master-1 kubenswrapper[4771]: I1011 10:56:27.393196 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcdx\" (UniqueName: \"kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.405426 master-0 kubenswrapper[4790]: I1011 10:56:27.405354 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhcj7\" (UniqueName: \"kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7\") pod \"nova-api-1\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " pod="openstack/nova-api-1"
Oct 11 10:56:27.408821 master-0 kubenswrapper[4790]: I1011 10:56:27.408760 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6w4\" (UniqueName: \"kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4\") pod \"nova-scheduler-2\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") " pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.415518 master-2 kubenswrapper[4776]: I1011 10:56:27.415462 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.415818 master-2 kubenswrapper[4776]: I1011 10:56:27.415660 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.415915 master-2 kubenswrapper[4776]: I1011 10:56:27.415826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.415915 master-2 kubenswrapper[4776]: I1011 10:56:27.415876 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.416003 master-2 kubenswrapper[4776]: I1011 10:56:27.415922 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.416003 master-2 kubenswrapper[4776]: I1011 10:56:27.415956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.419033 master-2 kubenswrapper[4776]: I1011 10:56:27.419000 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.419326 master-2 kubenswrapper[4776]: I1011 10:56:27.419291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.419504 master-2 kubenswrapper[4776]: I1011 10:56:27.419476 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.419813 master-2 kubenswrapper[4776]: I1011 10:56:27.419771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.420517 master-2 kubenswrapper[4776]: I1011 10:56:27.420497 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.421580 master-2 kubenswrapper[4776]: I1011 10:56:27.421539 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.438312 master-2 kubenswrapper[4776]: I1011 10:56:27.438273 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2"
Oct 11 10:56:27.440646 master-2 kubenswrapper[4776]: I1011 10:56:27.440563 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.453814 master-2 kubenswrapper[4776]: I1011 10:56:27.452209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69"}
Oct 11 10:56:27.495542 master-1 kubenswrapper[4771]: I1011 10:56:27.495221 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.495542 master-1 kubenswrapper[4771]: I1011 10:56:27.495321 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.495542 master-1 kubenswrapper[4771]: I1011 10:56:27.495385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.495542 master-1 kubenswrapper[4771]: I1011 10:56:27.495426 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.495488 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zqd\" (UniqueName: \"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.495987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.496025 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.496077 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjhk\" (UniqueName: \"kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.496118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.496279 master-1 kubenswrapper[4771]: I1011 10:56:27.496151 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkcdx\" (UniqueName: \"kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.497158 master-1 kubenswrapper[4771]: I1011 10:56:27.497099 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.500051 master-1 kubenswrapper[4771]: I1011 10:56:27.500003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.500519 master-1 kubenswrapper[4771]: I1011 10:56:27.500493 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.502290 master-1 kubenswrapper[4771]: I1011 10:56:27.502248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.510885 master-1 kubenswrapper[4771]: I1011 10:56:27.508427 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.521106 master-1 kubenswrapper[4771]: I1011 10:56:27.521069 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkcdx\" (UniqueName: \"kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx\") pod \"nova-api-0\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") " pod="openstack/nova-api-0"
Oct 11 10:56:27.539377 master-2 kubenswrapper[4776]: I1011 10:56:27.539335 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 10:56:27.543342 master-1 kubenswrapper[4771]: I1011 10:56:27.543276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjhk\" (UniqueName: \"kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk\") pod \"nova-scheduler-1\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.549205 master-1 kubenswrapper[4771]: I1011 10:56:27.549152 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 10:56:27.580344 master-1 kubenswrapper[4771]: I1011 10:56:27.580229 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1"
Oct 11 10:56:27.585872 master-2 kubenswrapper[4776]: I1011 10:56:27.585834 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 11 10:56:27.589975 master-0 kubenswrapper[4790]: I1011 10:56:27.589176 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2"
Oct 11 10:56:27.598830 master-2 kubenswrapper[4776]: I1011 10:56:27.598771 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:56:27.599287 master-1 kubenswrapper[4771]: I1011 10:56:27.599234 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zqd\" (UniqueName: \"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.599594 master-1 kubenswrapper[4771]: I1011 10:56:27.599400 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.599973 master-1 kubenswrapper[4771]: I1011 10:56:27.599787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.600261 master-2 kubenswrapper[4776]: I1011 10:56:27.600217 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:56:27.603661 master-2 kubenswrapper[4776]: I1011 10:56:27.603631 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 10:56:27.603782 master-0 kubenswrapper[4790]: I1011 10:56:27.603703 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Oct 11 10:56:27.604128 master-1 kubenswrapper[4771]: I1011 10:56:27.604078 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.606381 master-1 kubenswrapper[4771]: I1011 10:56:27.606331 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.621662 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.622067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.622118 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.622508 master-2 kubenswrapper[4776]: I1011 10:56:27.622332 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.622508 master-2 kubenswrapper[4776]: I1011 10:56:27.622425 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.623254 master-1 kubenswrapper[4771]: I1011 10:56:27.623213 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:56:27.628251 master-1 kubenswrapper[4771]: I1011 10:56:27.625038 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1"
Oct 11 10:56:27.629675 master-1 kubenswrapper[4771]: I1011 10:56:27.629619 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 10:56:27.631413 master-1 kubenswrapper[4771]: I1011 10:56:27.630834 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zqd\" (UniqueName: \"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.631764 master-0 kubenswrapper[4790]: I1011 10:56:27.631688 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:27.633400 master-0 kubenswrapper[4790]: I1011 10:56:27.633375 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2"
Oct 11 10:56:27.636173 master-1 kubenswrapper[4771]: I1011 10:56:27.636111 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:56:27.638830 master-0 kubenswrapper[4790]: I1011 10:56:27.636783 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 10:56:27.653154 master-0 kubenswrapper[4790]: I1011 10:56:27.653084 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:27.687629 master-0 kubenswrapper[4790]: I1011 10:56:27.687193 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.687629 master-0 kubenswrapper[4790]: I1011 10:56:27.687342 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.687629 master-0 kubenswrapper[4790]: I1011 10:56:27.687397 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.687629 master-0 kubenswrapper[4790]: I1011 10:56:27.687443 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktcw\" (UniqueName: \"kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724451 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724519 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724585 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.727611 master-2 kubenswrapper[4776]: I1011 10:56:27.727574 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.728456 master-1 kubenswrapper[4771]: I1011 10:56:27.728297 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2kt7k"]
Oct 11 10:56:27.729387 master-2 kubenswrapper[4776]: I1011 10:56:27.728937 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.738739 master-2 kubenswrapper[4776]: I1011 10:56:27.738376 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.758286 master-2 kubenswrapper[4776]: I1011 10:56:27.758241 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0"
Oct 11 10:56:27.776764 master-1 kubenswrapper[4771]: I1011 10:56:27.776714 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 11 10:56:27.790629 master-0 kubenswrapper[4790]: I1011 10:56:27.790256 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.790629 master-0 kubenswrapper[4790]: I1011 10:56:27.790343 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.790629 master-0 kubenswrapper[4790]: I1011 10:56:27.790388 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cktcw\" (UniqueName: \"kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.790629 master-0 kubenswrapper[4790]: I1011 10:56:27.790428 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.791397 master-0 kubenswrapper[4790]: I1011 10:56:27.791305 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.795259 master-2 kubenswrapper[4776]: I1011 10:56:27.793875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:56:27.799678 master-0 kubenswrapper[4790]: I1011 10:56:27.798727 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.801242 master-0 kubenswrapper[4790]: I1011 10:56:27.800550 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2"
Oct 11 10:56:27.804775 master-2 kubenswrapper[4776]: I1011 10:56:27.804723 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"]
Oct 11 10:56:27.806567 master-2 kubenswrapper[4776]: I1011 10:56:27.806428 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.806752 master-1 kubenswrapper[4771]: I1011 10:56:27.806691 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.806953 master-1 kubenswrapper[4771]: I1011 10:56:27.806760 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.806953 master-1 kubenswrapper[4771]: I1011 10:56:27.806802 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqmr\" (UniqueName: \"kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.806953 master-1 kubenswrapper[4771]: I1011 10:56:27.806946 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.811817 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.818864 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 
10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.821799 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.822368 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:56:27.831897 master-0 kubenswrapper[4790]: I1011 10:56:27.830435 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cktcw\" (UniqueName: \"kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw\") pod \"nova-metadata-2\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") " pod="openstack/nova-metadata-2" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.824695 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831037 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831400 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831487 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831625 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831724 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.847072 master-2 kubenswrapper[4776]: I1011 10:56:27.847015 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:56:27.873496 master-1 kubenswrapper[4771]: I1011 10:56:27.873415 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 11 10:56:27.909221 master-1 kubenswrapper[4771]: I1011 10:56:27.909162 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.909491 master-1 kubenswrapper[4771]: I1011 10:56:27.909240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.909491 master-1 kubenswrapper[4771]: I1011 10:56:27.909310 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scqmr\" (UniqueName: \"kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.909491 master-1 kubenswrapper[4771]: I1011 10:56:27.909445 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.909956 master-1 kubenswrapper[4771]: I1011 10:56:27.909910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.914179 master-1 kubenswrapper[4771]: I1011 10:56:27.914121 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " 
pod="openstack/nova-metadata-1" Oct 11 10:56:27.916863 master-1 kubenswrapper[4771]: I1011 10:56:27.916679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933441 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " 
pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933543 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.940425 master-1 kubenswrapper[4771]: I1011 10:56:27.937530 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scqmr\" (UniqueName: \"kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr\") pod \"nova-metadata-1\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " pod="openstack/nova-metadata-1" Oct 11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.938239 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.939096 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 
11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.940162 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.941787 master-2 kubenswrapper[4776]: I1011 10:56:27.941744 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.943589 master-2 kubenswrapper[4776]: I1011 10:56:27.943480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.961521 master-1 kubenswrapper[4771]: I1011 10:56:27.961464 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:56:27.968665 master-2 kubenswrapper[4776]: I1011 10:56:27.968152 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:28.041936 master-0 kubenswrapper[4790]: I1011 10:56:28.041841 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:56:28.044878 master-1 kubenswrapper[4771]: I1011 10:56:28.044814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"3a8c3af6-0b6a-486d-83e3-18bf00346dbc","Type":"ContainerStarted","Data":"21b5136e598560355383aa18f08807d7668d8be22b7a488b7e02d1b5439e130e"} Oct 11 10:56:28.063256 master-1 kubenswrapper[4771]: I1011 10:56:28.052434 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2kt7k" event={"ID":"f85d5cfa-8073-4bbf-9eff-78fde719dadf","Type":"ContainerStarted","Data":"03e80f5bdb6844a3112427ed3612b145765c86f689c582771359401e14c9758e"} Oct 11 10:56:28.063256 master-1 kubenswrapper[4771]: I1011 10:56:28.052512 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2kt7k" event={"ID":"f85d5cfa-8073-4bbf-9eff-78fde719dadf","Type":"ContainerStarted","Data":"792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba"} Oct 11 10:56:28.063256 master-1 kubenswrapper[4771]: I1011 10:56:28.062890 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 10:56:28.074635 master-1 kubenswrapper[4771]: I1011 10:56:28.074573 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tgg64"] Oct 11 10:56:28.076853 master-1 kubenswrapper[4771]: I1011 10:56:28.076835 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.080876 master-1 kubenswrapper[4771]: I1011 10:56:28.080824 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 11 10:56:28.080940 master-1 kubenswrapper[4771]: I1011 10:56:28.080845 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tgg64"] Oct 11 10:56:28.081153 master-1 kubenswrapper[4771]: I1011 10:56:28.081134 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 11 10:56:28.138127 master-1 kubenswrapper[4771]: I1011 10:56:28.136694 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jhb\" (UniqueName: \"kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.138127 master-1 kubenswrapper[4771]: I1011 10:56:28.136739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.138127 master-1 kubenswrapper[4771]: I1011 10:56:28.136782 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.138127 master-1 kubenswrapper[4771]: I1011 10:56:28.136860 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.142035 master-1 kubenswrapper[4771]: I1011 10:56:28.141692 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2kt7k" podStartSLOduration=2.141663152 podStartE2EDuration="2.141663152s" podCreationTimestamp="2025-10-11 10:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:28.094329322 +0000 UTC m=+1820.068555763" watchObservedRunningTime="2025-10-11 10:56:28.141663152 +0000 UTC m=+1820.115889593" Oct 11 10:56:28.147092 master-2 kubenswrapper[4776]: I1011 10:56:28.146385 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:28.165353 master-2 kubenswrapper[4776]: I1011 10:56:28.165121 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:28.169030 master-2 kubenswrapper[4776]: W1011 10:56:28.168838 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a3bc084_f5d9_4e64_9350_d2c3b3487e76.slice/crio-bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e WatchSource:0}: Error finding container bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e: Status 404 returned error can't find the container with id bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e Oct 11 10:56:28.191249 master-0 kubenswrapper[4790]: I1011 10:56:28.191180 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:56:28.191952 master-0 kubenswrapper[4790]: W1011 10:56:28.191909 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod253852fc_de03_49f0_8e18_b3ccba3d4966.slice/crio-836f61a45327b518c64768eb8146728d82afb4f76df022ecfc8b1a40c35ad99e WatchSource:0}: Error finding container 836f61a45327b518c64768eb8146728d82afb4f76df022ecfc8b1a40c35ad99e: Status 404 returned error can't find the container with id 836f61a45327b518c64768eb8146728d82afb4f76df022ecfc8b1a40c35ad99e Oct 11 10:56:28.252956 master-1 kubenswrapper[4771]: I1011 10:56:28.225206 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:56:28.264239 master-0 kubenswrapper[4790]: I1011 10:56:28.264179 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:56:28.266190 master-0 kubenswrapper[4790]: W1011 10:56:28.266002 4790 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0edb0512_334f_4bfd_b297_cce29a7c510b.slice/crio-ce923dabd975e7c88686332f8fea7fed5cd13b9284d3e5e9dcd78a4cee6b26bf WatchSource:0}: Error finding container ce923dabd975e7c88686332f8fea7fed5cd13b9284d3e5e9dcd78a4cee6b26bf: Status 404 returned error can't find the container with id ce923dabd975e7c88686332f8fea7fed5cd13b9284d3e5e9dcd78a4cee6b26bf Oct 11 10:56:28.318176 master-1 kubenswrapper[4771]: I1011 10:56:28.318117 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.322535 master-1 kubenswrapper[4771]: I1011 10:56:28.322498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jhb\" (UniqueName: \"kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.322620 master-1 kubenswrapper[4771]: I1011 10:56:28.322570 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.323232 master-1 kubenswrapper[4771]: I1011 10:56:28.323186 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: 
\"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.326999 master-1 kubenswrapper[4771]: I1011 10:56:28.326954 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.328348 master-1 kubenswrapper[4771]: I1011 10:56:28.328283 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.328837 master-1 kubenswrapper[4771]: I1011 10:56:28.328785 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.329538 master-1 kubenswrapper[4771]: I1011 10:56:28.329496 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:56:28.355822 master-1 kubenswrapper[4771]: I1011 10:56:28.355720 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jhb\" (UniqueName: \"kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb\") pod \"nova-cell1-conductor-db-sync-tgg64\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.360009 master-1 kubenswrapper[4771]: I1011 10:56:28.359947 4771 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:56:28.387813 master-0 kubenswrapper[4790]: I1011 10:56:28.387749 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"0edb0512-334f-4bfd-b297-cce29a7c510b","Type":"ContainerStarted","Data":"ce923dabd975e7c88686332f8fea7fed5cd13b9284d3e5e9dcd78a4cee6b26bf"} Oct 11 10:56:28.392676 master-0 kubenswrapper[4790]: I1011 10:56:28.392632 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerStarted","Data":"836f61a45327b518c64768eb8146728d82afb4f76df022ecfc8b1a40c35ad99e"} Oct 11 10:56:28.465514 master-2 kubenswrapper[4776]: I1011 10:56:28.465391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"9692711be9bc0b51cc13331b85e1b1ad12740fb7e2db869ee8c8329fe0153917"} Oct 11 10:56:28.488114 master-2 kubenswrapper[4776]: I1011 10:56:28.471841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e"} Oct 11 10:56:28.510721 master-1 kubenswrapper[4771]: I1011 10:56:28.510612 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:28.517783 master-0 kubenswrapper[4790]: W1011 10:56:28.517151 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1479006f_cac6_481e_86de_1ec1bed55c2d.slice/crio-c88496f1c62cf4144371d31cb98a180ae9bb8ab68b19a997dabce86192b784b4 WatchSource:0}: Error finding container c88496f1c62cf4144371d31cb98a180ae9bb8ab68b19a997dabce86192b784b4: Status 404 returned error can't find the container with id c88496f1c62cf4144371d31cb98a180ae9bb8ab68b19a997dabce86192b784b4 Oct 11 10:56:28.517783 master-0 kubenswrapper[4790]: I1011 10:56:28.517308 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:28.567116 master-2 kubenswrapper[4776]: I1011 10:56:28.565058 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:28.573444 master-2 kubenswrapper[4776]: I1011 10:56:28.573397 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:56:28.818941 master-2 kubenswrapper[4776]: W1011 10:56:28.818813 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be WatchSource:0}: Error finding container b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be: Status 404 returned error can't find the container with id b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be Oct 11 10:56:28.821816 master-2 kubenswrapper[4776]: I1011 10:56:28.821768 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:56:28.976405 master-1 kubenswrapper[4771]: I1011 10:56:28.976316 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-tgg64"] Oct 11 10:56:29.070503 master-1 kubenswrapper[4771]: I1011 10:56:29.070446 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerStarted","Data":"089dbb4c74bf7c97de17802c964262c34b58cf3f278035cc8b343f4df54f1f61"} Oct 11 10:56:29.072943 master-1 kubenswrapper[4771]: I1011 10:56:29.072674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerStarted","Data":"901bf1754d9ccab6be8ecc339468f500f7f6d434ca5a6c15ca5caad9d817a352"} Oct 11 10:56:29.074728 master-1 kubenswrapper[4771]: I1011 10:56:29.074700 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"602a8d3a-2ca2-43d2-8def-5718d9baf2ee","Type":"ContainerStarted","Data":"9b7f7d7c0af1b640b51f7a6b8a5687c5423669e8ea192915f2e34e079daaef17"} Oct 11 10:56:29.076544 master-1 kubenswrapper[4771]: I1011 10:56:29.076512 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tgg64" event={"ID":"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad","Type":"ContainerStarted","Data":"8225c71dadd5dd5d7cdb7b603f12129b97565dbfb98b6f8553a5f73b645e62cc"} Oct 11 10:56:29.077973 master-1 kubenswrapper[4771]: I1011 10:56:29.077939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4619bcb1-090e-4824-adfe-6a526158d0ea","Type":"ContainerStarted","Data":"2855d39e0600653f4ca98e1b1c4a631cd2cea811da489ab4ee595433738a99d4"} Oct 11 10:56:29.421427 master-0 kubenswrapper[4790]: I1011 10:56:29.421296 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerStarted","Data":"c88496f1c62cf4144371d31cb98a180ae9bb8ab68b19a997dabce86192b784b4"} Oct 11 10:56:29.488381 master-2 
kubenswrapper[4776]: I1011 10:56:29.488208 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerStarted","Data":"443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c"} Oct 11 10:56:29.498730 master-2 kubenswrapper[4776]: I1011 10:56:29.498636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"9b37b95095e4c99e4c588f15858480444284fe16e29197075e74b845d5fdd23b"} Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501354 4776 generic.go:334] "Generic (PLEG): container finished" podID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerID="c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79" exitCode=0 Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501408 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79"} Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerStarted","Data":"b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be"} Oct 11 10:56:29.506042 master-2 kubenswrapper[4776]: I1011 10:56:29.505992 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"591aab95f05801f0019f8a3fc7d7819e1d10c278972d44e1c7797799e5a31883"} Oct 11 10:56:29.649057 master-2 kubenswrapper[4776]: I1011 10:56:29.645612 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=5.645595262 podStartE2EDuration="5.645595262s" podCreationTimestamp="2025-10-11 10:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:29.644733608 +0000 UTC m=+1824.429160317" watchObservedRunningTime="2025-10-11 10:56:29.645595262 +0000 UTC m=+1824.430021971" Oct 11 10:56:29.942384 master-1 kubenswrapper[4771]: I1011 10:56:29.940262 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 10:56:30.112371 master-1 kubenswrapper[4771]: I1011 10:56:30.108710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tgg64" event={"ID":"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad","Type":"ContainerStarted","Data":"af56a7e4623de207ef8289e7bba0d65eef5da9d57f459e288f321109c3a8e4f3"} Oct 11 10:56:30.151382 master-1 kubenswrapper[4771]: I1011 10:56:30.150237 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tgg64" podStartSLOduration=2.150148926 podStartE2EDuration="2.150148926s" podCreationTimestamp="2025-10-11 10:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:30.128858605 +0000 UTC m=+1822.103085046" watchObservedRunningTime="2025-10-11 10:56:30.150148926 +0000 UTC m=+1822.124375367" Oct 11 10:56:30.522122 master-2 kubenswrapper[4776]: I1011 10:56:30.521942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerStarted","Data":"a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142"} Oct 11 10:56:30.522122 master-2 kubenswrapper[4776]: I1011 10:56:30.522087 4776 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:30.554619 master-2 kubenswrapper[4776]: I1011 10:56:30.554287 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" podStartSLOduration=3.554251672 podStartE2EDuration="3.554251672s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:30.549603795 +0000 UTC m=+1825.334030504" watchObservedRunningTime="2025-10-11 10:56:30.554251672 +0000 UTC m=+1825.338678381" Oct 11 10:56:31.406381 master-1 kubenswrapper[4771]: I1011 10:56:31.405580 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:56:31.486079 master-0 kubenswrapper[4790]: I1011 10:56:31.485997 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:32.382077 master-2 kubenswrapper[4776]: I1011 10:56:32.381970 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.388941 master-2 kubenswrapper[4776]: I1011 10:56:32.388866 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.436768 master-2 kubenswrapper[4776]: I1011 10:56:32.436708 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.597065 master-2 kubenswrapper[4776]: I1011 10:56:32.597008 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.697185 master-2 kubenswrapper[4776]: I1011 10:56:32.697134 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 
10:56:33.585261 master-1 kubenswrapper[4771]: I1011 10:56:33.585179 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:33.585261 master-1 kubenswrapper[4771]: I1011 10:56:33.585265 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:33.618135 master-1 kubenswrapper[4771]: I1011 10:56:33.618077 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:33.643242 master-1 kubenswrapper[4771]: I1011 10:56:33.643195 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:34.153075 master-2 kubenswrapper[4776]: I1011 10:56:34.153019 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:34.154124 master-2 kubenswrapper[4776]: I1011 10:56:34.154090 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" containerID="cri-o://6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b" gracePeriod=30 Oct 11 10:56:34.154417 master-1 kubenswrapper[4771]: I1011 10:56:34.154294 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:34.154417 master-1 kubenswrapper[4771]: I1011 10:56:34.154403 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-0" Oct 11 10:56:34.561464 master-2 kubenswrapper[4776]: I1011 10:56:34.561409 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerID="6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b" exitCode=2 
Oct 11 10:56:34.561812 master-2 kubenswrapper[4776]: I1011 10:56:34.561788 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pr8j" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server" containerID="cri-o://efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" gracePeriod=2
Oct 11 10:56:34.562700 master-2 kubenswrapper[4776]: I1011 10:56:34.562276 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerDied","Data":"6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b"}
Oct 11 10:56:35.175641 master-1 kubenswrapper[4771]: I1011 10:56:35.175322 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4619bcb1-090e-4824-adfe-6a526158d0ea","Type":"ContainerStarted","Data":"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8"}
Oct 11 10:56:35.185376 master-1 kubenswrapper[4771]: I1011 10:56:35.184825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerStarted","Data":"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae"}
Oct 11 10:56:35.185376 master-1 kubenswrapper[4771]: I1011 10:56:35.184900 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerStarted","Data":"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f"}
Oct 11 10:56:35.191381 master-1 kubenswrapper[4771]: I1011 10:56:35.191325 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerStarted","Data":"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"}
Oct 11 10:56:35.191381 master-1 kubenswrapper[4771]: I1011 10:56:35.191367 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerStarted","Data":"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"}
Oct 11 10:56:35.193500 master-1 kubenswrapper[4771]: I1011 10:56:35.193466 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"602a8d3a-2ca2-43d2-8def-5718d9baf2ee","Type":"ContainerStarted","Data":"23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547"}
Oct 11 10:56:35.193652 master-1 kubenswrapper[4771]: I1011 10:56:35.193586 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547" gracePeriod=30
Oct 11 10:56:35.199571 master-1 kubenswrapper[4771]: I1011 10:56:35.199485 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tn8xz" event={"ID":"3de492fb-5249-49e2-a327-756234aa92bd","Type":"ContainerStarted","Data":"38fe7e740cc7430b1900679565564cc35f6e1964bf7c4a238c960c0377445331"}
Oct 11 10:56:35.211692 master-1 kubenswrapper[4771]: I1011 10:56:35.211599 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" podStartSLOduration=2.431765014 podStartE2EDuration="8.21157798s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.313837973 +0000 UTC m=+1820.288064414" lastFinishedPulling="2025-10-11 10:56:34.093650939 +0000 UTC m=+1826.067877380" observedRunningTime="2025-10-11 10:56:35.206577324 +0000 UTC m=+1827.180803785" watchObservedRunningTime="2025-10-11 10:56:35.21157798 +0000 UTC m=+1827.185804421"
Oct 11 10:56:35.575376 master-2 kubenswrapper[4776]: I1011 10:56:35.575292 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" exitCode=0
Oct 11 10:56:35.575376 master-2 kubenswrapper[4776]: I1011 10:56:35.575363 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265"}
Oct 11 10:56:36.085447 master-1 kubenswrapper[4771]: I1011 10:56:36.084809 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-1" podStartSLOduration=3.359572071 podStartE2EDuration="9.084772235s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.368023493 +0000 UTC m=+1820.342249934" lastFinishedPulling="2025-10-11 10:56:34.093223647 +0000 UTC m=+1826.067450098" observedRunningTime="2025-10-11 10:56:36.038694472 +0000 UTC m=+1828.012920993" watchObservedRunningTime="2025-10-11 10:56:36.084772235 +0000 UTC m=+1828.058998676"
Oct 11 10:56:36.090912 master-1 kubenswrapper[4771]: I1011 10:56:36.088220 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.313161298 podStartE2EDuration="9.088205085s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.327914044 +0000 UTC m=+1820.302140485" lastFinishedPulling="2025-10-11 10:56:34.102957821 +0000 UTC m=+1826.077184272" observedRunningTime="2025-10-11 10:56:36.077536474 +0000 UTC m=+1828.051762925" watchObservedRunningTime="2025-10-11 10:56:36.088205085 +0000 UTC m=+1828.062431526"
Oct 11 10:56:36.113564 master-1 kubenswrapper[4771]: I1011 10:56:36.113321 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:36.130101 master-1 kubenswrapper[4771]: I1011 10:56:36.129987 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-tn8xz" podStartSLOduration=2.786464607 podStartE2EDuration="10.129955633s" podCreationTimestamp="2025-10-11 10:56:26 +0000 UTC" firstStartedPulling="2025-10-11 10:56:26.907478891 +0000 UTC m=+1818.881705342" lastFinishedPulling="2025-10-11 10:56:34.250969927 +0000 UTC m=+1826.225196368" observedRunningTime="2025-10-11 10:56:36.110264029 +0000 UTC m=+1828.084490490" watchObservedRunningTime="2025-10-11 10:56:36.129955633 +0000 UTC m=+1828.104182084"
Oct 11 10:56:36.145751 master-1 kubenswrapper[4771]: I1011 10:56:36.145634 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.106001827 podStartE2EDuration="9.145605939s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.057290692 +0000 UTC m=+1820.031517133" lastFinishedPulling="2025-10-11 10:56:34.096894774 +0000 UTC m=+1826.071121245" observedRunningTime="2025-10-11 10:56:36.138105981 +0000 UTC m=+1828.112332422" watchObservedRunningTime="2025-10-11 10:56:36.145605939 +0000 UTC m=+1828.119832390"
Oct 11 10:56:36.210612 master-1 kubenswrapper[4771]: I1011 10:56:36.210531 4771 generic.go:334] "Generic (PLEG): container finished" podID="f85d5cfa-8073-4bbf-9eff-78fde719dadf" containerID="03e80f5bdb6844a3112427ed3612b145765c86f689c582771359401e14c9758e" exitCode=0
Oct 11 10:56:36.211878 master-1 kubenswrapper[4771]: I1011 10:56:36.211005 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 10:56:36.211878 master-1 kubenswrapper[4771]: I1011 10:56:36.211551 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2kt7k" event={"ID":"f85d5cfa-8073-4bbf-9eff-78fde719dadf","Type":"ContainerDied","Data":"03e80f5bdb6844a3112427ed3612b145765c86f689c582771359401e14c9758e"}
Oct 11 10:56:36.286653 master-2 kubenswrapper[4776]: I1011 10:56:36.286524 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:36.286653 master-2 kubenswrapper[4776]: I1011 10:56:36.286597 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:36.326720 master-2 kubenswrapper[4776]: I1011 10:56:36.318629 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:36.336854 master-2 kubenswrapper[4776]: I1011 10:56:36.335760 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:36.362086 master-1 kubenswrapper[4771]: I1011 10:56:36.359579 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:56:36.362086 master-1 kubenswrapper[4771]: I1011 10:56:36.360318 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="sg-core" containerID="cri-o://dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68" gracePeriod=30
Oct 11 10:56:36.362086 master-1 kubenswrapper[4771]: I1011 10:56:36.360322 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="proxy-httpd" containerID="cri-o://505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe" gracePeriod=30
Oct 11 10:56:36.362086 master-1 kubenswrapper[4771]: I1011 10:56:36.360368 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-notification-agent" containerID="cri-o://6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111" gracePeriod=30
Oct 11 10:56:36.362086 master-1 kubenswrapper[4771]: I1011 10:56:36.360633 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-central-agent" containerID="cri-o://2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536" gracePeriod=30
Oct 11 10:56:36.456922 master-1 kubenswrapper[4771]: I1011 10:56:36.456509 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-0"
Oct 11 10:56:36.585800 master-2 kubenswrapper[4776]: I1011 10:56:36.585395 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:36.585800 master-2 kubenswrapper[4776]: I1011 10:56:36.585440 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:37.235768 master-1 kubenswrapper[4771]: I1011 10:56:37.234806 4771 generic.go:334] "Generic (PLEG): container finished" podID="3de492fb-5249-49e2-a327-756234aa92bd" containerID="38fe7e740cc7430b1900679565564cc35f6e1964bf7c4a238c960c0377445331" exitCode=0
Oct 11 10:56:37.236501 master-1 kubenswrapper[4771]: I1011 10:56:37.235756 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tn8xz" event={"ID":"3de492fb-5249-49e2-a327-756234aa92bd","Type":"ContainerDied","Data":"38fe7e740cc7430b1900679565564cc35f6e1964bf7c4a238c960c0377445331"}
Oct 11 10:56:37.241674 master-1 kubenswrapper[4771]: I1011 10:56:37.241455 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerID="505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe" exitCode=0
Oct 11 10:56:37.241674 master-1 kubenswrapper[4771]: I1011 10:56:37.241568 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerID="dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68" exitCode=2
Oct 11 10:56:37.241674 master-1 kubenswrapper[4771]: I1011 10:56:37.241580 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerID="2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536" exitCode=0
Oct 11 10:56:37.243665 master-1 kubenswrapper[4771]: I1011 10:56:37.243529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerDied","Data":"505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe"}
Oct 11 10:56:37.243951 master-1 kubenswrapper[4771]: I1011 10:56:37.243924 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerDied","Data":"dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68"}
Oct 11 10:56:37.244016 master-1 kubenswrapper[4771]: I1011 10:56:37.243953 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerDied","Data":"2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536"}
Oct 11 10:56:37.390607 master-1 kubenswrapper[4771]: W1011 10:56:37.390534 4771 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb670525b_9ca9_419c_858b_6bb2a2303cf6.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb670525b_9ca9_419c_858b_6bb2a2303cf6.slice: no such file or directory
Oct 11 10:56:37.498885 master-2 kubenswrapper[4776]: I1011 10:56:37.498848 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j"
Oct 11 10:56:37.517160 master-1 kubenswrapper[4771]: E1011 10:56:37.517052 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice\": RecentStats: unable to find data in memory cache]"
Oct 11 10:56:37.525280 master-1 kubenswrapper[4771]: E1011 10:56:37.525177 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache]"
Oct 11 10:56:37.526015 master-1 kubenswrapper[4771]: E1011 10:56:37.525561 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache]"
Oct 11 10:56:37.535019 master-1 kubenswrapper[4771]: E1011 10:56:37.534913 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache]"
Oct 11 10:56:37.538393 master-1 kubenswrapper[4771]: E1011 10:56:37.538269 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\":
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache]" Oct 11 10:56:37.540713 master-1 kubenswrapper[4771]: E1011 10:56:37.539164 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache]" Oct 11 10:56:37.542538 master-1 kubenswrapper[4771]: E1011 10:56:37.542458 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-conmon-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache]" Oct 11 10:56:37.545320 master-1 kubenswrapper[4771]: E1011 10:56:37.545177 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-conmon-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice/crio-0c0dbb46c9c9dd56fc370dbf3bf4cefb3705db5d800f0bb6bbcaf05c4ff29fa5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-conmon-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-a58cd7c5986caf2dc23489be67139cc61da2a0165758a7758f40955bc3fa5183\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-fa2fab9dc9ec47645ed24513cf6556f25abd1fec54df11a83d3b7a83c2bf8f53.scope\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-conmon-7a81fdca6b5cfdc52d40a0c549f548b156cef8be2d800941e09638f91911a474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3028266_255a_43a3_8bdb_9695ad7cbb30.slice/crio-9147911025dd8a83203254747854ba1cbd5aacf26ee1c4c2cc04460690d1b34c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-48979e1f4a2c3ea1ab7c6b56d19ad482a69e0c44021b08cc343a04080547c276\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5921e565_c581_42f4_8da8_df72fae9a3c0.slice/crio-2046d1999ef071c637c21e7f0dabc3a11726315f639116dfe75b51029d76c1fe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38267a66_0ebd_44ab_bc7f_cd5703503b74.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478147ef_a0d7_4c37_952c_3fc3a23775db.slice/crio-cbc52f8e44c15d5b309b9a5f8920e77aecfb6aec423daa8801871f7c83313c80.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:56:37.547635 master-0 kubenswrapper[4790]: I1011 10:56:37.547542 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"0edb0512-334f-4bfd-b297-cce29a7c510b","Type":"ContainerStarted","Data":"bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866"} Oct 11 10:56:37.551827 master-0 kubenswrapper[4790]: I1011 10:56:37.551788 4790 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerStarted","Data":"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4"} Oct 11 10:56:37.553403 master-1 kubenswrapper[4771]: I1011 10:56:37.552556 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 10:56:37.553403 master-1 kubenswrapper[4771]: I1011 10:56:37.552619 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 10:56:37.561119 master-0 kubenswrapper[4790]: I1011 10:56:37.557189 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerStarted","Data":"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b"} Oct 11 10:56:37.582439 master-1 kubenswrapper[4771]: I1011 10:56:37.580583 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-1" Oct 11 10:56:37.582439 master-1 kubenswrapper[4771]: I1011 10:56:37.582039 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-1" Oct 11 10:56:37.583694 master-0 kubenswrapper[4790]: I1011 10:56:37.582250 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-2" podStartSLOduration=1.631917761 podStartE2EDuration="10.582228737s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.269293578 +0000 UTC m=+1064.823753870" lastFinishedPulling="2025-10-11 10:56:37.219604554 +0000 UTC m=+1073.774064846" observedRunningTime="2025-10-11 10:56:37.579841361 +0000 UTC m=+1074.134301663" watchObservedRunningTime="2025-10-11 10:56:37.582228737 +0000 UTC m=+1074.136689029" Oct 11 10:56:37.589867 master-0 kubenswrapper[4790]: I1011 10:56:37.589506 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-2" Oct 11 10:56:37.589867 master-0 kubenswrapper[4790]: I1011 10:56:37.589816 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-2" Oct 11 10:56:37.610487 master-2 kubenswrapper[4776]: I1011 10:56:37.610418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.611217 master-2 kubenswrapper[4776]: I1011 10:56:37.610523 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.611217 master-2 kubenswrapper[4776]: I1011 10:56:37.610609 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.613251 master-2 kubenswrapper[4776]: I1011 10:56:37.613156 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities" (OuterVolumeSpecName: "utilities") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:37.615426 master-2 kubenswrapper[4776]: I1011 10:56:37.615316 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz" (OuterVolumeSpecName: "kube-api-access-98qxz") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "kube-api-access-98qxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:37.621366 master-0 kubenswrapper[4790]: I1011 10:56:37.621127 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-2" Oct 11 10:56:37.622286 master-1 kubenswrapper[4771]: I1011 10:56:37.622241 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-1" Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624425 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"1cb5b9d336455c89439755003242ff1ad85dfa104d62ac4092e9fd018ff8e5cd"} Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624513 4776 scope.go:117] "RemoveContainer" containerID="efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624334 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:37.695165 master-2 kubenswrapper[4776]: I1011 10:56:37.692369 4776 scope.go:117] "RemoveContainer" containerID="fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf" Oct 11 10:56:37.695165 master-2 kubenswrapper[4776]: I1011 10:56:37.692947 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715366 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715411 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715427 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.723391 master-2 kubenswrapper[4776]: I1011 10:56:37.723340 4776 scope.go:117] "RemoveContainer" containerID="3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c" Oct 11 10:56:37.780822 master-1 kubenswrapper[4771]: I1011 10:56:37.780756 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:56:37.897129 master-2 
kubenswrapper[4776]: I1011 10:56:37.897093 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:56:37.969134 master-1 kubenswrapper[4771]: I1011 10:56:37.968988 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:56:37.969968 master-1 kubenswrapper[4771]: I1011 10:56:37.969938 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:56:37.970059 master-1 kubenswrapper[4771]: I1011 10:56:37.969978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:56:37.970059 master-1 kubenswrapper[4771]: I1011 10:56:37.969991 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:56:38.005011 master-2 kubenswrapper[4776]: I1011 10:56:38.004640 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:38.018598 master-2 kubenswrapper[4776]: I1011 10:56:38.018546 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:38.020716 master-2 kubenswrapper[4776]: I1011 10:56:38.019917 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " Oct 11 10:56:38.026906 master-2 kubenswrapper[4776]: I1011 10:56:38.026855 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p" (OuterVolumeSpecName: "kube-api-access-cm52p") pod "cbfedacb-2045-4297-be8f-3582dd2bcd7b" (UID: 
"cbfedacb-2045-4297-be8f-3582dd2bcd7b"). InnerVolumeSpecName "kube-api-access-cm52p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:38.093641 master-2 kubenswrapper[4776]: I1011 10:56:38.093555 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5baa2228-1c52-469a-abb5-483e30443701" path="/var/lib/kubelet/pods/5baa2228-1c52-469a-abb5-483e30443701/volumes" Oct 11 10:56:38.122208 master-2 kubenswrapper[4776]: I1011 10:56:38.122158 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:38.148918 master-2 kubenswrapper[4776]: I1011 10:56:38.148862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:38.241421 master-0 kubenswrapper[4790]: I1011 10:56:38.241163 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:56:38.241718 master-0 kubenswrapper[4790]: I1011 10:56:38.241505 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerName="dnsmasq-dns" containerID="cri-o://585d3075d62338a2a9a42f35ce1fe0086b7e13eaad601eec48eba4bae373241a" gracePeriod=10 Oct 11 10:56:38.270383 master-1 kubenswrapper[4771]: I1011 10:56:38.266870 4771 generic.go:334] "Generic (PLEG): container finished" podID="1fe7833d-9251-4545-ba68-f58c146188f1" containerID="3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d" exitCode=137 Oct 11 10:56:38.270383 master-1 kubenswrapper[4771]: I1011 10:56:38.267193 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" 
event={"ID":"1fe7833d-9251-4545-ba68-f58c146188f1","Type":"ContainerDied","Data":"3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d"}
Oct 11 10:56:38.287380 master-1 kubenswrapper[4771]: I1011 10:56:38.286005 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"]
Oct 11 10:56:38.293378 master-1 kubenswrapper[4771]: I1011 10:56:38.288636 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.311372 master-1 kubenswrapper[4771]: I1011 10:56:38.307958 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"]
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352678 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352777 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7n6h\" (UniqueName: \"kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.365182 master-1 kubenswrapper[4771]: I1011 10:56:38.352852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.380374 master-1 kubenswrapper[4771]: I1011 10:56:38.375931 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-1"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458212 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458296 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458334 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7n6h\" (UniqueName: \"kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.458573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.459418 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.460380 master-1 kubenswrapper[4771]: I1011 10:56:38.459982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.464368 master-1 kubenswrapper[4771]: I1011 10:56:38.460915 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.464368 master-1 kubenswrapper[4771]: I1011 10:56:38.461882 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.464368 master-1 kubenswrapper[4771]: I1011 10:56:38.461995 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.503376 master-1 kubenswrapper[4771]: I1011 10:56:38.503218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7n6h\" (UniqueName: \"kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h\") pod \"dnsmasq-dns-79cbf74f6f-j7kt4\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") " pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.566279 master-0 kubenswrapper[4790]: I1011 10:56:38.566186 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerStarted","Data":"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83"}
Oct 11 10:56:38.572090 master-0 kubenswrapper[4790]: I1011 10:56:38.572042 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerStarted","Data":"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168"}
Oct 11 10:56:38.572434 master-0 kubenswrapper[4790]: I1011 10:56:38.572188 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-log" containerID="cri-o://963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b" gracePeriod=30
Oct 11 10:56:38.572434 master-0 kubenswrapper[4790]: I1011 10:56:38.572427 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-metadata" containerID="cri-o://edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168" gracePeriod=30
Oct 11 10:56:38.576967 master-0 kubenswrapper[4790]: I1011 10:56:38.576644 4790 generic.go:334] "Generic (PLEG): container finished" podID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerID="585d3075d62338a2a9a42f35ce1fe0086b7e13eaad601eec48eba4bae373241a" exitCode=0
Oct 11 10:56:38.576967 master-0 kubenswrapper[4790]: I1011 10:56:38.576741 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" event={"ID":"6ff24705-c685-47d9-ad1b-9ec04c541bf7","Type":"ContainerDied","Data":"585d3075d62338a2a9a42f35ce1fe0086b7e13eaad601eec48eba4bae373241a"}
Oct 11 10:56:38.616388 master-1 kubenswrapper[4771]: I1011 10:56:38.612623 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:56:38.629662 master-0 kubenswrapper[4790]: I1011 10:56:38.629526 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-2" podStartSLOduration=2.920112596 podStartE2EDuration="11.629501693s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.520188941 +0000 UTC m=+1065.074649233" lastFinishedPulling="2025-10-11 10:56:37.229578038 +0000 UTC m=+1073.784038330" observedRunningTime="2025-10-11 10:56:38.621564085 +0000 UTC m=+1075.176024377" watchObservedRunningTime="2025-10-11 10:56:38.629501693 +0000 UTC m=+1075.183961985"
Oct 11 10:56:38.632166 master-0 kubenswrapper[4790]: I1011 10:56:38.632132 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1" podStartSLOduration=2.630115069 podStartE2EDuration="11.632127065s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.199329776 +0000 UTC m=+1064.753790068" lastFinishedPulling="2025-10-11 10:56:37.201341772 +0000 UTC m=+1073.755802064" observedRunningTime="2025-10-11 10:56:38.595615032 +0000 UTC m=+1075.150075344" watchObservedRunningTime="2025-10-11 10:56:38.632127065 +0000 UTC m=+1075.186587357"
Oct 11 10:56:38.636377 master-1 kubenswrapper[4771]: I1011 10:56:38.634555 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.129.0.157:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:56:38.636377 master-1 kubenswrapper[4771]: I1011 10:56:38.634563 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.129.0.157:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:56:38.641016 master-0 kubenswrapper[4790]: I1011 10:56:38.640945 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-2"
Oct 11 10:56:38.648400 master-2 kubenswrapper[4776]: I1011 10:56:38.648324 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10"}
Oct 11 10:56:38.648916 master-2 kubenswrapper[4776]: I1011 10:56:38.648408 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6"}
Oct 11 10:56:38.653019 master-2 kubenswrapper[4776]: I1011 10:56:38.652974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerStarted","Data":"c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554"}
Oct 11 10:56:38.654937 master-2 kubenswrapper[4776]: I1011 10:56:38.654907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"}
Oct 11 10:56:38.654937 master-2 kubenswrapper[4776]: I1011 10:56:38.654944 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"}
Oct 11 10:56:38.658269 master-2 kubenswrapper[4776]: I1011 10:56:38.658230 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerDied","Data":"b03215961b9c5150bde0b4e7e2b48a359893ef2938e93f7fad3388b4aeef63a0"}
Oct 11 10:56:38.658427 master-2 kubenswrapper[4776]: I1011 10:56:38.658244 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.658427 master-2 kubenswrapper[4776]: I1011 10:56:38.658278 4776 scope.go:117] "RemoveContainer" containerID="6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b"
Oct 11 10:56:38.658832 master-2 kubenswrapper[4776]: I1011 10:56:38.658808 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 10:56:38.658970 master-2 kubenswrapper[4776]: I1011 10:56:38.658958 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 10:56:38.713024 master-2 kubenswrapper[4776]: I1011 10:56:38.712304 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.400639707 podStartE2EDuration="11.712282597s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.173377047 +0000 UTC m=+1822.957803756" lastFinishedPulling="2025-10-11 10:56:37.485019937 +0000 UTC m=+1832.269446646" observedRunningTime="2025-10-11 10:56:38.673486006 +0000 UTC m=+1833.457912715" watchObservedRunningTime="2025-10-11 10:56:38.712282597 +0000 UTC m=+1833.496709306"
Oct 11 10:56:38.718714 master-2 kubenswrapper[4776]: I1011 10:56:38.718554 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 11 10:56:38.738333 master-2 kubenswrapper[4776]: I1011 10:56:38.733537 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 11 10:56:38.746849 master-2 kubenswrapper[4776]: I1011 10:56:38.746788 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747381 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-content"
Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: I1011 10:56:38.747404 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-content"
Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747420 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server"
Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: I1011 10:56:38.747427 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server"
Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747443 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-utilities"
Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747450 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-utilities"
Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: E1011 10:56:38.747468 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics"
Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747474 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics"
Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747625 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server"
Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747644 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics"
Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.748384 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.835271188 podStartE2EDuration="11.748363054s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.604626557 +0000 UTC m=+1823.389053266" lastFinishedPulling="2025-10-11 10:56:37.517718423 +0000 UTC m=+1832.302145132" observedRunningTime="2025-10-11 10:56:38.742371802 +0000 UTC m=+1833.526798511" watchObservedRunningTime="2025-10-11 10:56:38.748363054 +0000 UTC m=+1833.532789763"
Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.748467 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.751343 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.751558 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Oct 11 10:56:38.787241 master-2 kubenswrapper[4776]: I1011 10:56:38.766799 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 11 10:56:38.787241 master-2 kubenswrapper[4776]: I1011 10:56:38.770506 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.856007601 podStartE2EDuration="11.770493244s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.603830806 +0000 UTC m=+1823.388257515" lastFinishedPulling="2025-10-11 10:56:37.518316449 +0000 UTC m=+1832.302743158" observedRunningTime="2025-10-11 10:56:38.767148363 +0000 UTC m=+1833.551575072" watchObservedRunningTime="2025-10-11 10:56:38.770493244 +0000 UTC m=+1833.554919953"
Oct 11 10:56:38.791770 master-2 kubenswrapper[4776]: I1011 10:56:38.788751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:38.806452 master-2 kubenswrapper[4776]: I1011 10:56:38.806400 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0"
Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844063 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844313 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844401 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.887512 master-0 kubenswrapper[4790]: I1011 10:56:38.887432 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-9xg22"
Oct 11 10:56:38.966271 master-2 kubenswrapper[4776]: I1011 10:56:38.966200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.966271 master-2 kubenswrapper[4776]: I1011 10:56:38.966257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.966547 master-2 kubenswrapper[4776]: I1011 10:56:38.966285 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.966547 master-2 kubenswrapper[4776]: I1011 10:56:38.966326 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.969864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.970209 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.970230 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:38.995628 master-2 kubenswrapper[4776]: I1011 10:56:38.992903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0"
Oct 11 10:56:39.000859 master-0 kubenswrapper[4790]: I1011 10:56:39.000760 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.002941 master-0 kubenswrapper[4790]: I1011 10:56:39.002878 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9f4\" (UniqueName: \"kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.003024 master-0 kubenswrapper[4790]: I1011 10:56:39.002982 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.003201 master-0 kubenswrapper[4790]: I1011 10:56:39.003174 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.003260 master-0 kubenswrapper[4790]: I1011 10:56:39.003244 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.003739 master-0 kubenswrapper[4790]: I1011 10:56:39.003680 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb\") pod \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\" (UID: \"6ff24705-c685-47d9-ad1b-9ec04c541bf7\") "
Oct 11 10:56:39.010388 master-1 kubenswrapper[4771]: I1011 10:56:39.009692 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.129.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:56:39.015603 master-0 kubenswrapper[4790]: I1011 10:56:39.015301 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4" (OuterVolumeSpecName: "kube-api-access-hb9f4") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "kube-api-access-hb9f4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:39.051138 master-1 kubenswrapper[4771]: I1011 10:56:39.050561 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.129.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 11 10:56:39.064420 master-0 kubenswrapper[4790]: I1011 10:56:39.064337 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:39.065384 master-0 kubenswrapper[4790]: I1011 10:56:39.065315 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:39.065533 master-0 kubenswrapper[4790]: I1011 10:56:39.065457 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:39.075466 master-2 kubenswrapper[4776]: I1011 10:56:39.075398 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 11 10:56:39.077045 master-0 kubenswrapper[4790]: I1011 10:56:39.076896 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config" (OuterVolumeSpecName: "config") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:39.092886 master-0 kubenswrapper[4790]: I1011 10:56:39.092800 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ff24705-c685-47d9-ad1b-9ec04c541bf7" (UID: "6ff24705-c685-47d9-ad1b-9ec04c541bf7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:39.108058 master-0 kubenswrapper[4790]: I1011 10:56:39.107972 4790 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.108058 master-0 kubenswrapper[4790]: I1011 10:56:39.108045 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.108058 master-0 kubenswrapper[4790]: I1011 10:56:39.108061 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.108058 master-0 kubenswrapper[4790]: I1011 10:56:39.108071 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.108354 master-0 kubenswrapper[4790]: I1011 10:56:39.108081 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9f4\" (UniqueName: \"kubernetes.io/projected/6ff24705-c685-47d9-ad1b-9ec04c541bf7-kube-api-access-hb9f4\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.108354 master-0 kubenswrapper[4790]: I1011 10:56:39.108093 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ff24705-c685-47d9-ad1b-9ec04c541bf7-dns-svc\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.282455 master-1 kubenswrapper[4771]: I1011 10:56:39.281707 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerID="6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111" exitCode=0
Oct 11 10:56:39.282455 master-1 kubenswrapper[4771]: I1011 10:56:39.281848 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerDied","Data":"6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111"}
Oct 11 10:56:39.359835 master-0 kubenswrapper[4790]: I1011 10:56:39.358508 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2"
Oct 11 10:56:39.514569 master-0 kubenswrapper[4790]: I1011 10:56:39.514500 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs\") pod \"1479006f-cac6-481e-86de-1ec1bed55c2d\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") "
Oct 11 10:56:39.514807 master-0 kubenswrapper[4790]: I1011 10:56:39.514749 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cktcw\" (UniqueName: \"kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw\") pod \"1479006f-cac6-481e-86de-1ec1bed55c2d\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") "
Oct 11 10:56:39.514807 master-0 kubenswrapper[4790]: I1011 10:56:39.514787 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data\") pod \"1479006f-cac6-481e-86de-1ec1bed55c2d\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") "
Oct 11 10:56:39.514868 master-0 kubenswrapper[4790]: I1011 10:56:39.514839 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle\") pod \"1479006f-cac6-481e-86de-1ec1bed55c2d\" (UID: \"1479006f-cac6-481e-86de-1ec1bed55c2d\") "
Oct 11 10:56:39.515170 master-0 kubenswrapper[4790]: I1011 10:56:39.515102 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs" (OuterVolumeSpecName: "logs") pod "1479006f-cac6-481e-86de-1ec1bed55c2d" (UID: "1479006f-cac6-481e-86de-1ec1bed55c2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:39.518017 master-0 kubenswrapper[4790]: I1011 10:56:39.517946 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw" (OuterVolumeSpecName: "kube-api-access-cktcw") pod "1479006f-cac6-481e-86de-1ec1bed55c2d" (UID: "1479006f-cac6-481e-86de-1ec1bed55c2d"). InnerVolumeSpecName "kube-api-access-cktcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:39.538284 master-0 kubenswrapper[4790]: I1011 10:56:39.538196 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data" (OuterVolumeSpecName: "config-data") pod "1479006f-cac6-481e-86de-1ec1bed55c2d" (UID: "1479006f-cac6-481e-86de-1ec1bed55c2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:39.554837 master-0 kubenswrapper[4790]: I1011 10:56:39.554755 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1479006f-cac6-481e-86de-1ec1bed55c2d" (UID: "1479006f-cac6-481e-86de-1ec1bed55c2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:39.563916 master-2 kubenswrapper[4776]: I1011 10:56:39.563594 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 11 10:56:39.588548 master-0 kubenswrapper[4790]: I1011 10:56:39.588450 4790 generic.go:334] "Generic (PLEG): container finished" podID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerID="edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168" exitCode=0
Oct 11 10:56:39.588548 master-0 kubenswrapper[4790]: I1011 10:56:39.588521 4790 generic.go:334] "Generic (PLEG): container finished" podID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerID="963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b" exitCode=143
Oct 11 10:56:39.589188 master-0 kubenswrapper[4790]: I1011 10:56:39.588554 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerDied","Data":"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168"}
Oct 11 10:56:39.589188 master-0 kubenswrapper[4790]: I1011 10:56:39.588685 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerDied","Data":"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b"}
Oct 11 10:56:39.589188 master-0 kubenswrapper[4790]: I1011 10:56:39.588701 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"1479006f-cac6-481e-86de-1ec1bed55c2d","Type":"ContainerDied","Data":"c88496f1c62cf4144371d31cb98a180ae9bb8ab68b19a997dabce86192b784b4"}
Oct 11 10:56:39.589188 master-0 kubenswrapper[4790]: I1011 10:56:39.588748 4790 scope.go:117] "RemoveContainer" containerID="edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168"
Oct 11 10:56:39.589188 master-0 kubenswrapper[4790]: I1011 10:56:39.588975 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2"
Oct 11 10:56:39.594560 master-0 kubenswrapper[4790]: I1011 10:56:39.594481 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-9xg22" event={"ID":"6ff24705-c685-47d9-ad1b-9ec04c541bf7","Type":"ContainerDied","Data":"2e505acf6ba0dd723d6c70412053c327d464e773f7d7a12371848c3428f7bbf6"}
Oct 11 10:56:39.594642 master-0 kubenswrapper[4790]: I1011 10:56:39.594592 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-9xg22"
Oct 11 10:56:39.610500 master-0 kubenswrapper[4790]: I1011 10:56:39.609899 4790 scope.go:117] "RemoveContainer" containerID="963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b"
Oct 11 10:56:39.617144 master-0 kubenswrapper[4790]: I1011 10:56:39.617078 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cktcw\" (UniqueName: \"kubernetes.io/projected/1479006f-cac6-481e-86de-1ec1bed55c2d-kube-api-access-cktcw\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.617144 master-0 kubenswrapper[4790]: I1011 10:56:39.617130 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.617144 master-0 kubenswrapper[4790]: I1011 10:56:39.617149 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1479006f-cac6-481e-86de-1ec1bed55c2d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.617347 master-0 kubenswrapper[4790]: I1011 10:56:39.617162 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1479006f-cac6-481e-86de-1ec1bed55c2d-logs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:39.644092 master-0 kubenswrapper[4790]: I1011 10:56:39.644024
4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:56:39.645776 master-0 kubenswrapper[4790]: I1011 10:56:39.645696 4790 scope.go:117] "RemoveContainer" containerID="edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168" Oct 11 10:56:39.648212 master-0 kubenswrapper[4790]: E1011 10:56:39.648153 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168\": container with ID starting with edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168 not found: ID does not exist" containerID="edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168" Oct 11 10:56:39.648297 master-0 kubenswrapper[4790]: I1011 10:56:39.648215 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168"} err="failed to get container status \"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168\": rpc error: code = NotFound desc = could not find container \"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168\": container with ID starting with edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168 not found: ID does not exist" Oct 11 10:56:39.648297 master-0 kubenswrapper[4790]: I1011 10:56:39.648248 4790 scope.go:117] "RemoveContainer" containerID="963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b" Oct 11 10:56:39.648772 master-0 kubenswrapper[4790]: E1011 10:56:39.648738 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b\": container with ID starting with 963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b not found: ID does not exist" 
containerID="963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b" Oct 11 10:56:39.648772 master-0 kubenswrapper[4790]: I1011 10:56:39.648760 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b"} err="failed to get container status \"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b\": rpc error: code = NotFound desc = could not find container \"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b\": container with ID starting with 963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b not found: ID does not exist" Oct 11 10:56:39.648772 master-0 kubenswrapper[4790]: I1011 10:56:39.648774 4790 scope.go:117] "RemoveContainer" containerID="edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168" Oct 11 10:56:39.649108 master-0 kubenswrapper[4790]: I1011 10:56:39.649072 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168"} err="failed to get container status \"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168\": rpc error: code = NotFound desc = could not find container \"edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168\": container with ID starting with edbe8dca445656ff4e9df27c41b52fcad085e14ad25dc7ca18daaf1416a70168 not found: ID does not exist" Oct 11 10:56:39.649108 master-0 kubenswrapper[4790]: I1011 10:56:39.649096 4790 scope.go:117] "RemoveContainer" containerID="963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b" Oct 11 10:56:39.649378 master-0 kubenswrapper[4790]: I1011 10:56:39.649345 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b"} err="failed to get container status 
\"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b\": rpc error: code = NotFound desc = could not find container \"963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b\": container with ID starting with 963ab64307dc468bd5caa5c25c49620c1f8348df5ca7e4fa95e7a4a10d90a66b not found: ID does not exist" Oct 11 10:56:39.649378 master-0 kubenswrapper[4790]: I1011 10:56:39.649366 4790 scope.go:117] "RemoveContainer" containerID="585d3075d62338a2a9a42f35ce1fe0086b7e13eaad601eec48eba4bae373241a" Oct 11 10:56:39.649470 master-0 kubenswrapper[4790]: I1011 10:56:39.649418 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-9xg22"] Oct 11 10:56:39.671602 master-0 kubenswrapper[4790]: I1011 10:56:39.671540 4790 scope.go:117] "RemoveContainer" containerID="7210d87c28a292af798e5994b8f7c1185cbe0c9dd8ab3744872cfdcf6e01c602" Oct 11 10:56:39.674238 master-0 kubenswrapper[4790]: I1011 10:56:39.674192 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:39.678757 master-2 kubenswrapper[4776]: I1011 10:56:39.678688 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e540cea-23f3-44cc-8c37-178e530eb1f1","Type":"ContainerStarted","Data":"4968e57c5e03e403068a64e2f89f207886b2485d5f203a20fa06002df46a1d63"} Oct 11 10:56:39.683124 master-0 kubenswrapper[4790]: I1011 10:56:39.683055 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:39.713413 master-0 kubenswrapper[4790]: I1011 10:56:39.713087 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:39.713849 master-0 kubenswrapper[4790]: E1011 10:56:39.713807 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerName="init" Oct 11 10:56:39.713849 master-0 kubenswrapper[4790]: I1011 10:56:39.713838 4790 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerName="init" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: E1011 10:56:39.713870 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerName="dnsmasq-dns" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: I1011 10:56:39.713879 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" containerName="dnsmasq-dns" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: E1011 10:56:39.713892 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-log" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: I1011 10:56:39.713900 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-log" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: E1011 10:56:39.713929 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-metadata" Oct 11 10:56:39.713980 master-0 kubenswrapper[4790]: I1011 10:56:39.713935 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-metadata" Oct 11 10:56:39.714224 master-0 kubenswrapper[4790]: I1011 10:56:39.714115 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-log" Oct 11 10:56:39.714224 master-0 kubenswrapper[4790]: I1011 10:56:39.714154 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" containerName="nova-metadata-metadata" Oct 11 10:56:39.714224 master-0 kubenswrapper[4790]: I1011 10:56:39.714167 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" 
containerName="dnsmasq-dns" Oct 11 10:56:39.717499 master-0 kubenswrapper[4790]: I1011 10:56:39.717440 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:56:39.721427 master-0 kubenswrapper[4790]: I1011 10:56:39.721366 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 10:56:39.721682 master-0 kubenswrapper[4790]: I1011 10:56:39.721641 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:56:39.729477 master-0 kubenswrapper[4790]: I1011 10:56:39.729389 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:39.820474 master-0 kubenswrapper[4790]: I1011 10:56:39.820396 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.820748 master-0 kubenswrapper[4790]: I1011 10:56:39.820502 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ds6m\" (UniqueName: \"kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.820748 master-0 kubenswrapper[4790]: I1011 10:56:39.820539 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.820748 master-0 kubenswrapper[4790]: I1011 10:56:39.820570 
4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.820748 master-0 kubenswrapper[4790]: I1011 10:56:39.820612 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.922663 master-0 kubenswrapper[4790]: I1011 10:56:39.922518 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.922663 master-0 kubenswrapper[4790]: I1011 10:56:39.922605 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ds6m\" (UniqueName: \"kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.922663 master-0 kubenswrapper[4790]: I1011 10:56:39.922633 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.922663 master-0 kubenswrapper[4790]: I1011 10:56:39.922660 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.922991 master-0 kubenswrapper[4790]: I1011 10:56:39.922682 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.924442 master-0 kubenswrapper[4790]: I1011 10:56:39.924398 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.926952 master-0 kubenswrapper[4790]: I1011 10:56:39.926915 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.928186 master-0 kubenswrapper[4790]: I1011 10:56:39.928135 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.928388 master-0 kubenswrapper[4790]: I1011 10:56:39.928361 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data\") pod \"nova-metadata-2\" (UID: 
\"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:39.946661 master-0 kubenswrapper[4790]: I1011 10:56:39.946593 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ds6m\" (UniqueName: \"kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m\") pod \"nova-metadata-2\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") " pod="openstack/nova-metadata-2" Oct 11 10:56:40.040686 master-0 kubenswrapper[4790]: I1011 10:56:40.040607 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:56:40.068346 master-2 kubenswrapper[4776]: I1011 10:56:40.068291 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" path="/var/lib/kubelet/pods/cbfedacb-2045-4297-be8f-3582dd2bcd7b/volumes" Oct 11 10:56:40.304775 master-0 kubenswrapper[4790]: I1011 10:56:40.304661 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1479006f-cac6-481e-86de-1ec1bed55c2d" path="/var/lib/kubelet/pods/1479006f-cac6-481e-86de-1ec1bed55c2d/volumes" Oct 11 10:56:40.305418 master-0 kubenswrapper[4790]: I1011 10:56:40.305399 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff24705-c685-47d9-ad1b-9ec04c541bf7" path="/var/lib/kubelet/pods/6ff24705-c685-47d9-ad1b-9ec04c541bf7/volumes" Oct 11 10:56:40.493368 master-0 kubenswrapper[4790]: I1011 10:56:40.493302 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:40.617931 master-0 kubenswrapper[4790]: I1011 10:56:40.617834 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerStarted","Data":"b30caa9805cc96830370be2ccb0efbda932cb9d93ac0013c9544b523e620e980"} Oct 11 10:56:40.689907 master-2 kubenswrapper[4776]: I1011 10:56:40.689861 4776 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e540cea-23f3-44cc-8c37-178e530eb1f1","Type":"ContainerStarted","Data":"ac83364aaed2cb47eea651c765afd60741b473edf1ad8a1ac60b6ff303197735"} Oct 11 10:56:40.768489 master-2 kubenswrapper[4776]: I1011 10:56:40.768416 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.406022711 podStartE2EDuration="2.768397806s" podCreationTimestamp="2025-10-11 10:56:38 +0000 UTC" firstStartedPulling="2025-10-11 10:56:39.567005917 +0000 UTC m=+1834.351432626" lastFinishedPulling="2025-10-11 10:56:39.929381012 +0000 UTC m=+1834.713807721" observedRunningTime="2025-10-11 10:56:40.762094815 +0000 UTC m=+1835.546521534" watchObservedRunningTime="2025-10-11 10:56:40.768397806 +0000 UTC m=+1835.552824505" Oct 11 10:56:41.302278 master-1 kubenswrapper[4771]: I1011 10:56:41.302198 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" containerID="af56a7e4623de207ef8289e7bba0d65eef5da9d57f459e288f321109c3a8e4f3" exitCode=0 Oct 11 10:56:41.302278 master-1 kubenswrapper[4771]: I1011 10:56:41.302244 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tgg64" event={"ID":"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad","Type":"ContainerDied","Data":"af56a7e4623de207ef8289e7bba0d65eef5da9d57f459e288f321109c3a8e4f3"} Oct 11 10:56:41.374117 master-1 kubenswrapper[4771]: I1011 10:56:41.374031 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" podUID="1fe7833d-9251-4545-ba68-f58c146188f1" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.129.0.137:8000/healthcheck\": dial tcp 10.129.0.137:8000: connect: connection refused" Oct 11 10:56:41.633511 master-0 kubenswrapper[4790]: I1011 10:56:41.633408 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" 
event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerStarted","Data":"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"} Oct 11 10:56:41.633511 master-0 kubenswrapper[4790]: I1011 10:56:41.633488 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerStarted","Data":"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"} Oct 11 10:56:41.691066 master-0 kubenswrapper[4790]: I1011 10:56:41.690937 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-2" podStartSLOduration=2.690908991 podStartE2EDuration="2.690908991s" podCreationTimestamp="2025-10-11 10:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:41.683194949 +0000 UTC m=+1078.237655271" watchObservedRunningTime="2025-10-11 10:56:41.690908991 +0000 UTC m=+1078.245369293" Oct 11 10:56:41.696808 master-2 kubenswrapper[4776]: I1011 10:56:41.696723 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 11 10:56:41.787338 master-1 kubenswrapper[4771]: I1011 10:56:41.787245 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:41.833428 master-1 kubenswrapper[4771]: I1011 10:56:41.831959 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:41.884015 master-1 kubenswrapper[4771]: I1011 10:56:41.883537 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mgq2\" (UniqueName: \"kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2\") pod \"3de492fb-5249-49e2-a327-756234aa92bd\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " Oct 11 10:56:41.884015 master-1 kubenswrapper[4771]: I1011 10:56:41.883740 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data\") pod \"3de492fb-5249-49e2-a327-756234aa92bd\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " Oct 11 10:56:41.884015 master-1 kubenswrapper[4771]: I1011 10:56:41.883831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle\") pod \"3de492fb-5249-49e2-a327-756234aa92bd\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " Oct 11 10:56:41.884015 master-1 kubenswrapper[4771]: I1011 10:56:41.883895 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts\") pod \"3de492fb-5249-49e2-a327-756234aa92bd\" (UID: \"3de492fb-5249-49e2-a327-756234aa92bd\") " Oct 11 10:56:41.888739 master-1 kubenswrapper[4771]: I1011 10:56:41.888656 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts" (OuterVolumeSpecName: "scripts") pod "3de492fb-5249-49e2-a327-756234aa92bd" (UID: "3de492fb-5249-49e2-a327-756234aa92bd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:41.890460 master-1 kubenswrapper[4771]: I1011 10:56:41.889440 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2" (OuterVolumeSpecName: "kube-api-access-6mgq2") pod "3de492fb-5249-49e2-a327-756234aa92bd" (UID: "3de492fb-5249-49e2-a327-756234aa92bd"). InnerVolumeSpecName "kube-api-access-6mgq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:41.920683 master-1 kubenswrapper[4771]: I1011 10:56:41.920626 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data" (OuterVolumeSpecName: "config-data") pod "3de492fb-5249-49e2-a327-756234aa92bd" (UID: "3de492fb-5249-49e2-a327-756234aa92bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:41.936736 master-1 kubenswrapper[4771]: I1011 10:56:41.936674 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de492fb-5249-49e2-a327-756234aa92bd" (UID: "3de492fb-5249-49e2-a327-756234aa92bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:41.988080 master-1 kubenswrapper[4771]: I1011 10:56:41.987755 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clg6t\" (UniqueName: \"kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t\") pod \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " Oct 11 10:56:41.988080 master-1 kubenswrapper[4771]: I1011 10:56:41.987922 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts\") pod \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " Oct 11 10:56:41.988080 master-1 kubenswrapper[4771]: I1011 10:56:41.987942 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data\") pod \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " Oct 11 10:56:41.988080 master-1 kubenswrapper[4771]: I1011 10:56:41.988028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle\") pod \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\" (UID: \"f85d5cfa-8073-4bbf-9eff-78fde719dadf\") " Oct 11 10:56:41.989185 master-1 kubenswrapper[4771]: I1011 10:56:41.988444 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:41.989185 master-1 kubenswrapper[4771]: I1011 10:56:41.988461 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mgq2\" (UniqueName: 
\"kubernetes.io/projected/3de492fb-5249-49e2-a327-756234aa92bd-kube-api-access-6mgq2\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:41.989185 master-1 kubenswrapper[4771]: I1011 10:56:41.988471 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:41.989185 master-1 kubenswrapper[4771]: I1011 10:56:41.988479 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de492fb-5249-49e2-a327-756234aa92bd-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:41.992843 master-1 kubenswrapper[4771]: I1011 10:56:41.992785 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts" (OuterVolumeSpecName: "scripts") pod "f85d5cfa-8073-4bbf-9eff-78fde719dadf" (UID: "f85d5cfa-8073-4bbf-9eff-78fde719dadf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:41.996234 master-1 kubenswrapper[4771]: I1011 10:56:41.996156 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t" (OuterVolumeSpecName: "kube-api-access-clg6t") pod "f85d5cfa-8073-4bbf-9eff-78fde719dadf" (UID: "f85d5cfa-8073-4bbf-9eff-78fde719dadf"). InnerVolumeSpecName "kube-api-access-clg6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:42.038245 master-1 kubenswrapper[4771]: I1011 10:56:42.038172 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85d5cfa-8073-4bbf-9eff-78fde719dadf" (UID: "f85d5cfa-8073-4bbf-9eff-78fde719dadf"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.040560 master-1 kubenswrapper[4771]: I1011 10:56:42.040493 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data" (OuterVolumeSpecName: "config-data") pod "f85d5cfa-8073-4bbf-9eff-78fde719dadf" (UID: "f85d5cfa-8073-4bbf-9eff-78fde719dadf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.066724 master-1 kubenswrapper[4771]: I1011 10:56:42.064847 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:56:42.090958 master-1 kubenswrapper[4771]: I1011 10:56:42.090906 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.091220 master-1 kubenswrapper[4771]: I1011 10:56:42.091207 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.091317 master-1 kubenswrapper[4771]: I1011 10:56:42.091302 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d5cfa-8073-4bbf-9eff-78fde719dadf-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.091474 master-1 kubenswrapper[4771]: I1011 10:56:42.091459 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clg6t\" (UniqueName: \"kubernetes.io/projected/f85d5cfa-8073-4bbf-9eff-78fde719dadf-kube-api-access-clg6t\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.172275 master-1 kubenswrapper[4771]: I1011 10:56:42.172203 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:42.193141 master-1 kubenswrapper[4771]: I1011 10:56:42.193062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle\") pod \"1fe7833d-9251-4545-ba68-f58c146188f1\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " Oct 11 10:56:42.194031 master-1 kubenswrapper[4771]: I1011 10:56:42.193263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzbk4\" (UniqueName: \"kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4\") pod \"1fe7833d-9251-4545-ba68-f58c146188f1\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " Oct 11 10:56:42.194031 master-1 kubenswrapper[4771]: I1011 10:56:42.193334 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom\") pod \"1fe7833d-9251-4545-ba68-f58c146188f1\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " Oct 11 10:56:42.194031 master-1 kubenswrapper[4771]: I1011 10:56:42.193445 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data\") pod \"1fe7833d-9251-4545-ba68-f58c146188f1\" (UID: \"1fe7833d-9251-4545-ba68-f58c146188f1\") " Oct 11 10:56:42.197467 master-1 kubenswrapper[4771]: I1011 10:56:42.197424 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1fe7833d-9251-4545-ba68-f58c146188f1" (UID: "1fe7833d-9251-4545-ba68-f58c146188f1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.197998 master-1 kubenswrapper[4771]: I1011 10:56:42.197923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4" (OuterVolumeSpecName: "kube-api-access-gzbk4") pod "1fe7833d-9251-4545-ba68-f58c146188f1" (UID: "1fe7833d-9251-4545-ba68-f58c146188f1"). InnerVolumeSpecName "kube-api-access-gzbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:42.219384 master-1 kubenswrapper[4771]: I1011 10:56:42.219313 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fe7833d-9251-4545-ba68-f58c146188f1" (UID: "1fe7833d-9251-4545-ba68-f58c146188f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.233412 master-1 kubenswrapper[4771]: I1011 10:56:42.233342 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data" (OuterVolumeSpecName: "config-data") pod "1fe7833d-9251-4545-ba68-f58c146188f1" (UID: "1fe7833d-9251-4545-ba68-f58c146188f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.274490 master-0 kubenswrapper[4790]: I1011 10:56:42.274437 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-748bbfcf89-9smpw" Oct 11 10:56:42.294781 master-1 kubenswrapper[4771]: I1011 10:56:42.294735 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dsl\" (UniqueName: \"kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.294930 master-1 kubenswrapper[4771]: I1011 10:56:42.294884 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.294979 master-1 kubenswrapper[4771]: I1011 10:56:42.294948 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.295058 master-1 kubenswrapper[4771]: I1011 10:56:42.295002 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.295102 master-1 kubenswrapper[4771]: I1011 10:56:42.295073 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd\") pod 
\"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.295102 master-1 kubenswrapper[4771]: I1011 10:56:42.295097 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.295190 master-1 kubenswrapper[4771]: I1011 10:56:42.295119 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd\") pod \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\" (UID: \"d0cc5394-b33f-41a9-bbe2-d772e75a8f58\") " Oct 11 10:56:42.295525 master-1 kubenswrapper[4771]: I1011 10:56:42.295489 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.295525 master-1 kubenswrapper[4771]: I1011 10:56:42.295507 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.295525 master-1 kubenswrapper[4771]: I1011 10:56:42.295517 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzbk4\" (UniqueName: \"kubernetes.io/projected/1fe7833d-9251-4545-ba68-f58c146188f1-kube-api-access-gzbk4\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.295525 master-1 kubenswrapper[4771]: I1011 10:56:42.295529 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fe7833d-9251-4545-ba68-f58c146188f1-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.295888 master-1 
kubenswrapper[4771]: I1011 10:56:42.295853 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:42.296137 master-1 kubenswrapper[4771]: I1011 10:56:42.296101 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:42.298532 master-1 kubenswrapper[4771]: I1011 10:56:42.298493 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl" (OuterVolumeSpecName: "kube-api-access-75dsl") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "kube-api-access-75dsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:42.298676 master-1 kubenswrapper[4771]: I1011 10:56:42.298649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts" (OuterVolumeSpecName: "scripts") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.314465 master-1 kubenswrapper[4771]: I1011 10:56:42.314348 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" event={"ID":"1fe7833d-9251-4545-ba68-f58c146188f1","Type":"ContainerDied","Data":"fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e"} Oct 11 10:56:42.314926 master-1 kubenswrapper[4771]: I1011 10:56:42.314477 4771 scope.go:117] "RemoveContainer" containerID="3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d" Oct 11 10:56:42.314926 master-1 kubenswrapper[4771]: I1011 10:56:42.314675 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b8cb664c5-5zrqf" Oct 11 10:56:42.318711 master-1 kubenswrapper[4771]: I1011 10:56:42.318660 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0cc5394-b33f-41a9-bbe2-d772e75a8f58","Type":"ContainerDied","Data":"c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0"} Oct 11 10:56:42.318816 master-1 kubenswrapper[4771]: I1011 10:56:42.318794 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:42.321259 master-1 kubenswrapper[4771]: I1011 10:56:42.321218 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"3a8c3af6-0b6a-486d-83e3-18bf00346dbc","Type":"ContainerStarted","Data":"618dbb2c6015f37d5b77c36226ff32674b1cdb27c9a2c24d18db48b89442be0b"} Oct 11 10:56:42.322221 master-1 kubenswrapper[4771]: I1011 10:56:42.322198 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:42.323429 master-1 kubenswrapper[4771]: I1011 10:56:42.323405 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tn8xz" event={"ID":"3de492fb-5249-49e2-a327-756234aa92bd","Type":"ContainerDied","Data":"4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344"} Oct 11 10:56:42.323429 master-1 kubenswrapper[4771]: I1011 10:56:42.323430 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344" Oct 11 10:56:42.323570 master-1 kubenswrapper[4771]: I1011 10:56:42.323472 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-tn8xz" Oct 11 10:56:42.338345 master-1 kubenswrapper[4771]: I1011 10:56:42.338232 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2kt7k" event={"ID":"f85d5cfa-8073-4bbf-9eff-78fde719dadf","Type":"ContainerDied","Data":"792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba"} Oct 11 10:56:42.338572 master-1 kubenswrapper[4771]: I1011 10:56:42.338412 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba" Oct 11 10:56:42.338572 master-1 kubenswrapper[4771]: I1011 10:56:42.338301 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2kt7k" Oct 11 10:56:42.339272 master-1 kubenswrapper[4771]: I1011 10:56:42.339215 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.339775 master-1 kubenswrapper[4771]: I1011 10:56:42.339721 4771 scope.go:117] "RemoveContainer" containerID="505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe" Oct 11 10:56:42.354884 master-1 kubenswrapper[4771]: I1011 10:56:42.354834 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 11 10:56:42.371502 master-1 kubenswrapper[4771]: I1011 10:56:42.371456 4771 scope.go:117] "RemoveContainer" containerID="dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68" Oct 11 10:56:42.379281 master-1 kubenswrapper[4771]: I1011 10:56:42.379222 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.397743 master-1 kubenswrapper[4771]: I1011 10:56:42.397698 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.397743 master-1 kubenswrapper[4771]: I1011 10:56:42.397743 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.397901 master-1 kubenswrapper[4771]: I1011 10:56:42.397753 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.397901 master-1 kubenswrapper[4771]: I1011 10:56:42.397763 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.397901 master-1 kubenswrapper[4771]: I1011 10:56:42.397772 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75dsl\" (UniqueName: \"kubernetes.io/projected/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-kube-api-access-75dsl\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.397901 master-1 kubenswrapper[4771]: I1011 10:56:42.397783 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.399745 master-1 kubenswrapper[4771]: I1011 10:56:42.399687 4771 scope.go:117] "RemoveContainer" containerID="6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111" Oct 11 10:56:42.402038 master-1 kubenswrapper[4771]: I1011 
10:56:42.401936 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data" (OuterVolumeSpecName: "config-data") pod "d0cc5394-b33f-41a9-bbe2-d772e75a8f58" (UID: "d0cc5394-b33f-41a9-bbe2-d772e75a8f58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.421622 master-1 kubenswrapper[4771]: I1011 10:56:42.421570 4771 scope.go:117] "RemoveContainer" containerID="2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536" Oct 11 10:56:42.461752 master-2 kubenswrapper[4776]: I1011 10:56:42.461683 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.128.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:42.500589 master-1 kubenswrapper[4771]: I1011 10:56:42.500524 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc5394-b33f-41a9-bbe2-d772e75a8f58-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:42.540348 master-2 kubenswrapper[4776]: I1011 10:56:42.540279 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:56:42.795553 master-2 kubenswrapper[4776]: I1011 10:56:42.795390 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:56:42.796263 master-2 kubenswrapper[4776]: I1011 10:56:42.796245 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:56:42.833519 master-1 kubenswrapper[4771]: I1011 10:56:42.833456 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:42.910099 master-1 kubenswrapper[4771]: I1011 10:56:42.908440 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data\") pod \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " Oct 11 10:56:42.910099 master-1 kubenswrapper[4771]: I1011 10:56:42.908552 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5jhb\" (UniqueName: \"kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb\") pod \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " Oct 11 10:56:42.910099 master-1 kubenswrapper[4771]: I1011 10:56:42.908658 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle\") pod \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " Oct 11 10:56:42.910099 master-1 kubenswrapper[4771]: I1011 10:56:42.908864 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts\") pod \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\" (UID: \"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad\") " Oct 11 10:56:42.914018 master-1 kubenswrapper[4771]: I1011 10:56:42.913564 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts" (OuterVolumeSpecName: "scripts") pod "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" (UID: "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.914504 master-1 kubenswrapper[4771]: I1011 10:56:42.914388 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"] Oct 11 10:56:42.918743 master-1 kubenswrapper[4771]: I1011 10:56:42.918705 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb" (OuterVolumeSpecName: "kube-api-access-n5jhb") pod "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" (UID: "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad"). InnerVolumeSpecName "kube-api-access-n5jhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:42.938420 master-1 kubenswrapper[4771]: I1011 10:56:42.938287 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data" (OuterVolumeSpecName: "config-data") pod "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" (UID: "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:42.944640 master-1 kubenswrapper[4771]: I1011 10:56:42.944558 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" (UID: "5b5e37e3-9afd-4ff3-b992-1e6c28a986ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:43.010993 master-1 kubenswrapper[4771]: I1011 10:56:43.010912 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:43.010993 master-1 kubenswrapper[4771]: I1011 10:56:43.010975 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:43.010993 master-1 kubenswrapper[4771]: I1011 10:56:43.010991 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5jhb\" (UniqueName: \"kubernetes.io/projected/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-kube-api-access-n5jhb\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:43.010993 master-1 kubenswrapper[4771]: I1011 10:56:43.011004 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:43.198868 master-1 kubenswrapper[4771]: I1011 10:56:43.198639 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.241282039 podStartE2EDuration="16.198611054s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:27.880322021 +0000 UTC m=+1819.854548462" lastFinishedPulling="2025-10-11 10:56:41.837651036 +0000 UTC m=+1833.811877477" observedRunningTime="2025-10-11 10:56:43.189388225 +0000 UTC m=+1835.163614736" watchObservedRunningTime="2025-10-11 10:56:43.198611054 +0000 UTC m=+1835.172837495" Oct 11 10:56:43.350573 master-1 kubenswrapper[4771]: I1011 10:56:43.350523 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerID="14cbbf6abeb88d28f08a7099ac711df9a488bc85a2f7bd445bc229705a05a25b" exitCode=0 Oct 11 10:56:43.351341 master-1 kubenswrapper[4771]: I1011 10:56:43.350740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" event={"ID":"a23d84be-f5ab-4261-9ba2-d94aaf104a59","Type":"ContainerDied","Data":"14cbbf6abeb88d28f08a7099ac711df9a488bc85a2f7bd445bc229705a05a25b"} Oct 11 10:56:43.351489 master-1 kubenswrapper[4771]: I1011 10:56:43.351471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" event={"ID":"a23d84be-f5ab-4261-9ba2-d94aaf104a59","Type":"ContainerStarted","Data":"b0d191f73463f5a71aeb190809caf8100724d2aeec1100c76a864a58130b5a3d"} Oct 11 10:56:43.355456 master-1 kubenswrapper[4771]: I1011 10:56:43.355430 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tgg64" event={"ID":"5b5e37e3-9afd-4ff3-b992-1e6c28a986ad","Type":"ContainerDied","Data":"8225c71dadd5dd5d7cdb7b603f12129b97565dbfb98b6f8553a5f73b645e62cc"} Oct 11 10:56:43.355526 master-1 kubenswrapper[4771]: I1011 10:56:43.355459 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8225c71dadd5dd5d7cdb7b603f12129b97565dbfb98b6f8553a5f73b645e62cc" Oct 11 10:56:43.358269 master-1 kubenswrapper[4771]: I1011 10:56:43.358202 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tgg64" Oct 11 10:56:43.892770 master-0 kubenswrapper[4790]: I1011 10:56:43.892669 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"] Oct 11 10:56:43.893427 master-0 kubenswrapper[4790]: I1011 10:56:43.893055 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-vk5xz" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-api" containerID="cri-o://477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe" gracePeriod=30 Oct 11 10:56:43.893427 master-0 kubenswrapper[4790]: I1011 10:56:43.893136 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-vk5xz" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-httpd" containerID="cri-o://2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e" gracePeriod=30 Oct 11 10:56:43.974573 master-1 kubenswrapper[4771]: I1011 10:56:43.974514 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:56:44.373477 master-1 kubenswrapper[4771]: I1011 10:56:44.373337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" event={"ID":"a23d84be-f5ab-4261-9ba2-d94aaf104a59","Type":"ContainerStarted","Data":"12c6ff03be76828491f921afc8c9ec6e58880687794d58647b68e34022915241"} Oct 11 10:56:44.374421 master-1 kubenswrapper[4771]: I1011 10:56:44.373584 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" Oct 11 10:56:44.551760 master-1 kubenswrapper[4771]: I1011 10:56:44.551672 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-b8cb664c5-5zrqf"] Oct 11 10:56:44.666985 master-0 kubenswrapper[4790]: I1011 10:56:44.666881 4790 generic.go:334] "Generic (PLEG): container finished" 
podID="7739fd2d-10b5-425d-acbf-f50630f07017" containerID="2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e" exitCode=0 Oct 11 10:56:44.667689 master-0 kubenswrapper[4790]: I1011 10:56:44.666979 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerDied","Data":"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"} Oct 11 10:56:45.040790 master-0 kubenswrapper[4790]: I1011 10:56:45.040720 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:56:45.040790 master-0 kubenswrapper[4790]: I1011 10:56:45.040792 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:56:45.086071 master-1 kubenswrapper[4771]: I1011 10:56:45.085722 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:45.297437 master-1 kubenswrapper[4771]: I1011 10:56:45.297224 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:45.308259 master-2 kubenswrapper[4776]: I1011 10:56:45.308044 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:45.309039 master-2 kubenswrapper[4776]: I1011 10:56:45.308340 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" containerID="cri-o://5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6" gracePeriod=30 Oct 11 10:56:45.309215 master-2 kubenswrapper[4776]: I1011 10:56:45.309067 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" containerID="cri-o://bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10" gracePeriod=30 Oct 11 
10:56:45.329491 master-0 kubenswrapper[4790]: I1011 10:56:45.329403 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-2"]
Oct 11 10:56:45.330995 master-0 kubenswrapper[4790]: I1011 10:56:45.329700 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-2" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerName="nova-scheduler-scheduler" containerID="cri-o://bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866" gracePeriod=30
Oct 11 10:56:45.357509 master-1 kubenswrapper[4771]: I1011 10:56:45.357423 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: E1011 10:56:45.357794 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe7833d-9251-4545-ba68-f58c146188f1" containerName="heat-cfnapi"
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: I1011 10:56:45.357813 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe7833d-9251-4545-ba68-f58c146188f1" containerName="heat-cfnapi"
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: E1011 10:56:45.357826 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d5cfa-8073-4bbf-9eff-78fde719dadf" containerName="nova-manage"
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: I1011 10:56:45.357835 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d5cfa-8073-4bbf-9eff-78fde719dadf" containerName="nova-manage"
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: E1011 10:56:45.357852 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" containerName="nova-cell1-conductor-db-sync"
Oct 11 10:56:45.357858 master-1 kubenswrapper[4771]: I1011 10:56:45.357860 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" containerName="nova-cell1-conductor-db-sync"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: E1011 10:56:45.357892 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-central-agent"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.357901 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-central-agent"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: E1011 10:56:45.357914 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="sg-core"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.357924 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="sg-core"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: E1011 10:56:45.357941 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-notification-agent"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.357948 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-notification-agent"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: E1011 10:56:45.357961 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de492fb-5249-49e2-a327-756234aa92bd" containerName="aodh-db-sync"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.357969 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de492fb-5249-49e2-a327-756234aa92bd" containerName="aodh-db-sync"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: E1011 10:56:45.357984 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="proxy-httpd"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.357991 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="proxy-httpd"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358151 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de492fb-5249-49e2-a327-756234aa92bd" containerName="aodh-db-sync"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358172 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="proxy-httpd"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358183 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" containerName="nova-cell1-conductor-db-sync"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358196 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d5cfa-8073-4bbf-9eff-78fde719dadf" containerName="nova-manage"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358207 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-central-agent"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358214 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="sg-core"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358225 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe7833d-9251-4545-ba68-f58c146188f1" containerName="heat-cfnapi"
Oct 11 10:56:45.358558 master-1 kubenswrapper[4771]: I1011 10:56:45.358239 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" containerName="ceilometer-notification-agent"
Oct 11 10:56:45.360729 master-1 kubenswrapper[4771]: I1011 10:56:45.360282 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:56:45.362903 master-1 kubenswrapper[4771]: W1011 10:56:45.362827 4771 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: secrets "ceilometer-config-data" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object
Oct 11 10:56:45.362903 master-1 kubenswrapper[4771]: E1011 10:56:45.362893 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-config-data\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 11 10:56:45.363199 master-1 kubenswrapper[4771]: W1011 10:56:45.362938 4771 reflector.go:561] object-"openstack"/"cert-ceilometer-internal-svc": failed to list *v1.Secret: secrets "cert-ceilometer-internal-svc" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object
Oct 11 10:56:45.363199 master-1 kubenswrapper[4771]: E1011 10:56:45.362996 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ceilometer-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ceilometer-internal-svc\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 11 10:56:45.363579 master-1 kubenswrapper[4771]: W1011 10:56:45.363393 4771 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: secrets "ceilometer-scripts" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object
Oct 11 10:56:45.363579 master-1 kubenswrapper[4771]: E1011 10:56:45.363450 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-scripts\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 11 10:56:45.382903 master-1 kubenswrapper[4771]: I1011 10:56:45.382488 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" podStartSLOduration=7.382462321 podStartE2EDuration="7.382462321s" podCreationTimestamp="2025-10-11 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:45.377167287 +0000 UTC m=+1837.351393778" watchObservedRunningTime="2025-10-11 10:56:45.382462321 +0000 UTC m=+1837.356688772"
Oct 11 10:56:45.403748 master-1 kubenswrapper[4771]: I1011 10:56:45.403616 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:56:45.429954 master-0 kubenswrapper[4790]: I1011 10:56:45.429879 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:45.481827 master-1 kubenswrapper[4771]: I1011 10:56:45.481716 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482123 master-1 kubenswrapper[4771]: I1011 10:56:45.481954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482123 master-1 kubenswrapper[4771]: I1011 10:56:45.482013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjb6\" (UniqueName: \"kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482123 master-1 kubenswrapper[4771]: I1011 10:56:45.482077 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482344 master-1 kubenswrapper[4771]: I1011 10:56:45.482127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482344 master-1 kubenswrapper[4771]: I1011 10:56:45.482292 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482515 master-1 kubenswrapper[4771]: I1011 10:56:45.482452 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.482587 master-1 kubenswrapper[4771]: I1011 10:56:45.482553 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585007 master-1 kubenswrapper[4771]: I1011 10:56:45.584888 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585392 master-1 kubenswrapper[4771]: I1011 10:56:45.585030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585392 master-1 kubenswrapper[4771]: I1011 10:56:45.585122 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585392 master-1 kubenswrapper[4771]: I1011 10:56:45.585282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585392 master-1 kubenswrapper[4771]: I1011 10:56:45.585343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjb6\" (UniqueName: \"kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585801 master-1 kubenswrapper[4771]: I1011 10:56:45.585470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.585934 master-1 kubenswrapper[4771]: I1011 10:56:45.585837 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.586267 master-1 kubenswrapper[4771]: I1011 10:56:45.586224 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.586443 master-1 kubenswrapper[4771]: I1011 10:56:45.586313 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.586641 master-1 kubenswrapper[4771]: I1011 10:56:45.586586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.589722 master-1 kubenswrapper[4771]: I1011 10:56:45.589655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.609608 master-1 kubenswrapper[4771]: I1011 10:56:45.609515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjb6\" (UniqueName: \"kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:45.672997 master-0 kubenswrapper[4790]: I1011 10:56:45.672857 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-log" containerID="cri-o://edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec" gracePeriod=30
Oct 11 10:56:45.673326 master-0 kubenswrapper[4790]: I1011 10:56:45.673005 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-metadata" containerID="cri-o://dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5" gracePeriod=30
Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751041 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerID="bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10" exitCode=0
Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751085 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerID="5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6" exitCode=143
Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751109 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10"}
Oct 11 10:56:45.751896 master-2 kubenswrapper[4776]: I1011 10:56:45.751149 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6"}
Oct 11 10:56:46.184243 master-1 kubenswrapper[4771]: I1011 10:56:46.184145 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 11 10:56:46.190707 master-1 kubenswrapper[4771]: I1011 10:56:46.190612 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:46.236754 master-2 kubenswrapper[4776]: I1011 10:56:46.236719 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 11 10:56:46.327943 master-0 kubenswrapper[4790]: I1011 10:56:46.327876 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2"
Oct 11 10:56:46.367245 master-1 kubenswrapper[4771]: I1011 10:56:46.367132 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 10:56:46.375820 master-1 kubenswrapper[4771]: I1011 10:56:46.375718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:46.387405 master-0 kubenswrapper[4790]: I1011 10:56:46.387321 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle\") pod \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") "
Oct 11 10:56:46.387743 master-0 kubenswrapper[4790]: I1011 10:56:46.387433 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs\") pod \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") "
Oct 11 10:56:46.387743 master-0 kubenswrapper[4790]: I1011 10:56:46.387537 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data\") pod \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") "
Oct 11 10:56:46.387743 master-0 kubenswrapper[4790]: I1011 10:56:46.387602 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs\") pod \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") "
Oct 11 10:56:46.387957 master-0 kubenswrapper[4790]: I1011 10:56:46.387852 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ds6m\" (UniqueName: \"kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m\") pod \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\" (UID: \"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb\") "
Oct 11 10:56:46.387957 master-0 kubenswrapper[4790]: I1011 10:56:46.387859 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs" (OuterVolumeSpecName: "logs") pod "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" (UID: "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:46.388850 master-0 kubenswrapper[4790]: I1011 10:56:46.388793 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-logs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:46.393011 master-0 kubenswrapper[4790]: I1011 10:56:46.392944 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m" (OuterVolumeSpecName: "kube-api-access-5ds6m") pod "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" (UID: "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb"). InnerVolumeSpecName "kube-api-access-5ds6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:46.405953 master-0 kubenswrapper[4790]: I1011 10:56:46.405869 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" (UID: "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:46.416591 master-2 kubenswrapper[4776]: I1011 10:56:46.416436 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") "
Oct 11 10:56:46.416591 master-2 kubenswrapper[4776]: I1011 10:56:46.416538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") "
Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416822 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs" (OuterVolumeSpecName: "logs") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416850 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") "
Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") "
Oct 11 10:56:46.417534 master-2 kubenswrapper[4776]: I1011 10:56:46.417498 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:46.420140 master-2 kubenswrapper[4776]: I1011 10:56:46.420089 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj" (OuterVolumeSpecName: "kube-api-access-np4rj") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "kube-api-access-np4rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:46.423210 master-0 kubenswrapper[4790]: I1011 10:56:46.423131 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data" (OuterVolumeSpecName: "config-data") pod "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" (UID: "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:46.437875 master-2 kubenswrapper[4776]: I1011 10:56:46.437812 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:46.438203 master-2 kubenswrapper[4776]: I1011 10:56:46.438136 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data" (OuterVolumeSpecName: "config-data") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:46.439363 master-0 kubenswrapper[4790]: I1011 10:56:46.439298 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" (UID: "aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:46.455155 master-1 kubenswrapper[4771]: I1011 10:56:46.454929 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe7833d-9251-4545-ba68-f58c146188f1" path="/var/lib/kubelet/pods/1fe7833d-9251-4545-ba68-f58c146188f1/volumes"
Oct 11 10:56:46.456471 master-1 kubenswrapper[4771]: I1011 10:56:46.456421 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cc5394-b33f-41a9-bbe2-d772e75a8f58" path="/var/lib/kubelet/pods/d0cc5394-b33f-41a9-bbe2-d772e75a8f58/volumes"
Oct 11 10:56:46.490931 master-0 kubenswrapper[4790]: I1011 10:56:46.490874 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ds6m\" (UniqueName: \"kubernetes.io/projected/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-kube-api-access-5ds6m\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:46.491452 master-0 kubenswrapper[4790]: I1011 10:56:46.491434 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:46.491563 master-0 kubenswrapper[4790]: I1011 10:56:46.491548 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:46.491657 master-0 kubenswrapper[4790]: I1011 10:56:46.491642 4790 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520167 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520229 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520251 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:56:46.533416 master-1 kubenswrapper[4771]: I1011 10:56:46.533323 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 10:56:46.542558 master-1 kubenswrapper[4771]: I1011 10:56:46.542486 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:46.548297 master-1 kubenswrapper[4771]: I1011 10:56:46.548211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data\") pod \"ceilometer-0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " pod="openstack/ceilometer-0"
Oct 11 10:56:46.589990 master-1 kubenswrapper[4771]: I1011 10:56:46.589915 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:56:46.685536 master-0 kubenswrapper[4790]: I1011 10:56:46.685470 4790 generic.go:334] "Generic (PLEG): container finished" podID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerID="dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5" exitCode=0
Oct 11 10:56:46.685536 master-0 kubenswrapper[4790]: I1011 10:56:46.685520 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2"
Oct 11 10:56:46.685536 master-0 kubenswrapper[4790]: I1011 10:56:46.685531 4790 generic.go:334] "Generic (PLEG): container finished" podID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerID="edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec" exitCode=143
Oct 11 10:56:46.685536 master-0 kubenswrapper[4790]: I1011 10:56:46.685568 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerDied","Data":"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"}
Oct 11 10:56:46.686187 master-0 kubenswrapper[4790]: I1011 10:56:46.685612 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerDied","Data":"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"}
Oct 11 10:56:46.686187 master-0 kubenswrapper[4790]: I1011 10:56:46.685632 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb","Type":"ContainerDied","Data":"b30caa9805cc96830370be2ccb0efbda932cb9d93ac0013c9544b523e620e980"}
Oct 11 10:56:46.686187 master-0 kubenswrapper[4790]: I1011 10:56:46.685697 4790 scope.go:117] "RemoveContainer" containerID="dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"
Oct 11 10:56:46.721643 master-0 kubenswrapper[4790]: I1011 10:56:46.721590 4790 scope.go:117] "RemoveContainer" containerID="edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"
Oct 11 10:56:46.745065 master-0 kubenswrapper[4790]: I1011 10:56:46.745031 4790 scope.go:117] "RemoveContainer" containerID="dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"
Oct 11 10:56:46.745933 master-0 kubenswrapper[4790]: E1011 10:56:46.745894 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5\": container with ID starting with dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5 not found: ID does not exist" containerID="dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"
Oct 11 10:56:46.746066 master-0 kubenswrapper[4790]: I1011 10:56:46.745949 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"} err="failed to get container status \"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5\": rpc error: code = NotFound desc = could not find container \"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5\": container with ID starting with dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5 not found: ID does not exist"
Oct 11 10:56:46.746066 master-0 kubenswrapper[4790]: I1011 10:56:46.745980 4790 scope.go:117] "RemoveContainer" containerID="edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"
Oct 11 10:56:46.746463 master-0 kubenswrapper[4790]: E1011 10:56:46.746425 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec\": container with ID starting with edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec not found: ID does not exist" containerID="edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"
Oct 11 10:56:46.746583 master-0 kubenswrapper[4790]: I1011 10:56:46.746554 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"} err="failed to get container status \"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec\": rpc error: code = NotFound desc = could not find container \"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec\": container with ID starting with edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec not found: ID does not exist"
Oct 11 10:56:46.746623 master-0 kubenswrapper[4790]: I1011 10:56:46.746590 4790 scope.go:117] "RemoveContainer" containerID="dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"
Oct 11 10:56:46.747110 master-0 kubenswrapper[4790]: I1011 10:56:46.747079 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5"} err="failed to get container status \"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5\": rpc error: code = NotFound desc = could not find container \"dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5\": container with ID starting with dd353e02d9ecaea5cfa6a72e50b08a9d7d2734f7d93ee1f944d9a386c0f703b5 not found: ID does not exist"
Oct 11 10:56:46.747165 master-0 kubenswrapper[4790]: I1011 10:56:46.747112 4790 scope.go:117] "RemoveContainer" containerID="edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"
Oct 11 10:56:46.747446 master-0 kubenswrapper[4790]: I1011 10:56:46.747425 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec"} err="failed to get container status \"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec\": rpc error: code = NotFound desc = could not find container \"edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec\": container with ID starting with edd9d5cc67a7fa520302dd1bce6ecd1726176be60b3256d0edfdffc2d8e7a1ec not found: ID does not exist"
Oct 11 10:56:46.761806 master-2 kubenswrapper[4776]: I1011 10:56:46.761663 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e"}
Oct 11 10:56:46.761806 master-2 kubenswrapper[4776]: I1011 10:56:46.761747 4776 scope.go:117] "RemoveContainer" containerID="bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10"
Oct 11 10:56:46.762022 master-2 kubenswrapper[4776]: I1011 10:56:46.761820 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 11 10:56:46.779068 master-2 kubenswrapper[4776]: I1011 10:56:46.778979 4776 scope.go:117] "RemoveContainer" containerID="5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6"
Oct 11 10:56:46.987830 master-0 kubenswrapper[4790]: I1011 10:56:46.987612 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:47.306747 master-1 kubenswrapper[4771]: I1011 10:56:47.306637 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Oct 11 10:56:47.308610 master-1 kubenswrapper[4771]: I1011 10:56:47.308559 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Oct 11 10:56:47.312053 master-1 kubenswrapper[4771]: I1011 10:56:47.311990 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Oct 11 10:56:47.393273 master-1 kubenswrapper[4771]: I1011 10:56:47.393194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Oct 11 10:56:47.399763 master-0 kubenswrapper[4790]: I1011 10:56:47.398975 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:47.430549 master-1 kubenswrapper[4771]: I1011 10:56:47.430487 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0"
Oct 11 10:56:47.430783 master-1 kubenswrapper[4771]: I1011 10:56:47.430661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0"
Oct 11 10:56:47.430783 master-1 kubenswrapper[4771]: I1011 10:56:47.430717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrm8\" (UniqueName: \"kubernetes.io/projected/b5934858-6421-4f73-9a74-3111541cc898-kube-api-access-mcrm8\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0"
Oct 11 10:56:47.531321 master-1 kubenswrapper[4771]: I1011 10:56:47.531227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11
10:56:47.532790 master-1 kubenswrapper[4771]: I1011 10:56:47.532721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:47.532924 master-1 kubenswrapper[4771]: I1011 10:56:47.532819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrm8\" (UniqueName: \"kubernetes.io/projected/b5934858-6421-4f73-9a74-3111541cc898-kube-api-access-mcrm8\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:47.533011 master-1 kubenswrapper[4771]: I1011 10:56:47.532944 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:47.538443 master-1 kubenswrapper[4771]: I1011 10:56:47.538310 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:47.540014 master-2 kubenswrapper[4776]: I1011 10:56:47.539948 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.541444 master-1 kubenswrapper[4771]: I1011 10:56:47.541381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5934858-6421-4f73-9a74-3111541cc898-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:47.554868 master-1 kubenswrapper[4771]: I1011 10:56:47.554801 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 10:56:47.556611 master-1 kubenswrapper[4771]: I1011 10:56:47.556512 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 10:56:47.557881 master-1 kubenswrapper[4771]: I1011 10:56:47.557785 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 10:56:47.559865 master-1 kubenswrapper[4771]: I1011 10:56:47.559810 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 10:56:47.564749 master-2 kubenswrapper[4776]: I1011 10:56:47.564668 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.593200 master-0 kubenswrapper[4790]: E1011 10:56:47.593058 4790 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:56:47.595758 master-0 kubenswrapper[4790]: E1011 10:56:47.595637 4790 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:56:47.599747 master-0 kubenswrapper[4790]: E1011 10:56:47.599601 4790 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:56:47.599866 master-0 kubenswrapper[4790]: E1011 10:56:47.599775 4790 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-2" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerName="nova-scheduler-scheduler" Oct 11 10:56:47.605825 master-0 kubenswrapper[4790]: I1011 10:56:47.605745 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:56:47.606021 master-0 kubenswrapper[4790]: I1011 10:56:47.605838 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:56:47.691841 master-0 kubenswrapper[4790]: I1011 10:56:47.691652 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:56:47.692104 master-0 kubenswrapper[4790]: E1011 10:56:47.692068 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-log" Oct 11 10:56:47.692104 master-0 kubenswrapper[4790]: I1011 10:56:47.692091 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-log" Oct 11 10:56:47.692210 master-0 kubenswrapper[4790]: E1011 10:56:47.692137 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-metadata" Oct 11 10:56:47.692210 master-0 kubenswrapper[4790]: I1011 10:56:47.692145 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-metadata" Oct 11 
10:56:47.692340 master-0 kubenswrapper[4790]: I1011 10:56:47.692303 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-metadata" Oct 11 10:56:47.692340 master-0 kubenswrapper[4790]: I1011 10:56:47.692334 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" containerName="nova-metadata-log" Oct 11 10:56:47.693609 master-0 kubenswrapper[4790]: I1011 10:56:47.693574 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:56:47.696948 master-0 kubenswrapper[4790]: I1011 10:56:47.696893 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 10:56:47.697171 master-0 kubenswrapper[4790]: I1011 10:56:47.697146 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:56:47.794956 master-2 kubenswrapper[4776]: I1011 10:56:47.794826 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:56:47.794956 master-2 kubenswrapper[4776]: I1011 10:56:47.794887 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:56:47.810372 master-2 kubenswrapper[4776]: I1011 10:56:47.810329 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.889788 master-0 kubenswrapper[4790]: I1011 10:56:47.889691 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ttc\" (UniqueName: \"kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.890098 master-0 kubenswrapper[4790]: I1011 
10:56:47.889832 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.890098 master-0 kubenswrapper[4790]: I1011 10:56:47.889896 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.890098 master-0 kubenswrapper[4790]: I1011 10:56:47.889947 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.890098 master-0 kubenswrapper[4790]: I1011 10:56:47.889979 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.964726 master-1 kubenswrapper[4771]: I1011 10:56:47.964552 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1" Oct 11 10:56:47.965140 master-1 kubenswrapper[4771]: I1011 10:56:47.965074 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1" Oct 11 10:56:47.966808 master-1 kubenswrapper[4771]: I1011 10:56:47.966754 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-1" Oct 11 10:56:47.991134 master-2 kubenswrapper[4776]: I1011 10:56:47.991086 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:47.991927 master-0 kubenswrapper[4790]: I1011 10:56:47.991760 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ttc\" (UniqueName: \"kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.991927 master-0 kubenswrapper[4790]: I1011 10:56:47.991847 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.991927 master-0 kubenswrapper[4790]: I1011 10:56:47.991870 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.991927 master-0 kubenswrapper[4790]: I1011 10:56:47.991902 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.991927 master-0 kubenswrapper[4790]: I1011 10:56:47.991941 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle\") 
pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:47.992629 master-0 kubenswrapper[4790]: I1011 10:56:47.992574 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:48.003097 master-0 kubenswrapper[4790]: I1011 10:56:48.003023 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:48.003241 master-0 kubenswrapper[4790]: I1011 10:56:48.003168 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:48.004487 master-0 kubenswrapper[4790]: I1011 10:56:48.004428 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:48.301920 master-0 kubenswrapper[4790]: I1011 10:56:48.301850 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb" path="/var/lib/kubelet/pods/aca84ed9-2b87-42a5-a6f5-33ab7f2d25eb/volumes" Oct 11 10:56:48.412014 master-0 kubenswrapper[4790]: I1011 10:56:48.411937 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 
10:56:48.420060 master-1 kubenswrapper[4771]: I1011 10:56:48.419993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrm8\" (UniqueName: \"kubernetes.io/projected/b5934858-6421-4f73-9a74-3111541cc898-kube-api-access-mcrm8\") pod \"nova-cell1-conductor-0\" (UID: \"b5934858-6421-4f73-9a74-3111541cc898\") " pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:48.436454 master-1 kubenswrapper[4771]: I1011 10:56:48.433444 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerStarted","Data":"4a07614606065916a0a6a5a0a24779d53f74c10d0d6a9319c0eda823b68adb65"} Oct 11 10:56:48.436454 master-1 kubenswrapper[4771]: I1011 10:56:48.434042 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 10:56:48.450431 master-1 kubenswrapper[4771]: I1011 10:56:48.450314 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 10:56:48.450431 master-1 kubenswrapper[4771]: I1011 10:56:48.450402 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1" Oct 11 10:56:48.538129 master-1 kubenswrapper[4771]: I1011 10:56:48.538027 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:48.597602 master-2 kubenswrapper[4776]: I1011 10:56:48.597417 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:48.614198 master-0 kubenswrapper[4790]: I1011 10:56:48.614052 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ttc\" (UniqueName: \"kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc\") pod \"nova-metadata-2\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " pod="openstack/nova-metadata-2" Oct 11 10:56:48.619825 master-1 kubenswrapper[4771]: I1011 10:56:48.619739 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" Oct 11 10:56:48.688268 master-0 kubenswrapper[4790]: I1011 10:56:48.688153 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.130.0.112:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:48.688536 master-0 kubenswrapper[4790]: I1011 10:56:48.688243 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.130.0.112:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:48.786757 master-2 kubenswrapper[4776]: I1011 10:56:48.786668 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f" exitCode=0 Oct 11 10:56:48.786757 master-2 kubenswrapper[4776]: I1011 10:56:48.786753 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f"} Oct 11 10:56:48.837080 master-2 kubenswrapper[4776]: I1011 10:56:48.836958 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:48.878105 master-2 kubenswrapper[4776]: I1011 10:56:48.877880 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:48.910167 master-0 kubenswrapper[4790]: I1011 10:56:48.909966 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:56:49.035000 master-2 kubenswrapper[4776]: I1011 10:56:49.034935 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:49.035285 master-2 kubenswrapper[4776]: E1011 10:56:49.035263 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.035285 master-2 kubenswrapper[4776]: I1011 10:56:49.035280 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.035364 master-2 kubenswrapper[4776]: E1011 10:56:49.035300 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035364 master-2 kubenswrapper[4776]: I1011 10:56:49.035306 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035491 master-2 kubenswrapper[4776]: I1011 10:56:49.035471 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035491 master-2 kubenswrapper[4776]: I1011 10:56:49.035487 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.036408 master-2 kubenswrapper[4776]: I1011 10:56:49.036384 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:49.048424 master-2 kubenswrapper[4776]: I1011 10:56:49.047414 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:49.082806 master-2 kubenswrapper[4776]: I1011 10:56:49.082756 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 11 10:56:49.267189 master-2 kubenswrapper[4776]: I1011 10:56:49.267132 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:49.277468 master-2 kubenswrapper[4776]: I1011 10:56:49.277390 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:49.280710 master-2 kubenswrapper[4776]: I1011 10:56:49.280633 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:56:49.282963 master-2 kubenswrapper[4776]: I1011 10:56:49.282908 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 11 10:56:49.283262 master-2 kubenswrapper[4776]: I1011 10:56:49.283227 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 11 10:56:49.376692 master-2 kubenswrapper[4776]: I1011 10:56:49.376602 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " 
pod="openstack/nova-api-2" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376774 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376889 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377013 master-2 kubenswrapper[4776]: I1011 10:56:49.376981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.377088 master-2 kubenswrapper[4776]: I1011 10:56:49.377056 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377130 master-2 kubenswrapper[4776]: I1011 10:56:49.377103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377179 master-2 kubenswrapper[4776]: I1011 10:56:49.377135 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.420927 master-2 kubenswrapper[4776]: I1011 10:56:49.420849 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:49.441848 master-1 kubenswrapper[4771]: I1011 10:56:49.441547 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 11 10:56:49.463896 master-1 kubenswrapper[4771]: I1011 10:56:49.463702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b5934858-6421-4f73-9a74-3111541cc898","Type":"ContainerStarted","Data":"a03501b6649e0c6f0e4295eaab4fcf27313def1221099088aa148210423ed3da"} Oct 11 10:56:49.465447 master-1 kubenswrapper[4771]: I1011 10:56:49.465386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerStarted","Data":"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a"} Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.488894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 
10:56:49.489086 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489147 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489302 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489358 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.492402 master-2 kubenswrapper[4776]: I1011 10:56:49.492292 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.501866 master-2 kubenswrapper[4776]: I1011 10:56:49.501188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.501866 master-2 kubenswrapper[4776]: I1011 10:56:49.501400 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.503202 master-2 kubenswrapper[4776]: I1011 10:56:49.502879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.503391 master-2 kubenswrapper[4776]: I1011 10:56:49.503144 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.507291 master-2 kubenswrapper[4776]: I1011 10:56:49.504835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.526540 master-0 kubenswrapper[4790]: I1011 10:56:49.526468 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"]
Oct 11 10:56:49.539829 master-2 kubenswrapper[4776]: I1011 10:56:49.539718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2"
Oct 11 10:56:49.546522 master-2 kubenswrapper[4776]: I1011 10:56:49.546481 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0"
Oct 11 10:56:49.572112 master-1 kubenswrapper[4771]: I1011 10:56:49.572054 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"]
Oct 11 10:56:49.572574 master-1 kubenswrapper[4771]: I1011 10:56:49.572394 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="dnsmasq-dns" containerID="cri-o://f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973" gracePeriod=10
Oct 11 10:56:49.652111 master-2 kubenswrapper[4776]: I1011 10:56:49.652056 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 11 10:56:49.665762 master-2 kubenswrapper[4776]: I1011 10:56:49.665698 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 11 10:56:49.716090 master-0 kubenswrapper[4790]: I1011 10:56:49.716036 4790 generic.go:334] "Generic (PLEG): container finished" podID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerID="bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866" exitCode=0
Oct 11 10:56:49.716292 master-0 kubenswrapper[4790]: I1011 10:56:49.716128 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"0edb0512-334f-4bfd-b297-cce29a7c510b","Type":"ContainerDied","Data":"bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866"}
Oct 11 10:56:49.717889 master-0 kubenswrapper[4790]: I1011 10:56:49.717855 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerStarted","Data":"a01fc17bb36e96804df4939bb484d9c50eb215a10fead9c510b32b80ed9bd4c0"}
Oct 11 10:56:49.867832 master-2 kubenswrapper[4776]: I1011 10:56:49.867522 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"bc39980fe155a215983f5125ab18f03af9b1943c164a6004518235f80c71b417"}
Oct 11 10:56:50.036736 master-0 kubenswrapper[4790]: I1011 10:56:50.035690 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2"
Oct 11 10:56:50.075439 master-2 kubenswrapper[4776]: I1011 10:56:50.075389 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" path="/var/lib/kubelet/pods/3a3bc084-f5d9-4e64-9350-d2c3b3487e76/volumes"
Oct 11 10:56:50.078936 master-1 kubenswrapper[4771]: I1011 10:56:50.078864 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j"
Oct 11 10:56:50.142317 master-0 kubenswrapper[4790]: I1011 10:56:50.141223 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data\") pod \"0edb0512-334f-4bfd-b297-cce29a7c510b\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") "
Oct 11 10:56:50.142317 master-0 kubenswrapper[4790]: I1011 10:56:50.141337 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle\") pod \"0edb0512-334f-4bfd-b297-cce29a7c510b\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") "
Oct 11 10:56:50.142317 master-0 kubenswrapper[4790]: I1011 10:56:50.141380 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6w4\" (UniqueName: \"kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4\") pod \"0edb0512-334f-4bfd-b297-cce29a7c510b\" (UID: \"0edb0512-334f-4bfd-b297-cce29a7c510b\") "
Oct 11 10:56:50.147154 master-0 kubenswrapper[4790]: I1011 10:56:50.146602 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4" (OuterVolumeSpecName: "kube-api-access-9l6w4") pod "0edb0512-334f-4bfd-b297-cce29a7c510b" (UID: "0edb0512-334f-4bfd-b297-cce29a7c510b"). InnerVolumeSpecName "kube-api-access-9l6w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:50.168254 master-0 kubenswrapper[4790]: I1011 10:56:50.168164 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0edb0512-334f-4bfd-b297-cce29a7c510b" (UID: "0edb0512-334f-4bfd-b297-cce29a7c510b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.175717 master-0 kubenswrapper[4790]: I1011 10:56:50.175018 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data" (OuterVolumeSpecName: "config-data") pod "0edb0512-334f-4bfd-b297-cce29a7c510b" (UID: "0edb0512-334f-4bfd-b297-cce29a7c510b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.199557 master-1 kubenswrapper[4771]: I1011 10:56:50.199486 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.199933 master-1 kubenswrapper[4771]: I1011 10:56:50.199637 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zxc\" (UniqueName: \"kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.199933 master-1 kubenswrapper[4771]: I1011 10:56:50.199683 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.199933 master-1 kubenswrapper[4771]: I1011 10:56:50.199735 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.199933 master-1 kubenswrapper[4771]: I1011 10:56:50.199808 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.199933 master-1 kubenswrapper[4771]: I1011 10:56:50.199858 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb\") pod \"7f2f3d22-d709-4602-bb25-2c17626b75f1\" (UID: \"7f2f3d22-d709-4602-bb25-2c17626b75f1\") "
Oct 11 10:56:50.206793 master-1 kubenswrapper[4771]: I1011 10:56:50.206712 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc" (OuterVolumeSpecName: "kube-api-access-64zxc") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "kube-api-access-64zxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:50.239242 master-1 kubenswrapper[4771]: I1011 10:56:50.239003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config" (OuterVolumeSpecName: "config") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:50.243974 master-0 kubenswrapper[4790]: I1011 10:56:50.243675 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.243974 master-0 kubenswrapper[4790]: I1011 10:56:50.243740 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6w4\" (UniqueName: \"kubernetes.io/projected/0edb0512-334f-4bfd-b297-cce29a7c510b-kube-api-access-9l6w4\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.243974 master-0 kubenswrapper[4790]: I1011 10:56:50.243755 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0edb0512-334f-4bfd-b297-cce29a7c510b-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.247027 master-1 kubenswrapper[4771]: I1011 10:56:50.246960 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:50.262203 master-1 kubenswrapper[4771]: I1011 10:56:50.261879 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:50.262453 master-1 kubenswrapper[4771]: I1011 10:56:50.262215 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:50.273444 master-1 kubenswrapper[4771]: I1011 10:56:50.273378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f2f3d22-d709-4602-bb25-2c17626b75f1" (UID: "7f2f3d22-d709-4602-bb25-2c17626b75f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:56:50.289233 master-2 kubenswrapper[4776]: I1011 10:56:50.288904 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"]
Oct 11 10:56:50.302837 master-1 kubenswrapper[4771]: I1011 10:56:50.302781 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.302837 master-1 kubenswrapper[4771]: I1011 10:56:50.302823 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.303075 master-1 kubenswrapper[4771]: I1011 10:56:50.302866 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.303075 master-1 kubenswrapper[4771]: I1011 10:56:50.302878 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zxc\" (UniqueName: \"kubernetes.io/projected/7f2f3d22-d709-4602-bb25-2c17626b75f1-kube-api-access-64zxc\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.303075 master-1 kubenswrapper[4771]: I1011 10:56:50.302886 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.303075 master-1 kubenswrapper[4771]: I1011 10:56:50.302895 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f2f3d22-d709-4602-bb25-2c17626b75f1-dns-svc\") on node \"master-1\" DevicePath \"\""
Oct 11 10:56:50.420512 master-0 kubenswrapper[4790]: I1011 10:56:50.420463 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-vk5xz"
Oct 11 10:56:50.447750 master-0 kubenswrapper[4790]: I1011 10:56:50.446865 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config\") pod \"7739fd2d-10b5-425d-acbf-f50630f07017\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") "
Oct 11 10:56:50.447750 master-0 kubenswrapper[4790]: I1011 10:56:50.446954 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96jc\" (UniqueName: \"kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc\") pod \"7739fd2d-10b5-425d-acbf-f50630f07017\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") "
Oct 11 10:56:50.447750 master-0 kubenswrapper[4790]: I1011 10:56:50.447073 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle\") pod \"7739fd2d-10b5-425d-acbf-f50630f07017\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") "
Oct 11 10:56:50.447750 master-0 kubenswrapper[4790]: I1011 10:56:50.447124 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs\") pod \"7739fd2d-10b5-425d-acbf-f50630f07017\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") "
Oct 11 10:56:50.447750 master-0 kubenswrapper[4790]: I1011 10:56:50.447171 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config\") pod \"7739fd2d-10b5-425d-acbf-f50630f07017\" (UID: \"7739fd2d-10b5-425d-acbf-f50630f07017\") "
Oct 11 10:56:50.457751 master-0 kubenswrapper[4790]: I1011 10:56:50.451848 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7739fd2d-10b5-425d-acbf-f50630f07017" (UID: "7739fd2d-10b5-425d-acbf-f50630f07017"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.457751 master-0 kubenswrapper[4790]: I1011 10:56:50.454423 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc" (OuterVolumeSpecName: "kube-api-access-f96jc") pod "7739fd2d-10b5-425d-acbf-f50630f07017" (UID: "7739fd2d-10b5-425d-acbf-f50630f07017"). InnerVolumeSpecName "kube-api-access-f96jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:56:50.476672 master-1 kubenswrapper[4771]: I1011 10:56:50.476608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerStarted","Data":"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b"}
Oct 11 10:56:50.479553 master-1 kubenswrapper[4771]: I1011 10:56:50.479436 4771 generic.go:334] "Generic (PLEG): container finished" podID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerID="f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973" exitCode=0
Oct 11 10:56:50.479553 master-1 kubenswrapper[4771]: I1011 10:56:50.479501 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" event={"ID":"7f2f3d22-d709-4602-bb25-2c17626b75f1","Type":"ContainerDied","Data":"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"}
Oct 11 10:56:50.479684 master-1 kubenswrapper[4771]: I1011 10:56:50.479593 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j" event={"ID":"7f2f3d22-d709-4602-bb25-2c17626b75f1","Type":"ContainerDied","Data":"44d3f4f2f792c76343fbbdc36dc9dcbfc23552c6a12758c872ba3f43c891c162"}
Oct 11 10:56:50.479684 master-1 kubenswrapper[4771]: I1011 10:56:50.479626 4771 scope.go:117] "RemoveContainer" containerID="f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"
Oct 11 10:56:50.479684 master-1 kubenswrapper[4771]: I1011 10:56:50.479531 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768f954cfc-gvj8j"
Oct 11 10:56:50.481971 master-1 kubenswrapper[4771]: I1011 10:56:50.481904 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b5934858-6421-4f73-9a74-3111541cc898","Type":"ContainerStarted","Data":"9910b1f3b104026b038b4ca3a28b0b7ef93f104d170c8aa8aecb636462fb76f3"}
Oct 11 10:56:50.482229 master-1 kubenswrapper[4771]: I1011 10:56:50.482177 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Oct 11 10:56:50.502877 master-0 kubenswrapper[4790]: I1011 10:56:50.498842 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config" (OuterVolumeSpecName: "config") pod "7739fd2d-10b5-425d-acbf-f50630f07017" (UID: "7739fd2d-10b5-425d-acbf-f50630f07017"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.503358 master-0 kubenswrapper[4790]: I1011 10:56:50.503320 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7739fd2d-10b5-425d-acbf-f50630f07017" (UID: "7739fd2d-10b5-425d-acbf-f50630f07017"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.514222 master-1 kubenswrapper[4771]: I1011 10:56:50.514139 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.514117184 podStartE2EDuration="4.514117184s" podCreationTimestamp="2025-10-11 10:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:50.508803839 +0000 UTC m=+1842.483030270" watchObservedRunningTime="2025-10-11 10:56:50.514117184 +0000 UTC m=+1842.488343635"
Oct 11 10:56:50.523721 master-2 kubenswrapper[4776]: W1011 10:56:50.523632 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df WatchSource:0}: Error finding container 317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df: Status 404 returned error can't find the container with id 317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df
Oct 11 10:56:50.525360 master-2 kubenswrapper[4776]: I1011 10:56:50.525319 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 11 10:56:50.533726 master-0 kubenswrapper[4790]: I1011 10:56:50.533622 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7739fd2d-10b5-425d-acbf-f50630f07017" (UID: "7739fd2d-10b5-425d-acbf-f50630f07017"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:56:50.549455 master-1 kubenswrapper[4771]: I1011 10:56:50.549348 4771 scope.go:117] "RemoveContainer" containerID="4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1"
Oct 11 10:56:50.550066 master-0 kubenswrapper[4790]: I1011 10:56:50.550012 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96jc\" (UniqueName: \"kubernetes.io/projected/7739fd2d-10b5-425d-acbf-f50630f07017-kube-api-access-f96jc\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.550066 master-0 kubenswrapper[4790]: I1011 10:56:50.550067 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.550263 master-0 kubenswrapper[4790]: I1011 10:56:50.550082 4790 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.550263 master-0 kubenswrapper[4790]: I1011 10:56:50.550095 4790 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-httpd-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.550263 master-0 kubenswrapper[4790]: I1011 10:56:50.550110 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7739fd2d-10b5-425d-acbf-f50630f07017-config\") on node \"master-0\" DevicePath \"\""
Oct 11 10:56:50.566777 master-1 kubenswrapper[4771]: I1011 10:56:50.566510 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"]
Oct 11 10:56:50.572955 master-1 kubenswrapper[4771]: I1011 10:56:50.572894 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-768f954cfc-gvj8j"]
Oct 11 10:56:50.591244 master-1 kubenswrapper[4771]: I1011 10:56:50.591205 4771 scope.go:117] "RemoveContainer" containerID="f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"
Oct 11 10:56:50.592082 master-1 kubenswrapper[4771]: E1011 10:56:50.591973 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973\": container with ID starting with f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973 not found: ID does not exist" containerID="f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"
Oct 11 10:56:50.592167 master-1 kubenswrapper[4771]: I1011 10:56:50.592105 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973"} err="failed to get container status \"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973\": rpc error: code = NotFound desc = could not find container \"f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973\": container with ID starting with f842da8207799702100719e16d6957042c6a44764caa5bd18413f4806a961973 not found: ID does not exist"
Oct 11 10:56:50.592167 master-1 kubenswrapper[4771]: I1011 10:56:50.592155 4771 scope.go:117] "RemoveContainer" containerID="4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1"
Oct 11 10:56:50.592826 master-1 kubenswrapper[4771]: E1011 10:56:50.592797 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1\": container with ID starting with 4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1 not found: ID does not exist" containerID="4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1"
Oct 11 10:56:50.593025 master-1 kubenswrapper[4771]: I1011 10:56:50.592990 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1"} err="failed to get container status \"4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1\": rpc error: code = NotFound desc = could not find container \"4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1\": container with ID starting with 4fae999cde829daae9a63e594b851b24a38e90a61f567ae366b32cc609a54cf1 not found: ID does not exist"
Oct 11 10:56:50.727524 master-0 kubenswrapper[4790]: I1011 10:56:50.727449 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerStarted","Data":"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc"}
Oct 11 10:56:50.727524 master-0 kubenswrapper[4790]: I1011 10:56:50.727522 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerStarted","Data":"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a"}
Oct 11 10:56:50.731062 master-0 kubenswrapper[4790]: I1011 10:56:50.731002 4790 generic.go:334] "Generic (PLEG): container finished" podID="7739fd2d-10b5-425d-acbf-f50630f07017" containerID="477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe" exitCode=0
Oct 11 10:56:50.731152 master-0 kubenswrapper[4790]: I1011 10:56:50.731068 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerDied","Data":"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"}
Oct 11 10:56:50.731187 master-0 kubenswrapper[4790]: I1011 10:56:50.731132 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-vk5xz"
Oct 11 10:56:50.731187 master-0 kubenswrapper[4790]: I1011 10:56:50.731164 4790 scope.go:117] "RemoveContainer" containerID="2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"
Oct 11 10:56:50.731259 master-0 kubenswrapper[4790]: I1011 10:56:50.731145 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-vk5xz" event={"ID":"7739fd2d-10b5-425d-acbf-f50630f07017","Type":"ContainerDied","Data":"bff1b3163e12e8cb39a3fbc83d1e97e1c30281603df9047eeb4b5e8ea878efe5"}
Oct 11 10:56:50.733496 master-0 kubenswrapper[4790]: I1011 10:56:50.733457 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"0edb0512-334f-4bfd-b297-cce29a7c510b","Type":"ContainerDied","Data":"ce923dabd975e7c88686332f8fea7fed5cd13b9284d3e5e9dcd78a4cee6b26bf"}
Oct 11 10:56:50.733552 master-0 kubenswrapper[4790]: I1011 10:56:50.733511 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2"
Oct 11 10:56:50.762032 master-0 kubenswrapper[4790]: I1011 10:56:50.761905 4790 scope.go:117] "RemoveContainer" containerID="477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"
Oct 11 10:56:50.781973 master-0 kubenswrapper[4790]: I1011 10:56:50.781852 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-2" podStartSLOduration=3.781818001 podStartE2EDuration="3.781818001s" podCreationTimestamp="2025-10-11 10:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:50.773003529 +0000 UTC m=+1087.327463841" watchObservedRunningTime="2025-10-11 10:56:50.781818001 +0000 UTC m=+1087.336278303"
Oct 11 10:56:50.793822 master-0 kubenswrapper[4790]: I1011 10:56:50.793769 4790 scope.go:117] "RemoveContainer" containerID="2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"
Oct 11 10:56:50.794419 master-0 kubenswrapper[4790]: E1011 10:56:50.794339 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e\": container with ID starting with 2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e not found: ID does not exist" containerID="2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"
Oct 11 10:56:50.794507 master-0 kubenswrapper[4790]: I1011 10:56:50.794415 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e"} err="failed to get container status \"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e\": rpc error: code = NotFound desc = could not find container \"2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e\": container with ID starting with 2d24a46f82f260f086d4dccdce3d54977e12591932842b6bc27cb71f5af9d42e not found: ID does not exist"
Oct 11 10:56:50.794507 master-0 kubenswrapper[4790]: I1011 10:56:50.794449 4790 scope.go:117] "RemoveContainer" containerID="477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"
Oct 11 10:56:50.799158 master-0 kubenswrapper[4790]: E1011 10:56:50.798863 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe\": container with ID starting with 477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe not found: ID does not exist" containerID="477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"
Oct 11 10:56:50.799158 master-0 kubenswrapper[4790]: I1011 10:56:50.798937 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe"} err="failed to get container status \"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe\": rpc error: code = NotFound desc = could not find container \"477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe\": container with ID starting with 477999ad20086f7e774ba456600ecbd44f0948bdc686eaeda5ddb795290ac9fe not found: ID does not exist"
Oct 11 10:56:50.799158 master-0 kubenswrapper[4790]: I1011 10:56:50.798975 4790 scope.go:117] "RemoveContainer" containerID="bcee916a93846dcbbb2d3286227b4256009e1e87a4a77cd08320f9d8ba675866"
Oct 11 10:56:50.810200 master-0 kubenswrapper[4790]: I1011 10:56:50.810064 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"]
Oct 11 10:56:50.818211 master-0 kubenswrapper[4790]: I1011 10:56:50.818144 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887b79bcd-vk5xz"]
Oct 11 10:56:50.830395 master-0 kubenswrapper[4790]: I1011 10:56:50.830328 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-2"]
Oct 11 10:56:50.839935 master-0 kubenswrapper[4790]: I1011 10:56:50.839866 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-2"]
Oct 11 10:56:50.876966 master-0 kubenswrapper[4790]: I1011 10:56:50.876867 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-2"]
Oct 11 10:56:50.877548 master-0 kubenswrapper[4790]: E1011 10:56:50.877502 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerName="nova-scheduler-scheduler"
Oct 11 10:56:50.877548 master-0 kubenswrapper[4790]: I1011 10:56:50.877531 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerName="nova-scheduler-scheduler"
Oct 11 10:56:50.877748 master-0 kubenswrapper[4790]: E1011 10:56:50.877556 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-api"
Oct 11 10:56:50.877748 master-0 kubenswrapper[4790]: I1011 10:56:50.877567 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-api"
Oct 11 10:56:50.877748 master-0 kubenswrapper[4790]: E1011 10:56:50.877584 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-httpd"
Oct 11 10:56:50.877748 master-0 kubenswrapper[4790]: I1011 10:56:50.877592 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-httpd"
Oct 11 10:56:50.880859 master-0 kubenswrapper[4790]: I1011 10:56:50.879264 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-httpd"
Oct 11 10:56:50.880859 master-0 kubenswrapper[4790]: I1011 10:56:50.879295 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" containerName="nova-scheduler-scheduler"
Oct 11 10:56:50.880859 master-0 kubenswrapper[4790]: I1011 10:56:50.879305 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" containerName="neutron-api"
Oct 11 10:56:50.880859 master-0 kubenswrapper[4790]: I1011 10:56:50.880192 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2"
Oct 11 10:56:50.883089 master-2 kubenswrapper[4776]: I1011 10:56:50.882974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"6f4994c81eacfe212a6b9366f040eee5ed5fea00a06f081dd304d4196753ce98"}
Oct 11 10:56:50.883089 master-2 kubenswrapper[4776]: I1011 10:56:50.883066 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"031ee9bd372b45f4446076c4504a27d6422c72aa7df9a11e5f860b1bc605188d"}
Oct 11 10:56:50.883927 master-2 kubenswrapper[4776]: I1011 10:56:50.883554 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Oct 11 10:56:50.883927 master-2 kubenswrapper[4776]: I1011 10:56:50.883796 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Oct 11 10:56:50.884279 master-0 kubenswrapper[4790]: I1011 10:56:50.883462 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 11 10:56:50.885508 master-2 kubenswrapper[4776]: I1011 10:56:50.885424 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df"}
Oct 11
10:56:50.887860 master-0 kubenswrapper[4790]: I1011 10:56:50.887787 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:56:50.888443 master-2 kubenswrapper[4776]: I1011 10:56:50.888390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70"} Oct 11 10:56:50.888443 master-2 kubenswrapper[4776]: I1011 10:56:50.888437 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954"} Oct 11 10:56:50.888563 master-2 kubenswrapper[4776]: I1011 10:56:50.888458 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac"} Oct 11 10:56:50.935462 master-2 kubenswrapper[4776]: I1011 10:56:50.935334 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=59.218879773 podStartE2EDuration="1m36.93529401s" podCreationTimestamp="2025-10-11 10:55:14 +0000 UTC" firstStartedPulling="2025-10-11 10:55:28.58384402 +0000 UTC m=+1763.368270729" lastFinishedPulling="2025-10-11 10:56:06.300258237 +0000 UTC m=+1801.084684966" observedRunningTime="2025-10-11 10:56:50.924039595 +0000 UTC m=+1845.708466304" watchObservedRunningTime="2025-10-11 10:56:50.93529401 +0000 UTC m=+1845.719720719" Oct 11 10:56:50.953158 master-2 kubenswrapper[4776]: I1011 10:56:50.953032 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.952967759 podStartE2EDuration="2.952967759s" podCreationTimestamp="2025-10-11 10:56:48 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:50.947015997 +0000 UTC m=+1845.731442706" watchObservedRunningTime="2025-10-11 10:56:50.952967759 +0000 UTC m=+1845.737394479" Oct 11 10:56:50.961408 master-0 kubenswrapper[4790]: I1011 10:56:50.961324 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2b6\" (UniqueName: \"kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:50.961408 master-0 kubenswrapper[4790]: I1011 10:56:50.961389 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:50.961739 master-0 kubenswrapper[4790]: I1011 10:56:50.961475 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.063477 master-0 kubenswrapper[4790]: I1011 10:56:51.063381 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.063773 master-0 kubenswrapper[4790]: I1011 10:56:51.063519 4790 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xj2b6\" (UniqueName: \"kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.063773 master-0 kubenswrapper[4790]: I1011 10:56:51.063554 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.067850 master-0 kubenswrapper[4790]: I1011 10:56:51.067810 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.070797 master-0 kubenswrapper[4790]: I1011 10:56:51.070243 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.090590 master-0 kubenswrapper[4790]: I1011 10:56:51.090525 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2b6\" (UniqueName: \"kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6\") pod \"nova-scheduler-2\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " pod="openstack/nova-scheduler-2" Oct 11 10:56:51.215051 master-0 kubenswrapper[4790]: I1011 10:56:51.214952 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:56:51.364193 master-1 kubenswrapper[4771]: I1011 10:56:51.364063 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:51.495130 master-1 kubenswrapper[4771]: I1011 10:56:51.495046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerStarted","Data":"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961"} Oct 11 10:56:51.658981 master-0 kubenswrapper[4790]: I1011 10:56:51.658920 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:56:51.744969 master-0 kubenswrapper[4790]: I1011 10:56:51.744905 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"08f5fb34-a451-48f6-91f4-60d27bfd939c","Type":"ContainerStarted","Data":"7550d6a37aff89c94d6dda17710e1becb7a0d5864e9949954fef2e9819f39291"} Oct 11 10:56:51.898122 master-2 kubenswrapper[4776]: I1011 10:56:51.898077 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Oct 11 10:56:52.304256 master-0 kubenswrapper[4790]: I1011 10:56:52.304132 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0edb0512-334f-4bfd-b297-cce29a7c510b" path="/var/lib/kubelet/pods/0edb0512-334f-4bfd-b297-cce29a7c510b/volumes" Oct 11 10:56:52.305075 master-0 kubenswrapper[4790]: I1011 10:56:52.305019 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7739fd2d-10b5-425d-acbf-f50630f07017" path="/var/lib/kubelet/pods/7739fd2d-10b5-425d-acbf-f50630f07017/volumes" Oct 11 10:56:52.450077 master-1 kubenswrapper[4771]: I1011 10:56:52.450024 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" path="/var/lib/kubelet/pods/7f2f3d22-d709-4602-bb25-2c17626b75f1/volumes" Oct 11 10:56:52.503113 
master-1 kubenswrapper[4771]: I1011 10:56:52.503044 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerStarted","Data":"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010"} Oct 11 10:56:52.503331 master-1 kubenswrapper[4771]: I1011 10:56:52.503239 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-central-agent" containerID="cri-o://5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a" gracePeriod=30 Oct 11 10:56:52.503546 master-1 kubenswrapper[4771]: I1011 10:56:52.503520 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 10:56:52.503764 master-1 kubenswrapper[4771]: I1011 10:56:52.503737 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="proxy-httpd" containerID="cri-o://32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010" gracePeriod=30 Oct 11 10:56:52.503813 master-1 kubenswrapper[4771]: I1011 10:56:52.503796 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="sg-core" containerID="cri-o://815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961" gracePeriod=30 Oct 11 10:56:52.503861 master-1 kubenswrapper[4771]: I1011 10:56:52.503842 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-notification-agent" containerID="cri-o://40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b" gracePeriod=30 Oct 11 10:56:52.539657 master-1 kubenswrapper[4771]: I1011 10:56:52.539547 4771 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.12288801 podStartE2EDuration="7.539521911s" podCreationTimestamp="2025-10-11 10:56:45 +0000 UTC" firstStartedPulling="2025-10-11 10:56:47.459936986 +0000 UTC m=+1839.434163437" lastFinishedPulling="2025-10-11 10:56:51.876570877 +0000 UTC m=+1843.850797338" observedRunningTime="2025-10-11 10:56:52.530867178 +0000 UTC m=+1844.505093619" watchObservedRunningTime="2025-10-11 10:56:52.539521911 +0000 UTC m=+1844.513748352" Oct 11 10:56:52.776510 master-0 kubenswrapper[4790]: I1011 10:56:52.776425 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"08f5fb34-a451-48f6-91f4-60d27bfd939c","Type":"ContainerStarted","Data":"4e0c106293c88310a5257072d57742a9c018d67c1675782ea7da7a234c8ae82b"} Oct 11 10:56:52.810145 master-0 kubenswrapper[4790]: I1011 10:56:52.810055 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-2" podStartSLOduration=2.8100354899999997 podStartE2EDuration="2.81003549s" podCreationTimestamp="2025-10-11 10:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:52.80530884 +0000 UTC m=+1089.359769142" watchObservedRunningTime="2025-10-11 10:56:52.81003549 +0000 UTC m=+1089.364495782" Oct 11 10:56:53.519012 master-1 kubenswrapper[4771]: I1011 10:56:53.518939 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9356157-35da-4cf7-a755-86123f5e09a0" containerID="32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010" exitCode=0 Oct 11 10:56:53.519012 master-1 kubenswrapper[4771]: I1011 10:56:53.518981 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9356157-35da-4cf7-a755-86123f5e09a0" containerID="815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961" exitCode=2 Oct 11 10:56:53.519012 master-1 
kubenswrapper[4771]: I1011 10:56:53.518991 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9356157-35da-4cf7-a755-86123f5e09a0" containerID="40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b" exitCode=0 Oct 11 10:56:53.519012 master-1 kubenswrapper[4771]: I1011 10:56:53.519017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerDied","Data":"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010"} Oct 11 10:56:53.520054 master-1 kubenswrapper[4771]: I1011 10:56:53.519050 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerDied","Data":"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961"} Oct 11 10:56:53.520054 master-1 kubenswrapper[4771]: I1011 10:56:53.519068 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerDied","Data":"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b"} Oct 11 10:56:53.910573 master-0 kubenswrapper[4790]: I1011 10:56:53.910487 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:56:53.911178 master-0 kubenswrapper[4790]: I1011 10:56:53.911042 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:56:54.092139 master-2 kubenswrapper[4776]: I1011 10:56:54.091982 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:54.928621 master-2 kubenswrapper[4776]: I1011 10:56:54.928556 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01"} Oct 11 10:56:56.215540 
master-0 kubenswrapper[4790]: I1011 10:56:56.215455 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-2" Oct 11 10:56:56.955435 master-2 kubenswrapper[4776]: I1011 10:56:56.955377 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253"} Oct 11 10:56:57.604466 master-0 kubenswrapper[4790]: I1011 10:56:57.604354 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:56:57.604466 master-0 kubenswrapper[4790]: I1011 10:56:57.604436 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:56:57.611084 master-0 kubenswrapper[4790]: I1011 10:56:57.611001 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:56:57.611311 master-0 kubenswrapper[4790]: I1011 10:56:57.611162 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:56:57.797802 master-2 kubenswrapper[4776]: I1011 10:56:57.797751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:56:57.798443 master-2 kubenswrapper[4776]: I1011 10:56:57.798365 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:56:57.800752 master-2 kubenswrapper[4776]: I1011 10:56:57.800699 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:56:57.829497 master-0 kubenswrapper[4790]: I1011 10:56:57.829436 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:56:57.834609 master-0 kubenswrapper[4790]: I1011 10:56:57.834528 4790 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:56:57.967250 master-2 kubenswrapper[4776]: I1011 10:56:57.967202 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:56:58.211881 master-1 kubenswrapper[4771]: I1011 10:56:58.211807 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290100 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290194 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290261 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290407 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wjb6\" (UniqueName: \"kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290441 4771 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290571 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml\") pod \"e9356157-35da-4cf7-a755-86123f5e09a0\" (UID: \"e9356157-35da-4cf7-a755-86123f5e09a0\") " Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.290696 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:58.291307 master-1 kubenswrapper[4771]: I1011 10:56:58.291053 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.296963 master-1 kubenswrapper[4771]: I1011 10:56:58.296894 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:58.298483 master-1 kubenswrapper[4771]: I1011 10:56:58.298394 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts" (OuterVolumeSpecName: "scripts") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:58.299097 master-1 kubenswrapper[4771]: I1011 10:56:58.298987 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6" (OuterVolumeSpecName: "kube-api-access-5wjb6") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "kube-api-access-5wjb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:58.325092 master-1 kubenswrapper[4771]: I1011 10:56:58.325009 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:58.338195 master-1 kubenswrapper[4771]: I1011 10:56:58.338134 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:58.368553 master-1 kubenswrapper[4771]: I1011 10:56:58.368468 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392756 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wjb6\" (UniqueName: \"kubernetes.io/projected/e9356157-35da-4cf7-a755-86123f5e09a0-kube-api-access-5wjb6\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392790 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392806 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9356157-35da-4cf7-a755-86123f5e09a0-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392815 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392824 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.392914 master-1 kubenswrapper[4771]: I1011 10:56:58.392833 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-ceilometer-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.399733 master-1 kubenswrapper[4771]: I1011 10:56:58.399667 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data" 
(OuterVolumeSpecName: "config-data") pod "e9356157-35da-4cf7-a755-86123f5e09a0" (UID: "e9356157-35da-4cf7-a755-86123f5e09a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:58.494570 master-1 kubenswrapper[4771]: I1011 10:56:58.494519 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9356157-35da-4cf7-a755-86123f5e09a0-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:56:58.580563 master-1 kubenswrapper[4771]: I1011 10:56:58.580210 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 11 10:56:58.594541 master-1 kubenswrapper[4771]: I1011 10:56:58.594465 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9356157-35da-4cf7-a755-86123f5e09a0" containerID="5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a" exitCode=0 Oct 11 10:56:58.594541 master-1 kubenswrapper[4771]: I1011 10:56:58.594542 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerDied","Data":"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a"} Oct 11 10:56:58.594857 master-1 kubenswrapper[4771]: I1011 10:56:58.594589 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9356157-35da-4cf7-a755-86123f5e09a0","Type":"ContainerDied","Data":"4a07614606065916a0a6a5a0a24779d53f74c10d0d6a9319c0eda823b68adb65"} Oct 11 10:56:58.594857 master-1 kubenswrapper[4771]: I1011 10:56:58.594613 4771 scope.go:117] "RemoveContainer" containerID="32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010" Oct 11 10:56:58.594857 master-1 kubenswrapper[4771]: I1011 10:56:58.594658 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:58.634274 master-1 kubenswrapper[4771]: I1011 10:56:58.634209 4771 scope.go:117] "RemoveContainer" containerID="815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961" Oct 11 10:56:58.639440 master-1 kubenswrapper[4771]: I1011 10:56:58.639389 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:58.657897 master-1 kubenswrapper[4771]: I1011 10:56:58.657824 4771 scope.go:117] "RemoveContainer" containerID="40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b" Oct 11 10:56:58.669085 master-1 kubenswrapper[4771]: I1011 10:56:58.669020 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:58.676813 master-1 kubenswrapper[4771]: I1011 10:56:58.676770 4771 scope.go:117] "RemoveContainer" containerID="5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a" Oct 11 10:56:58.700954 master-1 kubenswrapper[4771]: I1011 10:56:58.700895 4771 scope.go:117] "RemoveContainer" containerID="32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010" Oct 11 10:56:58.701426 master-1 kubenswrapper[4771]: E1011 10:56:58.701315 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010\": container with ID starting with 32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010 not found: ID does not exist" containerID="32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010" Oct 11 10:56:58.701426 master-1 kubenswrapper[4771]: I1011 10:56:58.701391 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010"} err="failed to get container status \"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010\": rpc error: code = NotFound desc = 
could not find container \"32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010\": container with ID starting with 32a9f5966292aad5b339211592f59adf619a4421ecf5cf67f9fe01db264f2010 not found: ID does not exist" Oct 11 10:56:58.701569 master-1 kubenswrapper[4771]: I1011 10:56:58.701432 4771 scope.go:117] "RemoveContainer" containerID="815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961" Oct 11 10:56:58.701973 master-1 kubenswrapper[4771]: E1011 10:56:58.701909 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961\": container with ID starting with 815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961 not found: ID does not exist" containerID="815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961" Oct 11 10:56:58.702069 master-1 kubenswrapper[4771]: I1011 10:56:58.701971 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961"} err="failed to get container status \"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961\": rpc error: code = NotFound desc = could not find container \"815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961\": container with ID starting with 815d31b7bfb4da3463e478b429dc12c791555c18207b5e1a4435119f5ff42961 not found: ID does not exist" Oct 11 10:56:58.702069 master-1 kubenswrapper[4771]: I1011 10:56:58.701990 4771 scope.go:117] "RemoveContainer" containerID="40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b" Oct 11 10:56:58.702636 master-1 kubenswrapper[4771]: E1011 10:56:58.702577 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b\": container with ID starting with 
40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b not found: ID does not exist" containerID="40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b" Oct 11 10:56:58.702636 master-1 kubenswrapper[4771]: I1011 10:56:58.702615 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b"} err="failed to get container status \"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b\": rpc error: code = NotFound desc = could not find container \"40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b\": container with ID starting with 40e6f1404086cf0d3b630377374f5782adbaa5371b9d23d9249ebbb3d545f95b not found: ID does not exist" Oct 11 10:56:58.702636 master-1 kubenswrapper[4771]: I1011 10:56:58.702639 4771 scope.go:117] "RemoveContainer" containerID="5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a" Oct 11 10:56:58.703178 master-1 kubenswrapper[4771]: E1011 10:56:58.703149 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a\": container with ID starting with 5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a not found: ID does not exist" containerID="5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a" Oct 11 10:56:58.703178 master-1 kubenswrapper[4771]: I1011 10:56:58.703175 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a"} err="failed to get container status \"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a\": rpc error: code = NotFound desc = could not find container \"5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a\": container with ID starting with 
5d78c693491970b1a8eb3619842707db804737dd9e6aea89c13ff6875afaf87a not found: ID does not exist" Oct 11 10:56:58.848928 master-1 kubenswrapper[4771]: I1011 10:56:58.848823 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:58.849245 master-1 kubenswrapper[4771]: E1011 10:56:58.849187 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-central-agent" Oct 11 10:56:58.849245 master-1 kubenswrapper[4771]: I1011 10:56:58.849204 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-central-agent" Oct 11 10:56:58.849245 master-1 kubenswrapper[4771]: E1011 10:56:58.849227 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="sg-core" Oct 11 10:56:58.849245 master-1 kubenswrapper[4771]: I1011 10:56:58.849236 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="sg-core" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: E1011 10:56:58.849262 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="proxy-httpd" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849273 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="proxy-httpd" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: E1011 10:56:58.849291 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="dnsmasq-dns" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849300 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="dnsmasq-dns" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: E1011 10:56:58.849315 
4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="init" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849324 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="init" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: E1011 10:56:58.849336 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-notification-agent" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849346 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-notification-agent" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849535 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="proxy-httpd" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849563 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="sg-core" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849583 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-notification-agent" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849596 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2f3d22-d709-4602-bb25-2c17626b75f1" containerName="dnsmasq-dns" Oct 11 10:56:58.849618 master-1 kubenswrapper[4771]: I1011 10:56:58.849616 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" containerName="ceilometer-central-agent" Oct 11 10:56:58.851594 master-1 kubenswrapper[4771]: I1011 10:56:58.851548 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:56:58.853894 master-1 kubenswrapper[4771]: W1011 10:56:58.853826 4771 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: secrets "ceilometer-config-data" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object Oct 11 10:56:58.853894 master-1 kubenswrapper[4771]: E1011 10:56:58.853870 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-config-data\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError" Oct 11 10:56:58.854433 master-1 kubenswrapper[4771]: W1011 10:56:58.854306 4771 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: secrets "ceilometer-scripts" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object Oct 11 10:56:58.854433 master-1 kubenswrapper[4771]: E1011 10:56:58.854336 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ceilometer-scripts\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError" Oct 11 10:56:58.854708 master-1 kubenswrapper[4771]: W1011 10:56:58.854471 4771 reflector.go:561] object-"openstack"/"cert-ceilometer-internal-svc": failed to list *v1.Secret: secrets "cert-ceilometer-internal-svc" is forbidden: User 
"system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'master-1' and this object Oct 11 10:56:58.854708 master-1 kubenswrapper[4771]: E1011 10:56:58.854491 4771 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ceilometer-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ceilometer-internal-svc\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'master-1' and this object" logger="UnhandledError" Oct 11 10:56:58.910672 master-0 kubenswrapper[4790]: I1011 10:56:58.910573 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-2" Oct 11 10:56:58.910672 master-0 kubenswrapper[4790]: I1011 10:56:58.910662 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-2" Oct 11 10:56:58.978145 master-2 kubenswrapper[4776]: I1011 10:56:58.978064 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b"} Oct 11 10:56:59.005265 master-1 kubenswrapper[4771]: I1011 10:56:59.005080 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.005631 master-1 kubenswrapper[4771]: I1011 10:56:59.005448 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data\") pod 
\"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.005631 master-1 kubenswrapper[4771]: I1011 10:56:59.005534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.005631 master-1 kubenswrapper[4771]: I1011 10:56:59.005606 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.005886 master-1 kubenswrapper[4771]: I1011 10:56:59.005694 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.005966 master-1 kubenswrapper[4771]: I1011 10:56:59.005911 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.006187 master-1 kubenswrapper[4771]: I1011 10:56:59.006024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfvl\" (UniqueName: \"kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl\") pod \"ceilometer-0\" (UID: 
\"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.006187 master-1 kubenswrapper[4771]: I1011 10:56:59.006140 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.039465 master-1 kubenswrapper[4771]: I1011 10:56:59.039341 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:56:59.108617 master-1 kubenswrapper[4771]: I1011 10:56:59.108489 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.108617 master-1 kubenswrapper[4771]: I1011 10:56:59.108607 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109091 master-1 kubenswrapper[4771]: I1011 10:56:59.108813 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109091 master-1 kubenswrapper[4771]: I1011 10:56:59.108862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd\") pod \"ceilometer-0\" (UID: 
\"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109091 master-1 kubenswrapper[4771]: I1011 10:56:59.108915 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109091 master-1 kubenswrapper[4771]: I1011 10:56:59.108955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109091 master-1 kubenswrapper[4771]: I1011 10:56:59.109034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109705 master-1 kubenswrapper[4771]: I1011 10:56:59.109117 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtfvl\" (UniqueName: \"kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109705 master-1 kubenswrapper[4771]: I1011 10:56:59.109650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.109922 master-1 kubenswrapper[4771]: I1011 10:56:59.109755 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.113303 master-1 kubenswrapper[4771]: I1011 10:56:59.113222 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.138387 master-1 kubenswrapper[4771]: I1011 10:56:59.138278 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtfvl\" (UniqueName: \"kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:56:59.666400 master-2 kubenswrapper[4776]: I1011 10:56:59.666345 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:56:59.666631 master-2 kubenswrapper[4776]: I1011 10:56:59.666473 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:56:59.939852 master-0 kubenswrapper[4790]: I1011 10:56:59.935941 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.130.0.115:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:59.939852 master-0 kubenswrapper[4790]: I1011 10:56:59.936760 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.130.0.115:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:00.059488 master-1 kubenswrapper[4771]: I1011 10:57:00.059407 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 10:57:00.065497 master-1 kubenswrapper[4771]: I1011 10:57:00.065443 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:57:00.083640 master-1 kubenswrapper[4771]: I1011 10:57:00.083605 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:57:00.093122 master-1 kubenswrapper[4771]: I1011 10:57:00.093097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:57:00.094679 master-1 kubenswrapper[4771]: I1011 10:57:00.094628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:57:00.110479 master-1 kubenswrapper[4771]: E1011 10:57:00.108878 4771 secret.go:189] Couldn't get secret openstack/ceilometer-scripts: failed to sync secret cache: timed out waiting for the condition Oct 11 10:57:00.110479 master-1 kubenswrapper[4771]: E1011 10:57:00.108977 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts podName:9a279d25-518e-4a12-8b75-e3781fb22f05 nodeName:}" failed. No retries permitted until 2025-10-11 10:57:00.608950395 +0000 UTC m=+1852.583176866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts") pod "ceilometer-0" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05") : failed to sync secret cache: timed out waiting for the condition Oct 11 10:57:00.300331 master-1 kubenswrapper[4771]: I1011 10:57:00.300258 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:57:00.455873 master-1 kubenswrapper[4771]: I1011 10:57:00.455674 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9356157-35da-4cf7-a755-86123f5e09a0" path="/var/lib/kubelet/pods/e9356157-35da-4cf7-a755-86123f5e09a0/volumes" Oct 11 10:57:00.647619 master-1 kubenswrapper[4771]: I1011 10:57:00.647497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:57:00.653675 master-1 kubenswrapper[4771]: I1011 10:57:00.653608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") pod \"ceilometer-0\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " pod="openstack/ceilometer-0" Oct 11 10:57:00.672456 master-1 kubenswrapper[4771]: I1011 10:57:00.672390 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:00.747939 master-2 kubenswrapper[4776]: I1011 10:57:00.747870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:00.748543 master-2 kubenswrapper[4776]: I1011 10:57:00.748202 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:00.899404 master-1 kubenswrapper[4771]: E1011 10:57:00.898832 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f67f5f4f9f37aac275f10afbe09a8c9fa57c1952c80ddcea69a513171faa9df7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f67f5f4f9f37aac275f10afbe09a8c9fa57c1952c80ddcea69a513171faa9df7/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/ceilometer-central-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/ceilometer-central-agent/0.log: no such file or directory Oct 11 10:57:01.158670 master-1 kubenswrapper[4771]: I1011 10:57:01.158624 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:01.176577 master-1 kubenswrapper[4771]: I1011 10:57:01.176545 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:57:01.216388 master-0 kubenswrapper[4790]: I1011 10:57:01.216168 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-scheduler-2" Oct 11 10:57:01.247629 master-0 kubenswrapper[4790]: I1011 10:57:01.247560 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-2" Oct 11 10:57:01.631067 master-1 kubenswrapper[4771]: I1011 10:57:01.630994 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerStarted","Data":"26bb9eb9589e069594c3ca946945eba89d16e626b603cd344d3ddae2ac0f7ded"} Oct 11 10:57:01.807742 master-1 kubenswrapper[4771]: E1011 10:57:01.807673 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/55955291c1ae6b17a0bb7ff47fc06e286ff1749fa260fead2a93c54d085cacc9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/55955291c1ae6b17a0bb7ff47fc06e286ff1749fa260fead2a93c54d085cacc9/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/ceilometer-notification-agent/0.log: no such file or directory Oct 11 10:57:01.889914 master-0 kubenswrapper[4790]: I1011 10:57:01.889837 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-2" Oct 11 10:57:01.942741 master-1 kubenswrapper[4771]: I1011 10:57:01.942635 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:01.943035 master-1 kubenswrapper[4771]: I1011 10:57:01.942895 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-1" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" containerID="cri-o://a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" gracePeriod=30 Oct 
11 10:57:02.002017 master-2 kubenswrapper[4776]: I1011 10:57:02.001960 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9"} Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002107 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" containerID="cri-o://e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002531 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" containerID="cri-o://b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002575 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" containerID="cri-o://37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002613 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" containerID="cri-o://326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" gracePeriod=30 Oct 11 10:57:02.038099 master-2 kubenswrapper[4776]: I1011 10:57:02.038013 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.540885194 podStartE2EDuration="14.037995912s" podCreationTimestamp="2025-10-11 10:56:48 +0000 UTC" 
firstStartedPulling="2025-10-11 10:56:50.526443417 +0000 UTC m=+1845.310870126" lastFinishedPulling="2025-10-11 10:57:01.023554135 +0000 UTC m=+1855.807980844" observedRunningTime="2025-10-11 10:57:02.037087326 +0000 UTC m=+1856.821514035" watchObservedRunningTime="2025-10-11 10:57:02.037995912 +0000 UTC m=+1856.822422631" Oct 11 10:57:02.488379 master-1 kubenswrapper[4771]: E1011 10:57:02.487702 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b6dfdc9be6dfc3040d4a1da475e8e872f701d8009c5039d372318d1f45772fb0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b6dfdc9be6dfc3040d4a1da475e8e872f701d8009c5039d372318d1f45772fb0/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/sg-core/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/sg-core/0.log: no such file or directory Oct 11 10:57:02.589768 master-1 kubenswrapper[4771]: E1011 10:57:02.589700 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:02.591485 master-1 kubenswrapper[4771]: E1011 10:57:02.591390 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:02.592772 master-1 kubenswrapper[4771]: E1011 10:57:02.592724 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:02.592898 master-1 kubenswrapper[4771]: E1011 10:57:02.592771 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-1" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" Oct 11 10:57:02.642581 master-1 kubenswrapper[4771]: I1011 10:57:02.642502 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerStarted","Data":"dd5ef923fe19c8499d59b13d67599b3bf99730453e82ff9eb110343bd5333a66"} Oct 11 10:57:02.642581 master-1 kubenswrapper[4771]: I1011 10:57:02.642560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerStarted","Data":"329f7fd8063c9aaba98ccaa627d41506880615f2c94e9a537c2fbe8e686bc8be"} Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 10:57:03.014369 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" exitCode=0 Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 10:57:03.014418 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" exitCode=0 Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 10:57:03.014442 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253"} Oct 11 10:57:03.015319 master-2 kubenswrapper[4776]: I1011 10:57:03.014474 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01"} Oct 11 10:57:03.660016 master-1 kubenswrapper[4771]: I1011 10:57:03.659908 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerStarted","Data":"06b1b45211e1a6439784f737849060952f6c89a6f37d2a3b15906929efc29bff"} Oct 11 10:57:04.026091 master-2 kubenswrapper[4776]: I1011 10:57:04.026028 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" exitCode=0 Oct 11 10:57:04.026091 master-2 kubenswrapper[4776]: I1011 10:57:04.026081 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b"} Oct 11 10:57:04.212430 master-1 kubenswrapper[4771]: E1011 10:57:04.212325 4771 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ff4f35cc08d7b02affab5f4eeaf66f8103ae06a636fc32c1be940b7334c4f803/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ff4f35cc08d7b02affab5f4eeaf66f8103ae06a636fc32c1be940b7334c4f803/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/proxy-httpd/0.log" to get inode usage: stat 
/var/log/pods/openstack_ceilometer-0_d0cc5394-b33f-41a9-bbe2-d772e75a8f58/proxy-httpd/0.log: no such file or directory Oct 11 10:57:05.379846 master-1 kubenswrapper[4771]: W1011 10:57:05.379765 4771 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9356157_35da_4cf7_a755_86123f5e09a0.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9356157_35da_4cf7_a755_86123f5e09a0.slice: no such file or directory Oct 11 10:57:05.394900 master-1 kubenswrapper[4771]: E1011 10:57:05.394758 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.395199 master-1 kubenswrapper[4771]: 
E1011 10:57:05.394942 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.395313 master-1 kubenswrapper[4771]: E1011 10:57:05.394947 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.395453 master-1 kubenswrapper[4771]: E1011 10:57:05.395183 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.395943 master-1 kubenswrapper[4771]: E1011 10:57:05.395767 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602a8d3a_2ca2_43d2_8def_5718d9baf2ee.slice/crio-conmon-23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.397728 master-1 kubenswrapper[4771]: E1011 10:57:05.397649 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice/crio-792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.399887 master-1 kubenswrapper[4771]: E1011 10:57:05.399795 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice/crio-792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.401956 master-1 kubenswrapper[4771]: E1011 10:57:05.401870 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-conmon-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-conmon-2c5f1db8917c8af20f15f8f5c86b116c03c3cf84afbea6d406851b9dc2d31536.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-6adea3e5f64531e0c86b74d225c5177d037bfd577bffef9095ea2e44e640d111.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-505e62311b6ed5c9ff0953d8d85544302f51dd00d4e8b39726a1f15af9f39dfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-dfcca484b1ef8ab38e804d6ef394f26fc64c3bf3d7f1246b3ca103ffc5677a68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-3a34a73cdd7c1cec4079fa29156740bcbf8771fe95c00057f168b2498eac713d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.405319 master-1 kubenswrapper[4771]: E1011 10:57:05.405199 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5e37e3_9afd_4ff3_b992_1e6c28a986ad.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5e37e3_9afd_4ff3_b992_1e6c28a986ad.slice/crio-8225c71dadd5dd5d7cdb7b603f12129b97565dbfb98b6f8553a5f73b645e62cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe7833d_9251_4545_ba68_f58c146188f1.slice/crio-fa2bc31890cd28c5ae042f31a82dc462c8d6a57c6fbf10c4706aaa08a519f43e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice/crio-792c6e93eebb0025120939bb299c7a87876ec3dbbd22c047c0d886532fa269ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice/crio-4b4427ca1aac1d5de2fce6df6d3c919384d14bf07e538280903548b74f73e344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice/crio-c62163629aca98833fc3692d5fe6b9a44972b443acaee8cb73af5daad3f74fd0\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602a8d3a_2ca2_43d2_8def_5718d9baf2ee.slice/crio-conmon-23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc5394_b33f_41a9_bbe2_d772e75a8f58.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d5cfa_8073_4bbf_9eff_78fde719dadf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de492fb_5249_49e2_a327_756234aa92bd.slice\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:05.694805 master-1 kubenswrapper[4771]: I1011 10:57:05.694683 4771 generic.go:334] "Generic (PLEG): container finished" podID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" containerID="23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547" exitCode=137 Oct 11 10:57:05.694805 master-1 kubenswrapper[4771]: I1011 10:57:05.694767 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"602a8d3a-2ca2-43d2-8def-5718d9baf2ee","Type":"ContainerDied","Data":"23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547"} Oct 11 10:57:05.700305 master-1 kubenswrapper[4771]: I1011 10:57:05.696582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerStarted","Data":"1856a76e1ae1c62aaea063758c5bd874b8a3b66130c9fe8eb095d5cef903001f"} Oct 11 10:57:05.700305 master-1 kubenswrapper[4771]: I1011 10:57:05.696872 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 10:57:05.856834 master-1 kubenswrapper[4771]: I1011 10:57:05.856781 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:05.883408 master-1 kubenswrapper[4771]: I1011 10:57:05.883316 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.420750834 podStartE2EDuration="7.883296042s" podCreationTimestamp="2025-10-11 10:56:58 +0000 UTC" firstStartedPulling="2025-10-11 10:57:01.176497738 +0000 UTC m=+1853.150724189" lastFinishedPulling="2025-10-11 10:57:04.639042946 +0000 UTC m=+1856.613269397" observedRunningTime="2025-10-11 10:57:05.758183273 +0000 UTC m=+1857.732409714" watchObservedRunningTime="2025-10-11 10:57:05.883296042 +0000 UTC m=+1857.857522483" Oct 11 10:57:05.889435 master-1 kubenswrapper[4771]: I1011 10:57:05.889391 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data\") pod \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " Oct 11 10:57:05.889608 master-1 kubenswrapper[4771]: I1011 10:57:05.889455 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle\") pod \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " Oct 11 10:57:05.889760 master-1 kubenswrapper[4771]: I1011 10:57:05.889730 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zqd\" (UniqueName: \"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd\") pod \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\" (UID: \"602a8d3a-2ca2-43d2-8def-5718d9baf2ee\") " Oct 11 10:57:05.901438 master-1 kubenswrapper[4771]: I1011 10:57:05.901348 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd" (OuterVolumeSpecName: "kube-api-access-g5zqd") pod "602a8d3a-2ca2-43d2-8def-5718d9baf2ee" (UID: "602a8d3a-2ca2-43d2-8def-5718d9baf2ee"). InnerVolumeSpecName "kube-api-access-g5zqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:05.913396 master-1 kubenswrapper[4771]: I1011 10:57:05.913325 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data" (OuterVolumeSpecName: "config-data") pod "602a8d3a-2ca2-43d2-8def-5718d9baf2ee" (UID: "602a8d3a-2ca2-43d2-8def-5718d9baf2ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:05.913535 master-1 kubenswrapper[4771]: I1011 10:57:05.913490 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602a8d3a-2ca2-43d2-8def-5718d9baf2ee" (UID: "602a8d3a-2ca2-43d2-8def-5718d9baf2ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:05.991196 master-1 kubenswrapper[4771]: I1011 10:57:05.991134 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zqd\" (UniqueName: \"kubernetes.io/projected/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-kube-api-access-g5zqd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:05.991196 master-1 kubenswrapper[4771]: I1011 10:57:05.991178 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:05.991196 master-1 kubenswrapper[4771]: I1011 10:57:05.991190 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602a8d3a-2ca2-43d2-8def-5718d9baf2ee-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:06.720497 master-1 kubenswrapper[4771]: I1011 10:57:06.720438 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"602a8d3a-2ca2-43d2-8def-5718d9baf2ee","Type":"ContainerDied","Data":"9b7f7d7c0af1b640b51f7a6b8a5687c5423669e8ea192915f2e34e079daaef17"} Oct 11 10:57:06.721004 master-1 kubenswrapper[4771]: I1011 10:57:06.720508 4771 scope.go:117] "RemoveContainer" containerID="23ed8d2300382ac408577223ca4d96c6b722a1c94b3187394d9ec21991883547" Oct 11 10:57:06.721004 master-1 kubenswrapper[4771]: I1011 10:57:06.720693 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.750206 master-1 kubenswrapper[4771]: I1011 10:57:06.750108 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:57:06.760732 master-1 kubenswrapper[4771]: I1011 10:57:06.759281 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:57:06.798249 master-1 kubenswrapper[4771]: I1011 10:57:06.798147 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:57:06.798640 master-1 kubenswrapper[4771]: E1011 10:57:06.798617 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 10:57:06.798687 master-1 kubenswrapper[4771]: I1011 10:57:06.798639 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 10:57:06.798998 master-1 kubenswrapper[4771]: I1011 10:57:06.798842 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 10:57:06.799732 master-1 kubenswrapper[4771]: I1011 10:57:06.799706 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.802712 master-1 kubenswrapper[4771]: I1011 10:57:06.802674 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 11 10:57:06.803575 master-1 kubenswrapper[4771]: I1011 10:57:06.802970 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 11 10:57:06.803575 master-1 kubenswrapper[4771]: I1011 10:57:06.803107 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 11 10:57:06.806537 master-1 kubenswrapper[4771]: I1011 10:57:06.806403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45l2x\" (UniqueName: \"kubernetes.io/projected/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-kube-api-access-45l2x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.806537 master-1 kubenswrapper[4771]: I1011 10:57:06.806474 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.806537 master-1 kubenswrapper[4771]: I1011 10:57:06.806542 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.806686 master-1 kubenswrapper[4771]: I1011 10:57:06.806565 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.806686 master-1 kubenswrapper[4771]: I1011 10:57:06.806587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.816719 master-1 kubenswrapper[4771]: I1011 10:57:06.816639 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:57:06.908803 master-1 kubenswrapper[4771]: I1011 10:57:06.908719 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45l2x\" (UniqueName: \"kubernetes.io/projected/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-kube-api-access-45l2x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.909227 master-1 kubenswrapper[4771]: I1011 10:57:06.908841 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.909227 master-1 kubenswrapper[4771]: I1011 10:57:06.908956 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.909227 master-1 kubenswrapper[4771]: I1011 10:57:06.908984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.909227 master-1 kubenswrapper[4771]: I1011 10:57:06.909011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.912855 master-1 kubenswrapper[4771]: I1011 10:57:06.912801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.915203 master-1 kubenswrapper[4771]: I1011 10:57:06.915162 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.915898 master-1 kubenswrapper[4771]: I1011 10:57:06.915855 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.915965 master-1 kubenswrapper[4771]: I1011 10:57:06.915895 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:06.935055 master-1 kubenswrapper[4771]: I1011 10:57:06.934986 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45l2x\" (UniqueName: \"kubernetes.io/projected/1c2ae850-015d-40bc-8af0-b47b9bb6f46b-kube-api-access-45l2x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c2ae850-015d-40bc-8af0-b47b9bb6f46b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:07.115691 master-1 kubenswrapper[4771]: I1011 10:57:07.115599 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:07.575593 master-1 kubenswrapper[4771]: I1011 10:57:07.575536 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 10:57:07.581918 master-1 kubenswrapper[4771]: W1011 10:57:07.580759 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2ae850_015d_40bc_8af0_b47b9bb6f46b.slice/crio-31ee81c7ef7c26bafc8de05589408cca2a0aa069e0c298182198521073763c3f WatchSource:0}: Error finding container 31ee81c7ef7c26bafc8de05589408cca2a0aa069e0c298182198521073763c3f: Status 404 returned error can't find the container with id 31ee81c7ef7c26bafc8de05589408cca2a0aa069e0c298182198521073763c3f Oct 11 10:57:07.581918 master-1 kubenswrapper[4771]: E1011 10:57:07.581515 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 is running failed: container process not found" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:07.581918 master-1 kubenswrapper[4771]: E1011 10:57:07.581834 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 is running failed: container process not found" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:07.582756 master-1 kubenswrapper[4771]: E1011 10:57:07.582348 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 is running failed: container process not found" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:07.582756 master-1 kubenswrapper[4771]: E1011 10:57:07.582425 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-1" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" Oct 11 10:57:07.623164 master-1 kubenswrapper[4771]: I1011 10:57:07.623127 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:07.730091 master-1 kubenswrapper[4771]: I1011 10:57:07.730020 4771 generic.go:334] "Generic (PLEG): container finished" podID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" exitCode=0 Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.730106 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.730131 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4619bcb1-090e-4824-adfe-6a526158d0ea","Type":"ContainerDied","Data":"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8"} Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.730162 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4619bcb1-090e-4824-adfe-6a526158d0ea","Type":"ContainerDied","Data":"2855d39e0600653f4ca98e1b1c4a631cd2cea811da489ab4ee595433738a99d4"} Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.730181 4771 scope.go:117] "RemoveContainer" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.733310 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c2ae850-015d-40bc-8af0-b47b9bb6f46b","Type":"ContainerStarted","Data":"31ee81c7ef7c26bafc8de05589408cca2a0aa069e0c298182198521073763c3f"} Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.739151 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle\") pod \"4619bcb1-090e-4824-adfe-6a526158d0ea\" (UID: 
\"4619bcb1-090e-4824-adfe-6a526158d0ea\") " Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.739298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data\") pod \"4619bcb1-090e-4824-adfe-6a526158d0ea\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.739393 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srjhk\" (UniqueName: \"kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk\") pod \"4619bcb1-090e-4824-adfe-6a526158d0ea\" (UID: \"4619bcb1-090e-4824-adfe-6a526158d0ea\") " Oct 11 10:57:07.743729 master-1 kubenswrapper[4771]: I1011 10:57:07.742229 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk" (OuterVolumeSpecName: "kube-api-access-srjhk") pod "4619bcb1-090e-4824-adfe-6a526158d0ea" (UID: "4619bcb1-090e-4824-adfe-6a526158d0ea"). InnerVolumeSpecName "kube-api-access-srjhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:07.752228 master-1 kubenswrapper[4771]: I1011 10:57:07.752187 4771 scope.go:117] "RemoveContainer" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" Oct 11 10:57:07.752698 master-1 kubenswrapper[4771]: E1011 10:57:07.752665 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8\": container with ID starting with a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 not found: ID does not exist" containerID="a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8" Oct 11 10:57:07.752748 master-1 kubenswrapper[4771]: I1011 10:57:07.752701 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8"} err="failed to get container status \"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8\": rpc error: code = NotFound desc = could not find container \"a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8\": container with ID starting with a23248db23cc07663daa71fc11e05668a4c12da23548369d411cb7d7393beab8 not found: ID does not exist" Oct 11 10:57:07.764892 master-1 kubenswrapper[4771]: I1011 10:57:07.764829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data" (OuterVolumeSpecName: "config-data") pod "4619bcb1-090e-4824-adfe-6a526158d0ea" (UID: "4619bcb1-090e-4824-adfe-6a526158d0ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:07.767738 master-1 kubenswrapper[4771]: I1011 10:57:07.767689 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4619bcb1-090e-4824-adfe-6a526158d0ea" (UID: "4619bcb1-090e-4824-adfe-6a526158d0ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:07.841509 master-1 kubenswrapper[4771]: I1011 10:57:07.841430 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srjhk\" (UniqueName: \"kubernetes.io/projected/4619bcb1-090e-4824-adfe-6a526158d0ea-kube-api-access-srjhk\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:07.841509 master-1 kubenswrapper[4771]: I1011 10:57:07.841478 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:07.841509 master-1 kubenswrapper[4771]: I1011 10:57:07.841489 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619bcb1-090e-4824-adfe-6a526158d0ea-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:08.068957 master-1 kubenswrapper[4771]: I1011 10:57:08.068877 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:08.074267 master-1 kubenswrapper[4771]: I1011 10:57:08.074214 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:08.106060 master-1 kubenswrapper[4771]: I1011 10:57:08.105990 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:08.106470 master-1 kubenswrapper[4771]: E1011 10:57:08.106441 4771 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" Oct 11 10:57:08.106470 master-1 kubenswrapper[4771]: I1011 10:57:08.106464 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" Oct 11 10:57:08.106683 master-1 kubenswrapper[4771]: I1011 10:57:08.106648 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" containerName="nova-scheduler-scheduler" Oct 11 10:57:08.107596 master-1 kubenswrapper[4771]: I1011 10:57:08.107566 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:08.110454 master-1 kubenswrapper[4771]: I1011 10:57:08.110406 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:57:08.129974 master-1 kubenswrapper[4771]: I1011 10:57:08.129907 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:08.247442 master-1 kubenswrapper[4771]: I1011 10:57:08.247212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.247442 master-1 kubenswrapper[4771]: I1011 10:57:08.247275 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5m8\" (UniqueName: \"kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.247683 master-1 kubenswrapper[4771]: I1011 10:57:08.247534 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.350374 master-1 kubenswrapper[4771]: I1011 10:57:08.350270 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.350635 master-1 kubenswrapper[4771]: I1011 10:57:08.350533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.350635 master-1 kubenswrapper[4771]: I1011 10:57:08.350581 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5m8\" (UniqueName: \"kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.376177 master-1 kubenswrapper[4771]: I1011 10:57:08.376115 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.376470 master-1 kubenswrapper[4771]: I1011 10:57:08.376426 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.382252 master-1 kubenswrapper[4771]: I1011 10:57:08.381703 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5m8\" (UniqueName: \"kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8\") pod \"nova-scheduler-1\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:08.435158 master-1 kubenswrapper[4771]: I1011 10:57:08.435053 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:08.454316 master-1 kubenswrapper[4771]: I1011 10:57:08.454254 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4619bcb1-090e-4824-adfe-6a526158d0ea" path="/var/lib/kubelet/pods/4619bcb1-090e-4824-adfe-6a526158d0ea/volumes" Oct 11 10:57:08.459223 master-1 kubenswrapper[4771]: I1011 10:57:08.459161 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602a8d3a-2ca2-43d2-8def-5718d9baf2ee" path="/var/lib/kubelet/pods/602a8d3a-2ca2-43d2-8def-5718d9baf2ee/volumes" Oct 11 10:57:08.745724 master-1 kubenswrapper[4771]: I1011 10:57:08.745619 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c2ae850-015d-40bc-8af0-b47b9bb6f46b","Type":"ContainerStarted","Data":"6a8f41b531ca846c95fe153a07f06cfef2fda05873909ec82bf0978ab3366378"} Oct 11 10:57:08.783938 master-1 kubenswrapper[4771]: I1011 10:57:08.779648 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.779627337 podStartE2EDuration="2.779627337s" podCreationTimestamp="2025-10-11 10:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-11 10:57:08.772579761 +0000 UTC m=+1860.746806222" watchObservedRunningTime="2025-10-11 10:57:08.779627337 +0000 UTC m=+1860.753853788" Oct 11 10:57:08.919528 master-0 kubenswrapper[4790]: I1011 10:57:08.919443 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-2" Oct 11 10:57:08.925181 master-0 kubenswrapper[4790]: I1011 10:57:08.923392 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-2" Oct 11 10:57:08.931419 master-0 kubenswrapper[4790]: I1011 10:57:08.931229 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-2" Oct 11 10:57:08.969821 master-0 kubenswrapper[4790]: I1011 10:57:08.962806 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-2" Oct 11 10:57:08.990768 master-1 kubenswrapper[4771]: W1011 10:57:08.990690 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82101e85_023a_4398_bb5e_4162dea69f46.slice/crio-344c84a1c3165ebb54446b68471bee6244a2b1504eca4e8fa46ae99da6e9b301 WatchSource:0}: Error finding container 344c84a1c3165ebb54446b68471bee6244a2b1504eca4e8fa46ae99da6e9b301: Status 404 returned error can't find the container with id 344c84a1c3165ebb54446b68471bee6244a2b1504eca4e8fa46ae99da6e9b301 Oct 11 10:57:08.993372 master-1 kubenswrapper[4771]: I1011 10:57:08.992946 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:09.049379 master-1 kubenswrapper[4771]: I1011 10:57:09.049269 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:09.050160 master-1 kubenswrapper[4771]: I1011 10:57:09.050060 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" 
podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-metadata" containerID="cri-o://cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae" gracePeriod=30 Oct 11 10:57:09.071550 master-1 kubenswrapper[4771]: I1011 10:57:09.063724 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-log" containerID="cri-o://10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f" gracePeriod=30 Oct 11 10:57:09.670577 master-2 kubenswrapper[4776]: I1011 10:57:09.670435 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:09.671193 master-2 kubenswrapper[4776]: I1011 10:57:09.670991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:09.673155 master-2 kubenswrapper[4776]: I1011 10:57:09.673104 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:09.673510 master-2 kubenswrapper[4776]: I1011 10:57:09.673432 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:09.762506 master-1 kubenswrapper[4771]: I1011 10:57:09.762433 4771 generic.go:334] "Generic (PLEG): container finished" podID="1596746b-25ca-487a-9e49-93e532f2838b" containerID="10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f" exitCode=143 Oct 11 10:57:09.763424 master-1 kubenswrapper[4771]: I1011 10:57:09.762530 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerDied","Data":"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f"} Oct 11 10:57:09.764531 master-1 kubenswrapper[4771]: I1011 10:57:09.764486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" 
event={"ID":"82101e85-023a-4398-bb5e-4162dea69f46","Type":"ContainerStarted","Data":"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc"} Oct 11 10:57:09.764698 master-1 kubenswrapper[4771]: I1011 10:57:09.764600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"82101e85-023a-4398-bb5e-4162dea69f46","Type":"ContainerStarted","Data":"344c84a1c3165ebb54446b68471bee6244a2b1504eca4e8fa46ae99da6e9b301"} Oct 11 10:57:09.811799 master-1 kubenswrapper[4771]: I1011 10:57:09.811716 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" podStartSLOduration=1.811689355 podStartE2EDuration="1.811689355s" podCreationTimestamp="2025-10-11 10:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:09.793460363 +0000 UTC m=+1861.767686864" watchObservedRunningTime="2025-10-11 10:57:09.811689355 +0000 UTC m=+1861.785915796" Oct 11 10:57:10.076533 master-2 kubenswrapper[4776]: I1011 10:57:10.076380 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:10.079325 master-2 kubenswrapper[4776]: I1011 10:57:10.079280 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:10.139530 master-0 kubenswrapper[4790]: I1011 10:57:10.139457 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:10.140254 master-0 kubenswrapper[4790]: I1011 10:57:10.139798 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-1" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-log" containerID="cri-o://98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4" gracePeriod=30 Oct 11 10:57:10.140254 master-0 kubenswrapper[4790]: I1011 10:57:10.140010 4790 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-1" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-api" containerID="cri-o://b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83" gracePeriod=30 Oct 11 10:57:10.370045 master-0 kubenswrapper[4790]: I1011 10:57:10.369967 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"] Oct 11 10:57:10.375793 master-0 kubenswrapper[4790]: I1011 10:57:10.372109 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.376646 master-0 kubenswrapper[4790]: I1011 10:57:10.376563 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:57:10.377957 master-0 kubenswrapper[4790]: I1011 10:57:10.377739 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:57:10.378113 master-0 kubenswrapper[4790]: I1011 10:57:10.377990 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:57:10.378248 master-0 kubenswrapper[4790]: I1011 10:57:10.378185 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:57:10.378555 master-0 kubenswrapper[4790]: I1011 10:57:10.378490 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:57:10.385924 master-0 kubenswrapper[4790]: I1011 10:57:10.385878 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"] Oct 11 10:57:10.467943 master-0 kubenswrapper[4790]: I1011 10:57:10.467601 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.467943 master-0 kubenswrapper[4790]: I1011 10:57:10.467674 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srbk\" (UniqueName: \"kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.468234 master-0 kubenswrapper[4790]: I1011 10:57:10.467997 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.468234 master-0 kubenswrapper[4790]: I1011 10:57:10.468086 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.468234 master-0 kubenswrapper[4790]: I1011 10:57:10.468121 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.468947 master-0 kubenswrapper[4790]: I1011 10:57:10.468510 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571249 master-0 kubenswrapper[4790]: I1011 10:57:10.571147 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571548 master-0 kubenswrapper[4790]: I1011 10:57:10.571410 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571548 master-0 kubenswrapper[4790]: I1011 10:57:10.571450 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571548 master-0 kubenswrapper[4790]: I1011 10:57:10.571545 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571770 master-0 kubenswrapper[4790]: I1011 10:57:10.571582 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4srbk\" 
(UniqueName: \"kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.571770 master-0 kubenswrapper[4790]: I1011 10:57:10.571606 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.573088 master-0 kubenswrapper[4790]: I1011 10:57:10.573026 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.573185 master-0 kubenswrapper[4790]: I1011 10:57:10.573145 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.573286 master-0 kubenswrapper[4790]: I1011 10:57:10.573145 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.573394 master-0 kubenswrapper[4790]: I1011 10:57:10.573347 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.574441 master-0 kubenswrapper[4790]: I1011 10:57:10.574366 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.595529 master-0 kubenswrapper[4790]: I1011 10:57:10.595190 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4srbk\" (UniqueName: \"kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk\") pod \"dnsmasq-dns-6cb9b8c955-g7qzg\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") " pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.694842 master-0 kubenswrapper[4790]: I1011 10:57:10.694780 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:10.974928 master-0 kubenswrapper[4790]: I1011 10:57:10.974822 4790 generic.go:334] "Generic (PLEG): container finished" podID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerID="98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4" exitCode=143 Oct 11 10:57:10.975555 master-0 kubenswrapper[4790]: I1011 10:57:10.974933 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerDied","Data":"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4"} Oct 11 10:57:11.281455 master-0 kubenswrapper[4790]: I1011 10:57:11.281373 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"] Oct 11 10:57:11.988789 master-0 kubenswrapper[4790]: I1011 10:57:11.988718 4790 generic.go:334] "Generic (PLEG): container finished" podID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerID="148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079" exitCode=0 Oct 11 10:57:11.988789 master-0 kubenswrapper[4790]: I1011 10:57:11.988775 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" event={"ID":"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c","Type":"ContainerDied","Data":"148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079"} Oct 11 10:57:11.989404 master-0 kubenswrapper[4790]: I1011 10:57:11.988812 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" event={"ID":"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c","Type":"ContainerStarted","Data":"82ab564208fa75b6a416368edc3991b9aae0b1bdbf1f7ab61745c571e8067316"} Oct 11 10:57:12.117315 master-1 kubenswrapper[4771]: I1011 10:57:12.117224 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:12.118146 master-1 kubenswrapper[4771]: I1011 10:57:12.118093 4771 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:12.118288 master-1 kubenswrapper[4771]: I1011 10:57:12.118218 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-central-agent" containerID="cri-o://329f7fd8063c9aaba98ccaa627d41506880615f2c94e9a537c2fbe8e686bc8be" gracePeriod=30 Oct 11 10:57:12.118412 master-1 kubenswrapper[4771]: I1011 10:57:12.118385 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-notification-agent" containerID="cri-o://dd5ef923fe19c8499d59b13d67599b3bf99730453e82ff9eb110343bd5333a66" gracePeriod=30 Oct 11 10:57:12.118480 master-1 kubenswrapper[4771]: I1011 10:57:12.118433 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="sg-core" containerID="cri-o://06b1b45211e1a6439784f737849060952f6c89a6f37d2a3b15906929efc29bff" gracePeriod=30 Oct 11 10:57:12.118528 master-1 kubenswrapper[4771]: I1011 10:57:12.118444 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="proxy-httpd" containerID="cri-o://1856a76e1ae1c62aaea063758c5bd874b8a3b66130c9fe8eb095d5cef903001f" gracePeriod=30 Oct 11 10:57:12.764409 master-1 kubenswrapper[4771]: I1011 10:57:12.764019 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812502 4771 generic.go:334] "Generic (PLEG): container finished" podID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerID="1856a76e1ae1c62aaea063758c5bd874b8a3b66130c9fe8eb095d5cef903001f" exitCode=0 Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812549 4771 generic.go:334] "Generic (PLEG): container finished" podID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerID="06b1b45211e1a6439784f737849060952f6c89a6f37d2a3b15906929efc29bff" exitCode=2 Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812559 4771 generic.go:334] "Generic (PLEG): container finished" podID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerID="dd5ef923fe19c8499d59b13d67599b3bf99730453e82ff9eb110343bd5333a66" exitCode=0 Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812568 4771 generic.go:334] "Generic (PLEG): container finished" podID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerID="329f7fd8063c9aaba98ccaa627d41506880615f2c94e9a537c2fbe8e686bc8be" exitCode=0 Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812566 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerDied","Data":"1856a76e1ae1c62aaea063758c5bd874b8a3b66130c9fe8eb095d5cef903001f"} Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerDied","Data":"06b1b45211e1a6439784f737849060952f6c89a6f37d2a3b15906929efc29bff"} Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerDied","Data":"dd5ef923fe19c8499d59b13d67599b3bf99730453e82ff9eb110343bd5333a66"} Oct 11 10:57:12.812678 master-1 kubenswrapper[4771]: I1011 10:57:12.812669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerDied","Data":"329f7fd8063c9aaba98ccaa627d41506880615f2c94e9a537c2fbe8e686bc8be"} Oct 11 10:57:12.817951 master-1 kubenswrapper[4771]: I1011 10:57:12.817803 4771 generic.go:334] "Generic (PLEG): container finished" podID="1596746b-25ca-487a-9e49-93e532f2838b" containerID="cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae" exitCode=0 Oct 11 10:57:12.817951 master-1 kubenswrapper[4771]: I1011 10:57:12.817863 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerDied","Data":"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae"} Oct 11 10:57:12.817951 master-1 kubenswrapper[4771]: I1011 10:57:12.817889 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:57:12.817951 master-1 kubenswrapper[4771]: I1011 10:57:12.817907 4771 scope.go:117] "RemoveContainer" containerID="cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae" Oct 11 10:57:12.818184 master-1 kubenswrapper[4771]: I1011 10:57:12.817891 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1596746b-25ca-487a-9e49-93e532f2838b","Type":"ContainerDied","Data":"089dbb4c74bf7c97de17802c964262c34b58cf3f278035cc8b343f4df54f1f61"} Oct 11 10:57:12.872556 master-1 kubenswrapper[4771]: I1011 10:57:12.872507 4771 scope.go:117] "RemoveContainer" containerID="10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f" Oct 11 10:57:12.885130 master-1 kubenswrapper[4771]: I1011 10:57:12.885064 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data\") pod \"1596746b-25ca-487a-9e49-93e532f2838b\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " Oct 11 10:57:12.885252 master-1 kubenswrapper[4771]: I1011 10:57:12.885209 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle\") pod \"1596746b-25ca-487a-9e49-93e532f2838b\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " Oct 11 10:57:12.885317 master-1 kubenswrapper[4771]: I1011 10:57:12.885252 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scqmr\" (UniqueName: \"kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr\") pod \"1596746b-25ca-487a-9e49-93e532f2838b\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " Oct 11 10:57:12.885495 master-1 kubenswrapper[4771]: I1011 10:57:12.885444 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs\") pod \"1596746b-25ca-487a-9e49-93e532f2838b\" (UID: \"1596746b-25ca-487a-9e49-93e532f2838b\") " Oct 11 10:57:12.886713 master-1 kubenswrapper[4771]: I1011 10:57:12.886669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs" (OuterVolumeSpecName: "logs") pod "1596746b-25ca-487a-9e49-93e532f2838b" (UID: "1596746b-25ca-487a-9e49-93e532f2838b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:12.889880 master-1 kubenswrapper[4771]: I1011 10:57:12.889508 4771 scope.go:117] "RemoveContainer" containerID="cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae" Oct 11 10:57:12.890621 master-1 kubenswrapper[4771]: I1011 10:57:12.889981 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr" (OuterVolumeSpecName: "kube-api-access-scqmr") pod "1596746b-25ca-487a-9e49-93e532f2838b" (UID: "1596746b-25ca-487a-9e49-93e532f2838b"). InnerVolumeSpecName "kube-api-access-scqmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:12.890621 master-1 kubenswrapper[4771]: E1011 10:57:12.889988 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae\": container with ID starting with cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae not found: ID does not exist" containerID="cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae" Oct 11 10:57:12.890621 master-1 kubenswrapper[4771]: I1011 10:57:12.890071 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae"} err="failed to get container status \"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae\": rpc error: code = NotFound desc = could not find container \"cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae\": container with ID starting with cc430b45b80be084cd02e8ad7b476be42a8b108197f68c496397cf28d0b094ae not found: ID does not exist" Oct 11 10:57:12.890621 master-1 kubenswrapper[4771]: I1011 10:57:12.890113 4771 scope.go:117] "RemoveContainer" containerID="10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f" Oct 11 10:57:12.890855 master-1 kubenswrapper[4771]: E1011 10:57:12.890626 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f\": container with ID starting with 10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f not found: ID does not exist" containerID="10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f" Oct 11 10:57:12.890855 master-1 kubenswrapper[4771]: I1011 10:57:12.890676 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f"} err="failed to get container status \"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f\": rpc error: code = NotFound desc = could not find container \"10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f\": container with ID starting with 10394d9815d51c3e2ede873c2bea5e3b152ea84d8b7c0cf8df31f7c51efb349f not found: ID does not exist" Oct 11 10:57:12.919413 master-1 kubenswrapper[4771]: I1011 10:57:12.914739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1596746b-25ca-487a-9e49-93e532f2838b" (UID: "1596746b-25ca-487a-9e49-93e532f2838b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:12.926814 master-1 kubenswrapper[4771]: I1011 10:57:12.926766 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data" (OuterVolumeSpecName: "config-data") pod "1596746b-25ca-487a-9e49-93e532f2838b" (UID: "1596746b-25ca-487a-9e49-93e532f2838b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:12.927235 master-1 kubenswrapper[4771]: I1011 10:57:12.927200 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:12.987459 master-1 kubenswrapper[4771]: I1011 10:57:12.987392 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987667 master-1 kubenswrapper[4771]: I1011 10:57:12.987506 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987667 master-1 kubenswrapper[4771]: I1011 10:57:12.987555 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987667 master-1 kubenswrapper[4771]: I1011 10:57:12.987619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtfvl\" (UniqueName: \"kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987667 master-1 kubenswrapper[4771]: I1011 10:57:12.987662 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987887 master-1 kubenswrapper[4771]: I1011 10:57:12.987696 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987887 master-1 kubenswrapper[4771]: I1011 10:57:12.987720 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.987887 master-1 kubenswrapper[4771]: I1011 10:57:12.987760 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml\") pod \"9a279d25-518e-4a12-8b75-e3781fb22f05\" (UID: \"9a279d25-518e-4a12-8b75-e3781fb22f05\") " Oct 11 10:57:12.988005 master-1 kubenswrapper[4771]: I1011 10:57:12.987969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:12.988226 master-1 kubenswrapper[4771]: I1011 10:57:12.988099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:12.990158 master-1 kubenswrapper[4771]: I1011 10:57:12.990117 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1596746b-25ca-487a-9e49-93e532f2838b-logs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:12.990158 master-1 kubenswrapper[4771]: I1011 10:57:12.990157 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:12.990304 master-1 kubenswrapper[4771]: I1011 10:57:12.990176 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1596746b-25ca-487a-9e49-93e532f2838b-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:12.990304 master-1 kubenswrapper[4771]: I1011 10:57:12.990198 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scqmr\" (UniqueName: \"kubernetes.io/projected/1596746b-25ca-487a-9e49-93e532f2838b-kube-api-access-scqmr\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:12.990304 master-1 kubenswrapper[4771]: I1011 10:57:12.990222 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:12.990304 master-1 kubenswrapper[4771]: I1011 10:57:12.990235 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a279d25-518e-4a12-8b75-e3781fb22f05-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.000340 master-0 kubenswrapper[4790]: I1011 10:57:13.000273 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" 
event={"ID":"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c","Type":"ContainerStarted","Data":"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"} Oct 11 10:57:13.000931 master-0 kubenswrapper[4790]: I1011 10:57:13.000878 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:13.003704 master-1 kubenswrapper[4771]: I1011 10:57:13.003632 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts" (OuterVolumeSpecName: "scripts") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.007255 master-1 kubenswrapper[4771]: I1011 10:57:13.007192 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl" (OuterVolumeSpecName: "kube-api-access-wtfvl") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "kube-api-access-wtfvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:13.030945 master-0 kubenswrapper[4790]: I1011 10:57:13.030807 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" podStartSLOduration=3.030778655 podStartE2EDuration="3.030778655s" podCreationTimestamp="2025-10-11 10:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:13.026280931 +0000 UTC m=+1109.580741243" watchObservedRunningTime="2025-10-11 10:57:13.030778655 +0000 UTC m=+1109.585238947" Oct 11 10:57:13.037615 master-1 kubenswrapper[4771]: I1011 10:57:13.037545 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.047171 master-1 kubenswrapper[4771]: I1011 10:57:13.047117 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.048294 master-1 kubenswrapper[4771]: I1011 10:57:13.048245 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.086714 master-1 kubenswrapper[4771]: I1011 10:57:13.085573 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data" (OuterVolumeSpecName: "config-data") pod "9a279d25-518e-4a12-8b75-e3781fb22f05" (UID: "9a279d25-518e-4a12-8b75-e3781fb22f05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093389 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtfvl\" (UniqueName: \"kubernetes.io/projected/9a279d25-518e-4a12-8b75-e3781fb22f05-kube-api-access-wtfvl\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093433 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093446 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-ceilometer-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093457 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093466 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.094490 master-1 kubenswrapper[4771]: I1011 10:57:13.093475 4771 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a279d25-518e-4a12-8b75-e3781fb22f05-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:13.158267 master-1 kubenswrapper[4771]: I1011 10:57:13.158195 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:13.173777 master-1 kubenswrapper[4771]: I1011 10:57:13.173635 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:13.196139 master-1 kubenswrapper[4771]: I1011 10:57:13.196076 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:13.196411 master-1 kubenswrapper[4771]: E1011 10:57:13.196390 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-central-agent" Oct 11 10:57:13.196411 master-1 kubenswrapper[4771]: I1011 10:57:13.196406 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-central-agent" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: E1011 10:57:13.196420 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="sg-core" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196427 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="sg-core" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: E1011 10:57:13.196440 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-metadata" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196449 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-metadata" Oct 11 10:57:13.196632 
master-1 kubenswrapper[4771]: E1011 10:57:13.196457 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-notification-agent" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196463 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-notification-agent" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: E1011 10:57:13.196475 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="proxy-httpd" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196481 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="proxy-httpd" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: E1011 10:57:13.196495 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-log" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196501 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-log" Oct 11 10:57:13.196632 master-1 kubenswrapper[4771]: I1011 10:57:13.196630 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-metadata" Oct 11 10:57:13.197127 master-1 kubenswrapper[4771]: I1011 10:57:13.196650 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-central-agent" Oct 11 10:57:13.197127 master-1 kubenswrapper[4771]: I1011 10:57:13.196661 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1596746b-25ca-487a-9e49-93e532f2838b" containerName="nova-metadata-log" Oct 11 10:57:13.197127 master-1 kubenswrapper[4771]: I1011 
10:57:13.196674 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="ceilometer-notification-agent" Oct 11 10:57:13.197127 master-1 kubenswrapper[4771]: I1011 10:57:13.196686 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="sg-core" Oct 11 10:57:13.197127 master-1 kubenswrapper[4771]: I1011 10:57:13.196696 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" containerName="proxy-httpd" Oct 11 10:57:13.197640 master-1 kubenswrapper[4771]: I1011 10:57:13.197608 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:57:13.201159 master-1 kubenswrapper[4771]: I1011 10:57:13.201116 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:57:13.201874 master-1 kubenswrapper[4771]: I1011 10:57:13.201844 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 10:57:13.219223 master-2 kubenswrapper[4776]: I1011 10:57:13.219155 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:13.219774 master-2 kubenswrapper[4776]: I1011 10:57:13.219406 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" containerID="cri-o://756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" gracePeriod=30 Oct 11 10:57:13.219995 master-2 kubenswrapper[4776]: I1011 10:57:13.219953 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" containerID="cri-o://01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" 
gracePeriod=30 Oct 11 10:57:13.246538 master-1 kubenswrapper[4771]: I1011 10:57:13.246470 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:13.305710 master-1 kubenswrapper[4771]: I1011 10:57:13.305666 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.305801 master-1 kubenswrapper[4771]: I1011 10:57:13.305758 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.305943 master-1 kubenswrapper[4771]: I1011 10:57:13.305913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.306216 master-1 kubenswrapper[4771]: I1011 10:57:13.305998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbq4h\" (UniqueName: \"kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.306216 master-1 kubenswrapper[4771]: I1011 10:57:13.306048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.407660 master-1 kubenswrapper[4771]: I1011 10:57:13.407581 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbq4h\" (UniqueName: \"kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.407899 master-1 kubenswrapper[4771]: I1011 10:57:13.407675 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.407899 master-1 kubenswrapper[4771]: I1011 10:57:13.407782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.407899 master-1 kubenswrapper[4771]: I1011 10:57:13.407832 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.407899 master-1 kubenswrapper[4771]: I1011 10:57:13.407868 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: 
\"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.409250 master-1 kubenswrapper[4771]: I1011 10:57:13.409187 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.413319 master-1 kubenswrapper[4771]: I1011 10:57:13.412999 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.414923 master-1 kubenswrapper[4771]: I1011 10:57:13.414870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.415651 master-1 kubenswrapper[4771]: I1011 10:57:13.415610 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.432370 master-1 kubenswrapper[4771]: I1011 10:57:13.432295 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbq4h\" (UniqueName: \"kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h\") pod \"nova-metadata-1\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") " pod="openstack/nova-metadata-1" Oct 11 10:57:13.436580 master-1 kubenswrapper[4771]: I1011 10:57:13.436504 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-1" Oct 11 10:57:13.546284 master-1 kubenswrapper[4771]: I1011 10:57:13.543485 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:57:13.834472 master-1 kubenswrapper[4771]: I1011 10:57:13.833970 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a279d25-518e-4a12-8b75-e3781fb22f05","Type":"ContainerDied","Data":"26bb9eb9589e069594c3ca946945eba89d16e626b603cd344d3ddae2ac0f7ded"} Oct 11 10:57:13.834472 master-1 kubenswrapper[4771]: I1011 10:57:13.834060 4771 scope.go:117] "RemoveContainer" containerID="1856a76e1ae1c62aaea063758c5bd874b8a3b66130c9fe8eb095d5cef903001f" Oct 11 10:57:13.834472 master-1 kubenswrapper[4771]: I1011 10:57:13.834391 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:13.849834 master-0 kubenswrapper[4790]: I1011 10:57:13.849651 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1" Oct 11 10:57:13.870817 master-1 kubenswrapper[4771]: I1011 10:57:13.870748 4771 scope.go:117] "RemoveContainer" containerID="06b1b45211e1a6439784f737849060952f6c89a6f37d2a3b15906929efc29bff" Oct 11 10:57:13.897525 master-1 kubenswrapper[4771]: I1011 10:57:13.897439 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:13.903733 master-1 kubenswrapper[4771]: I1011 10:57:13.903665 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:13.903856 master-1 kubenswrapper[4771]: I1011 10:57:13.903735 4771 scope.go:117] "RemoveContainer" containerID="dd5ef923fe19c8499d59b13d67599b3bf99730453e82ff9eb110343bd5333a66" Oct 11 10:57:13.932778 master-1 kubenswrapper[4771]: I1011 10:57:13.932714 4771 scope.go:117] "RemoveContainer" containerID="329f7fd8063c9aaba98ccaa627d41506880615f2c94e9a537c2fbe8e686bc8be" Oct 11 10:57:13.933840 master-1 kubenswrapper[4771]: I1011 10:57:13.933774 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:13.936549 master-1 kubenswrapper[4771]: I1011 10:57:13.936509 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:13.939457 master-1 kubenswrapper[4771]: I1011 10:57:13.939421 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 10:57:13.939773 master-1 kubenswrapper[4771]: I1011 10:57:13.939727 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 10:57:13.939981 master-1 kubenswrapper[4771]: I1011 10:57:13.939936 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 10:57:13.954791 master-0 kubenswrapper[4790]: I1011 10:57:13.954432 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data\") pod \"253852fc-de03-49f0-8e18-b3ccba3d4966\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " Oct 11 10:57:13.954791 master-0 kubenswrapper[4790]: I1011 10:57:13.954559 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhcj7\" (UniqueName: \"kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7\") pod \"253852fc-de03-49f0-8e18-b3ccba3d4966\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " Oct 11 10:57:13.954791 master-0 kubenswrapper[4790]: I1011 10:57:13.954603 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle\") pod \"253852fc-de03-49f0-8e18-b3ccba3d4966\" (UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " Oct 11 10:57:13.954791 master-0 kubenswrapper[4790]: I1011 10:57:13.954696 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs\") pod \"253852fc-de03-49f0-8e18-b3ccba3d4966\" 
(UID: \"253852fc-de03-49f0-8e18-b3ccba3d4966\") " Oct 11 10:57:13.955126 master-1 kubenswrapper[4771]: I1011 10:57:13.955038 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:13.958959 master-0 kubenswrapper[4790]: I1011 10:57:13.958911 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7" (OuterVolumeSpecName: "kube-api-access-jhcj7") pod "253852fc-de03-49f0-8e18-b3ccba3d4966" (UID: "253852fc-de03-49f0-8e18-b3ccba3d4966"). InnerVolumeSpecName "kube-api-access-jhcj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:13.965087 master-0 kubenswrapper[4790]: I1011 10:57:13.965029 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs" (OuterVolumeSpecName: "logs") pod "253852fc-de03-49f0-8e18-b3ccba3d4966" (UID: "253852fc-de03-49f0-8e18-b3ccba3d4966"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:13.985252 master-0 kubenswrapper[4790]: I1011 10:57:13.985175 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "253852fc-de03-49f0-8e18-b3ccba3d4966" (UID: "253852fc-de03-49f0-8e18-b3ccba3d4966"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:13.986460 master-0 kubenswrapper[4790]: I1011 10:57:13.986368 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data" (OuterVolumeSpecName: "config-data") pod "253852fc-de03-49f0-8e18-b3ccba3d4966" (UID: "253852fc-de03-49f0-8e18-b3ccba3d4966"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:14.015639 master-0 kubenswrapper[4790]: I1011 10:57:14.015575 4790 generic.go:334] "Generic (PLEG): container finished" podID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerID="b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83" exitCode=0 Oct 11 10:57:14.016589 master-0 kubenswrapper[4790]: I1011 10:57:14.015631 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerDied","Data":"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83"} Oct 11 10:57:14.016589 master-0 kubenswrapper[4790]: I1011 10:57:14.015673 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1" Oct 11 10:57:14.016589 master-0 kubenswrapper[4790]: I1011 10:57:14.015728 4790 scope.go:117] "RemoveContainer" containerID="b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83" Oct 11 10:57:14.016589 master-0 kubenswrapper[4790]: I1011 10:57:14.015692 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"253852fc-de03-49f0-8e18-b3ccba3d4966","Type":"ContainerDied","Data":"836f61a45327b518c64768eb8146728d82afb4f76df022ecfc8b1a40c35ad99e"} Oct 11 10:57:14.028579 master-1 kubenswrapper[4771]: I1011 10:57:14.028506 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:14.033692 master-1 kubenswrapper[4771]: I1011 10:57:14.033641 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.033787 master-1 kubenswrapper[4771]: I1011 10:57:14.033759 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.033864 master-1 kubenswrapper[4771]: I1011 10:57:14.033802 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.033864 master-1 kubenswrapper[4771]: I1011 10:57:14.033822 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.033864 master-1 kubenswrapper[4771]: I1011 10:57:14.033848 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.034023 master-1 kubenswrapper[4771]: I1011 10:57:14.033872 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.034023 master-1 kubenswrapper[4771]: I1011 10:57:14.033908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.034023 master-1 kubenswrapper[4771]: I1011 10:57:14.033928 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmlq6\" (UniqueName: \"kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.048459 master-0 kubenswrapper[4790]: I1011 10:57:14.048400 4790 scope.go:117] "RemoveContainer" containerID="98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4" Oct 11 10:57:14.057207 master-0 kubenswrapper[4790]: I1011 10:57:14.056805 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:14.057207 master-0 kubenswrapper[4790]: I1011 10:57:14.056856 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253852fc-de03-49f0-8e18-b3ccba3d4966-logs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:14.057207 master-0 kubenswrapper[4790]: I1011 10:57:14.056871 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253852fc-de03-49f0-8e18-b3ccba3d4966-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:14.057207 master-0 kubenswrapper[4790]: I1011 10:57:14.056881 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhcj7\" (UniqueName: \"kubernetes.io/projected/253852fc-de03-49f0-8e18-b3ccba3d4966-kube-api-access-jhcj7\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:14.082123 master-0 kubenswrapper[4790]: I1011 
10:57:14.082073 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:14.082518 master-0 kubenswrapper[4790]: I1011 10:57:14.082373 4790 scope.go:117] "RemoveContainer" containerID="b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83" Oct 11 10:57:14.083646 master-0 kubenswrapper[4790]: E1011 10:57:14.083602 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83\": container with ID starting with b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83 not found: ID does not exist" containerID="b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83" Oct 11 10:57:14.083764 master-0 kubenswrapper[4790]: I1011 10:57:14.083670 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83"} err="failed to get container status \"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83\": rpc error: code = NotFound desc = could not find container \"b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83\": container with ID starting with b5982ea4f03ba0bc234b6f62a88bd15cfad78e0ab12b1e1ee49058903171de83 not found: ID does not exist" Oct 11 10:57:14.083807 master-0 kubenswrapper[4790]: I1011 10:57:14.083782 4790 scope.go:117] "RemoveContainer" containerID="98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4" Oct 11 10:57:14.084607 master-0 kubenswrapper[4790]: E1011 10:57:14.084545 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4\": container with ID starting with 98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4 not found: ID does not exist" 
containerID="98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4" Oct 11 10:57:14.084660 master-0 kubenswrapper[4790]: I1011 10:57:14.084611 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4"} err="failed to get container status \"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4\": rpc error: code = NotFound desc = could not find container \"98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4\": container with ID starting with 98d215d30f8c7dc0e7fbe10e4417ac3c671b823010a7e36669d3ba73c51d79e4 not found: ID does not exist" Oct 11 10:57:14.088116 master-0 kubenswrapper[4790]: I1011 10:57:14.088068 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:14.113333 master-2 kubenswrapper[4776]: I1011 10:57:14.113186 4776 generic.go:334] "Generic (PLEG): container finished" podID="1e253bd1-27b5-4423-8212-c9e698198d47" containerID="756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" exitCode=143 Oct 11 10:57:14.113333 master-2 kubenswrapper[4776]: I1011 10:57:14.113298 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954"} Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: I1011 10:57:14.119667 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: E1011 10:57:14.120140 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-log" Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: I1011 10:57:14.120160 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-log" Oct 11 
10:57:14.120799 master-0 kubenswrapper[4790]: E1011 10:57:14.120186 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-api" Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: I1011 10:57:14.120194 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-api" Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: I1011 10:57:14.120384 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-api" Oct 11 10:57:14.120799 master-0 kubenswrapper[4790]: I1011 10:57:14.120411 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" containerName="nova-api-log" Oct 11 10:57:14.121502 master-0 kubenswrapper[4790]: I1011 10:57:14.121481 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1" Oct 11 10:57:14.131562 master-0 kubenswrapper[4790]: I1011 10:57:14.129837 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 10:57:14.134889 master-0 kubenswrapper[4790]: I1011 10:57:14.134837 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 10:57:14.135643 master-1 kubenswrapper[4771]: I1011 10:57:14.135580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135663 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd\") 
pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135685 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.135991 master-1 kubenswrapper[4771]: I1011 10:57:14.135853 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmlq6\" (UniqueName: \"kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.137433 master-1 kubenswrapper[4771]: 
I1011 10:57:14.136575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.141068 master-0 kubenswrapper[4790]: I1011 10:57:14.140975 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:57:14.141638 master-1 kubenswrapper[4771]: I1011 10:57:14.138258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.141638 master-1 kubenswrapper[4771]: I1011 10:57:14.138280 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.144437 master-1 kubenswrapper[4771]: I1011 10:57:14.141882 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.144437 master-1 kubenswrapper[4771]: I1011 10:57:14.143478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.144437 master-1 kubenswrapper[4771]: I1011 10:57:14.143759 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.144437 master-1 kubenswrapper[4771]: I1011 10:57:14.144128 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.144789 master-1 kubenswrapper[4771]: I1011 10:57:14.144435 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.164931 master-0 kubenswrapper[4790]: I1011 10:57:14.164876 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:14.165967 master-1 kubenswrapper[4771]: I1011 10:57:14.165897 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmlq6\" (UniqueName: \"kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6\") pod \"ceilometer-0\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") " pod="openstack/ceilometer-0" Oct 11 10:57:14.260527 master-1 kubenswrapper[4771]: I1011 10:57:14.260409 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:14.262120 master-0 kubenswrapper[4790]: I1011 10:57:14.262032 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.262260 master-0 kubenswrapper[4790]: I1011 10:57:14.262198 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.262260 master-0 kubenswrapper[4790]: I1011 10:57:14.262239 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.262345 master-0 kubenswrapper[4790]: I1011 10:57:14.262271 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.262345 master-0 kubenswrapper[4790]: I1011 10:57:14.262303 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qlt\" (UniqueName: \"kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 
10:57:14.262435 master-0 kubenswrapper[4790]: I1011 10:57:14.262380 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.312442 master-0 kubenswrapper[4790]: I1011 10:57:14.312370 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253852fc-de03-49f0-8e18-b3ccba3d4966" path="/var/lib/kubelet/pods/253852fc-de03-49f0-8e18-b3ccba3d4966/volumes" Oct 11 10:57:14.364456 master-0 kubenswrapper[4790]: I1011 10:57:14.364387 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.364665 master-0 kubenswrapper[4790]: I1011 10:57:14.364515 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.364665 master-0 kubenswrapper[4790]: I1011 10:57:14.364582 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.364665 master-0 kubenswrapper[4790]: I1011 10:57:14.364621 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs\") pod 
\"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.364665 master-0 kubenswrapper[4790]: I1011 10:57:14.364655 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qlt\" (UniqueName: \"kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.364841 master-0 kubenswrapper[4790]: I1011 10:57:14.364685 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.365342 master-0 kubenswrapper[4790]: I1011 10:57:14.365276 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.368079 master-0 kubenswrapper[4790]: I1011 10:57:14.368024 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.368327 master-0 kubenswrapper[4790]: I1011 10:57:14.368301 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.368435 master-0 kubenswrapper[4790]: I1011 10:57:14.368402 4790 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.370915 master-0 kubenswrapper[4790]: I1011 10:57:14.370812 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.402354 master-0 kubenswrapper[4790]: I1011 10:57:14.402275 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qlt\" (UniqueName: \"kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt\") pod \"nova-api-1\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") " pod="openstack/nova-api-1" Oct 11 10:57:14.468470 master-1 kubenswrapper[4771]: I1011 10:57:14.468427 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1596746b-25ca-487a-9e49-93e532f2838b" path="/var/lib/kubelet/pods/1596746b-25ca-487a-9e49-93e532f2838b/volumes" Oct 11 10:57:14.471506 master-0 kubenswrapper[4790]: I1011 10:57:14.471421 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1" Oct 11 10:57:14.471597 master-1 kubenswrapper[4771]: I1011 10:57:14.471551 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a279d25-518e-4a12-8b75-e3781fb22f05" path="/var/lib/kubelet/pods/9a279d25-518e-4a12-8b75-e3781fb22f05/volumes" Oct 11 10:57:14.722263 master-1 kubenswrapper[4771]: I1011 10:57:14.721834 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:14.742061 master-1 kubenswrapper[4771]: W1011 10:57:14.741968 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85bacc4_2a43_4bc5_acc7_67f930ed6331.slice/crio-be2095abce3f4ac809c0e84a1273a55d460312bdbba18373f246c0e8a9de1c8d WatchSource:0}: Error finding container be2095abce3f4ac809c0e84a1273a55d460312bdbba18373f246c0e8a9de1c8d: Status 404 returned error can't find the container with id be2095abce3f4ac809c0e84a1273a55d460312bdbba18373f246c0e8a9de1c8d Oct 11 10:57:14.858221 master-1 kubenswrapper[4771]: I1011 10:57:14.858139 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerStarted","Data":"80f0dd7f7bf3c1c4412d1e7a39722e2cd092b7c0f670f349af91c500d917aa10"} Oct 11 10:57:14.858221 master-1 kubenswrapper[4771]: I1011 10:57:14.858217 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerStarted","Data":"06a1bb6aa8f352b55c65006ce50a6d509e8dff8e484bf4f522e69ff0c42ae932"} Oct 11 10:57:14.858546 master-1 kubenswrapper[4771]: I1011 10:57:14.858239 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerStarted","Data":"5409d9254a9ac28cc3c943a6262472ba07dc81056b84eeb207f5cc4057ceaafd"} Oct 11 10:57:14.859739 
master-1 kubenswrapper[4771]: I1011 10:57:14.859692 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerStarted","Data":"be2095abce3f4ac809c0e84a1273a55d460312bdbba18373f246c0e8a9de1c8d"} Oct 11 10:57:14.883845 master-1 kubenswrapper[4771]: I1011 10:57:14.883700 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-1" podStartSLOduration=1.8836824170000002 podStartE2EDuration="1.883682417s" podCreationTimestamp="2025-10-11 10:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:14.880832244 +0000 UTC m=+1866.855058685" watchObservedRunningTime="2025-10-11 10:57:14.883682417 +0000 UTC m=+1866.857908858" Oct 11 10:57:14.951604 master-0 kubenswrapper[4790]: I1011 10:57:14.951559 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"] Oct 11 10:57:15.027147 master-0 kubenswrapper[4790]: I1011 10:57:15.027058 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerStarted","Data":"94a5d59118af400f9a8aa989b9def37c55ec9402d3102f10a1ba404bedd55ff9"} Oct 11 10:57:15.874075 master-1 kubenswrapper[4771]: I1011 10:57:15.873944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerStarted","Data":"17e2191a3f20167201f2c27d474f69eb8338b55a6d88728e3892cdd495c418f4"} Oct 11 10:57:16.000877 master-1 kubenswrapper[4771]: I1011 10:57:16.000797 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:16.041635 master-0 kubenswrapper[4790]: I1011 10:57:16.041449 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" 
event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerStarted","Data":"9c19ce238ad447003ee3f2fc9c2488f5e38167272b8ba8dbab542ad52ae186e6"} Oct 11 10:57:16.041635 master-0 kubenswrapper[4790]: I1011 10:57:16.041531 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerStarted","Data":"7c441b122cde11d0c0dd59e93d58166fba491455c97757aa05c250678bb0d192"} Oct 11 10:57:16.078436 master-0 kubenswrapper[4790]: I1011 10:57:16.078324 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1" podStartSLOduration=2.078300063 podStartE2EDuration="2.078300063s" podCreationTimestamp="2025-10-11 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:16.072995097 +0000 UTC m=+1112.627455389" watchObservedRunningTime="2025-10-11 10:57:16.078300063 +0000 UTC m=+1112.632760355" Oct 11 10:57:16.898421 master-1 kubenswrapper[4771]: I1011 10:57:16.897628 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerStarted","Data":"ebd23fc1f4d62b1e39557d15a1b710afe0ddaa7347500388c4a14b187788b9df"} Oct 11 10:57:16.898421 master-1 kubenswrapper[4771]: I1011 10:57:16.897736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerStarted","Data":"7e99c868e28ca53790248959b5b2de659cb6170102635e6ace2cabc6e6703b85"} Oct 11 10:57:17.117599 master-1 kubenswrapper[4771]: I1011 10:57:17.117433 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:17.145509 master-1 kubenswrapper[4771]: I1011 10:57:17.145438 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:17.154291 master-2 kubenswrapper[4776]: I1011 10:57:17.154153 4776 generic.go:334] "Generic (PLEG): container finished" podID="1e253bd1-27b5-4423-8212-c9e698198d47" containerID="01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" exitCode=0 Oct 11 10:57:17.154291 master-2 kubenswrapper[4776]: I1011 10:57:17.154206 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70"} Oct 11 10:57:17.264739 master-2 kubenswrapper[4776]: I1011 10:57:17.264689 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:17.320716 master-2 kubenswrapper[4776]: I1011 10:57:17.320640 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321412 master-2 kubenswrapper[4776]: I1011 10:57:17.321382 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321716 master-2 kubenswrapper[4776]: I1011 10:57:17.321659 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321915 master-2 kubenswrapper[4776]: I1011 10:57:17.321900 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.323715 master-2 kubenswrapper[4776]: I1011 10:57:17.323665 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs" (OuterVolumeSpecName: "logs") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:17.331983 master-2 kubenswrapper[4776]: I1011 10:57:17.331901 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb" (OuterVolumeSpecName: "kube-api-access-bswtb") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "kube-api-access-bswtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:17.348019 master-2 kubenswrapper[4776]: I1011 10:57:17.347956 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data" (OuterVolumeSpecName: "config-data") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:17.349894 master-2 kubenswrapper[4776]: I1011 10:57:17.349839 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424918 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424987 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424998 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.425008 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.930414 master-1 kubenswrapper[4771]: I1011 10:57:17.930312 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 11 10:57:18.164297 master-2 kubenswrapper[4776]: I1011 10:57:18.164226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac"} Oct 11 10:57:18.164297 master-2 kubenswrapper[4776]: I1011 10:57:18.164306 4776 scope.go:117] "RemoveContainer" containerID="01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" Oct 11 10:57:18.165145 master-2 kubenswrapper[4776]: I1011 10:57:18.164604 4776 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:18.189259 master-2 kubenswrapper[4776]: I1011 10:57:18.189201 4776 scope.go:117] "RemoveContainer" containerID="756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" Oct 11 10:57:18.212644 master-2 kubenswrapper[4776]: I1011 10:57:18.212572 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.225068 master-2 kubenswrapper[4776]: I1011 10:57:18.224990 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.275999 master-2 kubenswrapper[4776]: I1011 10:57:18.275903 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.276712 master-2 kubenswrapper[4776]: E1011 10:57:18.276668 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" Oct 11 10:57:18.276712 master-2 kubenswrapper[4776]: I1011 10:57:18.276711 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" Oct 11 10:57:18.276837 master-2 kubenswrapper[4776]: E1011 10:57:18.276774 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.276837 master-2 kubenswrapper[4776]: I1011 10:57:18.276783 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.277905 master-2 kubenswrapper[4776]: I1011 10:57:18.277780 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.277905 master-2 kubenswrapper[4776]: I1011 10:57:18.277905 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" 
containerName="nova-api-log" Oct 11 10:57:18.280845 master-2 kubenswrapper[4776]: I1011 10:57:18.280761 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:18.284841 master-2 kubenswrapper[4776]: I1011 10:57:18.284772 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 10:57:18.285173 master-2 kubenswrapper[4776]: I1011 10:57:18.285125 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:57:18.285601 master-2 kubenswrapper[4776]: I1011 10:57:18.285480 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 10:57:18.288342 master-1 kubenswrapper[4771]: I1011 10:57:18.288263 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bwgtz"] Oct 11 10:57:18.291318 master-1 kubenswrapper[4771]: I1011 10:57:18.291008 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.292600 master-2 kubenswrapper[4776]: I1011 10:57:18.292524 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.293974 master-1 kubenswrapper[4771]: I1011 10:57:18.293879 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 11 10:57:18.299502 master-1 kubenswrapper[4771]: I1011 10:57:18.299432 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 11 10:57:18.313266 master-1 kubenswrapper[4771]: I1011 10:57:18.313208 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-5f556"] Oct 11 10:57:18.315826 master-1 kubenswrapper[4771]: I1011 10:57:18.315768 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.326266 master-1 kubenswrapper[4771]: I1011 10:57:18.326201 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bwgtz"] Oct 11 10:57:18.342203 master-1 kubenswrapper[4771]: I1011 10:57:18.342100 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-5f556"] Oct 11 10:57:18.435658 master-1 kubenswrapper[4771]: I1011 10:57:18.435561 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-1" Oct 11 10:57:18.454852 master-1 kubenswrapper[4771]: I1011 10:57:18.454782 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.454852 master-1 kubenswrapper[4771]: I1011 10:57:18.454839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.454907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7dq\" (UniqueName: \"kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.454943 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.454969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.454990 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glspf\" (UniqueName: \"kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.455014 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.455193 master-1 kubenswrapper[4771]: I1011 10:57:18.455034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 
10:57:18.466902 master-1 kubenswrapper[4771]: I1011 10:57:18.466832 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-1" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470581 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470657 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470908 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470949 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75mf\" (UniqueName: 
\"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471817 master-2 kubenswrapper[4776]: I1011 10:57:18.471130 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.545195 master-1 kubenswrapper[4771]: I1011 10:57:18.545126 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:57:18.545195 master-1 kubenswrapper[4771]: I1011 10:57:18.545194 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:57:18.556821 master-1 kubenswrapper[4771]: I1011 10:57:18.556440 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7dq\" (UniqueName: \"kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.557459 master-1 kubenswrapper[4771]: I1011 10:57:18.557408 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.557567 master-1 kubenswrapper[4771]: I1011 10:57:18.557475 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts\") pod 
\"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.557567 master-1 kubenswrapper[4771]: I1011 10:57:18.557513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glspf\" (UniqueName: \"kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.557567 master-1 kubenswrapper[4771]: I1011 10:57:18.557546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.557770 master-1 kubenswrapper[4771]: I1011 10:57:18.557574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.557770 master-1 kubenswrapper[4771]: I1011 10:57:18.557676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.557770 master-1 kubenswrapper[4771]: I1011 10:57:18.557717 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data\") 
pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.562876 master-1 kubenswrapper[4771]: I1011 10:57:18.562670 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.564809 master-1 kubenswrapper[4771]: I1011 10:57:18.564560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.566938 master-1 kubenswrapper[4771]: I1011 10:57:18.566820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.567098 master-1 kubenswrapper[4771]: I1011 10:57:18.567059 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.567980 master-1 kubenswrapper[4771]: I1011 10:57:18.567805 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: 
\"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.569230 master-1 kubenswrapper[4771]: I1011 10:57:18.569040 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.572591 master-2 kubenswrapper[4776]: I1011 10:57:18.572518 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572591 master-2 kubenswrapper[4776]: I1011 10:57:18.572580 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572724 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572741 
4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572785 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.574423 master-2 kubenswrapper[4776]: I1011 10:57:18.574354 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576458 master-2 kubenswrapper[4776]: I1011 10:57:18.576409 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576776 master-2 kubenswrapper[4776]: I1011 10:57:18.576745 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576827 master-2 kubenswrapper[4776]: I1011 10:57:18.576744 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: 
\"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576978 master-2 kubenswrapper[4776]: I1011 10:57:18.576909 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.590401 master-1 kubenswrapper[4771]: I1011 10:57:18.588561 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7dq\" (UniqueName: \"kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq\") pod \"nova-cell1-host-discover-5f556\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.595927 master-2 kubenswrapper[4776]: I1011 10:57:18.595824 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.598159 master-1 kubenswrapper[4771]: I1011 10:57:18.598115 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glspf\" (UniqueName: \"kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf\") pod \"nova-cell1-cell-mapping-bwgtz\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.617482 master-1 kubenswrapper[4771]: I1011 10:57:18.617405 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:18.644727 master-1 kubenswrapper[4771]: I1011 10:57:18.644643 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:18.659870 master-2 kubenswrapper[4776]: I1011 10:57:18.659759 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:18.920485 master-1 kubenswrapper[4771]: I1011 10:57:18.920413 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerStarted","Data":"c214200e01db88cd82a2be7612b0009a76284d886cfd5582dbd4340ef9a3bf14"} Oct 11 10:57:18.921370 master-1 kubenswrapper[4771]: I1011 10:57:18.921321 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-central-agent" containerID="cri-o://17e2191a3f20167201f2c27d474f69eb8338b55a6d88728e3892cdd495c418f4" gracePeriod=30 Oct 11 10:57:18.921471 master-1 kubenswrapper[4771]: I1011 10:57:18.921431 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="proxy-httpd" containerID="cri-o://c214200e01db88cd82a2be7612b0009a76284d886cfd5582dbd4340ef9a3bf14" gracePeriod=30 Oct 11 10:57:18.921635 master-1 kubenswrapper[4771]: I1011 10:57:18.921601 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-notification-agent" containerID="cri-o://7e99c868e28ca53790248959b5b2de659cb6170102635e6ace2cabc6e6703b85" gracePeriod=30 Oct 11 10:57:18.921701 master-1 kubenswrapper[4771]: I1011 10:57:18.921681 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="sg-core" containerID="cri-o://ebd23fc1f4d62b1e39557d15a1b710afe0ddaa7347500388c4a14b187788b9df" 
gracePeriod=30 Oct 11 10:57:18.953144 master-1 kubenswrapper[4771]: I1011 10:57:18.953102 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-1" Oct 11 10:57:18.954492 master-1 kubenswrapper[4771]: I1011 10:57:18.954438 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.899843392 podStartE2EDuration="5.954421811s" podCreationTimestamp="2025-10-11 10:57:13 +0000 UTC" firstStartedPulling="2025-10-11 10:57:14.746498557 +0000 UTC m=+1866.720725008" lastFinishedPulling="2025-10-11 10:57:17.801076956 +0000 UTC m=+1869.775303427" observedRunningTime="2025-10-11 10:57:18.953456083 +0000 UTC m=+1870.927682564" watchObservedRunningTime="2025-10-11 10:57:18.954421811 +0000 UTC m=+1870.928648252" Oct 11 10:57:19.007663 master-2 kubenswrapper[4776]: I1011 10:57:19.006580 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:19.007663 master-2 kubenswrapper[4776]: I1011 10:57:19.006849 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" containerID="cri-o://c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554" gracePeriod=30 Oct 11 10:57:19.022443 master-2 kubenswrapper[4776]: I1011 10:57:19.020891 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.063661 master-2 kubenswrapper[4776]: I1011 10:57:19.061157 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.064761 master-2 kubenswrapper[4776]: I1011 10:57:19.064708 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.066672 master-1 kubenswrapper[4771]: I1011 10:57:19.066618 4771 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bwgtz"] Oct 11 10:57:19.108576 master-2 kubenswrapper[4776]: I1011 10:57:19.108520 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:19.181338 master-2 kubenswrapper[4776]: I1011 10:57:19.181296 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894"} Oct 11 10:57:19.201255 master-1 kubenswrapper[4771]: I1011 10:57:19.201211 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-5f556"] Oct 11 10:57:19.223926 master-1 kubenswrapper[4771]: W1011 10:57:19.223884 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae49cc63_d351_440f_9334_4ef2550565a2.slice/crio-bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea WatchSource:0}: Error finding container bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea: Status 404 returned error can't find the container with id bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea Oct 11 10:57:19.936390 master-1 kubenswrapper[4771]: I1011 10:57:19.936163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-5f556" event={"ID":"ae49cc63-d351-440f-9334-4ef2550565a2","Type":"ContainerStarted","Data":"48e35ef26a01bac7444e96fa2a9fa3fe07bd9eb6b20913ec8c1c945288cc11bc"} Oct 11 10:57:19.936390 master-1 kubenswrapper[4771]: I1011 10:57:19.936253 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-5f556" event={"ID":"ae49cc63-d351-440f-9334-4ef2550565a2","Type":"ContainerStarted","Data":"bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea"} Oct 11 10:57:19.939417 master-1 
kubenswrapper[4771]: I1011 10:57:19.939318 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bwgtz" event={"ID":"709c362a-6ace-46bf-9f94-86852f78f6f2","Type":"ContainerStarted","Data":"99d58d9d6b8b62fa18ae8ba7508466dad2a9761e505b9274423ecba095a9de64"} Oct 11 10:57:19.939498 master-1 kubenswrapper[4771]: I1011 10:57:19.939425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bwgtz" event={"ID":"709c362a-6ace-46bf-9f94-86852f78f6f2","Type":"ContainerStarted","Data":"4cefc06e3826c53b2bddfc65675ffb15401ad9ff18e58b5e2736c262f90fe5e5"} Oct 11 10:57:19.947373 master-1 kubenswrapper[4771]: I1011 10:57:19.947170 4771 generic.go:334] "Generic (PLEG): container finished" podID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerID="c214200e01db88cd82a2be7612b0009a76284d886cfd5582dbd4340ef9a3bf14" exitCode=0 Oct 11 10:57:19.947373 master-1 kubenswrapper[4771]: I1011 10:57:19.947220 4771 generic.go:334] "Generic (PLEG): container finished" podID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerID="ebd23fc1f4d62b1e39557d15a1b710afe0ddaa7347500388c4a14b187788b9df" exitCode=2 Oct 11 10:57:19.947373 master-1 kubenswrapper[4771]: I1011 10:57:19.947238 4771 generic.go:334] "Generic (PLEG): container finished" podID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerID="7e99c868e28ca53790248959b5b2de659cb6170102635e6ace2cabc6e6703b85" exitCode=0 Oct 11 10:57:19.947373 master-1 kubenswrapper[4771]: I1011 10:57:19.947271 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerDied","Data":"c214200e01db88cd82a2be7612b0009a76284d886cfd5582dbd4340ef9a3bf14"} Oct 11 10:57:19.947373 master-1 kubenswrapper[4771]: I1011 10:57:19.947336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerDied","Data":"ebd23fc1f4d62b1e39557d15a1b710afe0ddaa7347500388c4a14b187788b9df"} Oct 11 10:57:19.947585 master-1 kubenswrapper[4771]: I1011 10:57:19.947397 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerDied","Data":"7e99c868e28ca53790248959b5b2de659cb6170102635e6ace2cabc6e6703b85"} Oct 11 10:57:19.975456 master-1 kubenswrapper[4771]: I1011 10:57:19.975284 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-5f556" podStartSLOduration=1.975250991 podStartE2EDuration="1.975250991s" podCreationTimestamp="2025-10-11 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:19.970171663 +0000 UTC m=+1871.944398154" watchObservedRunningTime="2025-10-11 10:57:19.975250991 +0000 UTC m=+1871.949477502" Oct 11 10:57:20.000678 master-1 kubenswrapper[4771]: I1011 10:57:20.000568 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bwgtz" podStartSLOduration=2.000542879 podStartE2EDuration="2.000542879s" podCreationTimestamp="2025-10-11 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:19.995153942 +0000 UTC m=+1871.969380433" watchObservedRunningTime="2025-10-11 10:57:20.000542879 +0000 UTC m=+1871.974769360" Oct 11 10:57:20.066873 master-2 kubenswrapper[4776]: I1011 10:57:20.066811 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" path="/var/lib/kubelet/pods/1e253bd1-27b5-4423-8212-c9e698198d47/volumes" Oct 11 10:57:20.191277 master-2 kubenswrapper[4776]: I1011 10:57:20.191102 4776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c"} Oct 11 10:57:20.191277 master-2 kubenswrapper[4776]: I1011 10:57:20.191162 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256"} Oct 11 10:57:20.231701 master-2 kubenswrapper[4776]: I1011 10:57:20.227311 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.227287888 podStartE2EDuration="2.227287888s" podCreationTimestamp="2025-10-11 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:20.221317116 +0000 UTC m=+1875.005743835" watchObservedRunningTime="2025-10-11 10:57:20.227287888 +0000 UTC m=+1875.011714607" Oct 11 10:57:20.698085 master-0 kubenswrapper[4790]: I1011 10:57:20.698012 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" Oct 11 10:57:20.822661 master-1 kubenswrapper[4771]: I1011 10:57:20.822603 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"] Oct 11 10:57:20.822901 master-1 kubenswrapper[4771]: I1011 10:57:20.822868 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="dnsmasq-dns" containerID="cri-o://12c6ff03be76828491f921afc8c9ec6e58880687794d58647b68e34022915241" gracePeriod=10 Oct 11 10:57:20.861764 master-1 kubenswrapper[4771]: I1011 10:57:20.861294 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"] Oct 11 
10:57:20.864023 master-1 kubenswrapper[4771]: I1011 10:57:20.863974 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 10:57:20.881943 master-1 kubenswrapper[4771]: I1011 10:57:20.881854 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"] Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920700 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897k7\" (UniqueName: \"kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920857 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " 
pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:20.921978 master-1 kubenswrapper[4771]: I1011 10:57:20.920983 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:20.958133 master-1 kubenswrapper[4771]: I1011 10:57:20.958083 4771 generic.go:334] "Generic (PLEG): container finished" podID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerID="12c6ff03be76828491f921afc8c9ec6e58880687794d58647b68e34022915241" exitCode=0
Oct 11 10:57:20.958250 master-1 kubenswrapper[4771]: I1011 10:57:20.958197 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" event={"ID":"a23d84be-f5ab-4261-9ba2-d94aaf104a59","Type":"ContainerDied","Data":"12c6ff03be76828491f921afc8c9ec6e58880687794d58647b68e34022915241"}
Oct 11 10:57:20.961726 master-1 kubenswrapper[4771]: I1011 10:57:20.961690 4771 generic.go:334] "Generic (PLEG): container finished" podID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerID="17e2191a3f20167201f2c27d474f69eb8338b55a6d88728e3892cdd495c418f4" exitCode=0
Oct 11 10:57:20.961787 master-1 kubenswrapper[4771]: I1011 10:57:20.961758 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerDied","Data":"17e2191a3f20167201f2c27d474f69eb8338b55a6d88728e3892cdd495c418f4"}
Oct 11 10:57:21.021984 master-1 kubenswrapper[4771]: I1011 10:57:21.021937 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-897k7\" (UniqueName: \"kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.022515 master-1 kubenswrapper[4771]: I1011 10:57:21.021998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.022738 master-1 kubenswrapper[4771]: I1011 10:57:21.022625 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024183 master-1 kubenswrapper[4771]: I1011 10:57:21.022834 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024183 master-1 kubenswrapper[4771]: I1011 10:57:21.022885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024183 master-1 kubenswrapper[4771]: I1011 10:57:21.023064 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024183 master-1 kubenswrapper[4771]: I1011 10:57:21.023112 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024183 master-1 kubenswrapper[4771]: I1011 10:57:21.023747 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024472 master-1 kubenswrapper[4771]: I1011 10:57:21.024182 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024472 master-1 kubenswrapper[4771]: I1011 10:57:21.024301 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.024629 master-1 kubenswrapper[4771]: I1011 10:57:21.024589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.063185 master-1 kubenswrapper[4771]: I1011 10:57:21.063127 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-897k7\" (UniqueName: \"kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7\") pod \"dnsmasq-dns-6cb9b8c955-b5qwc\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.202146 master-2 kubenswrapper[4776]: I1011 10:57:21.201008 4776 generic.go:334] "Generic (PLEG): container finished" podID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerID="c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554" exitCode=0
Oct 11 10:57:21.202146 master-2 kubenswrapper[4776]: I1011 10:57:21.201320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerDied","Data":"c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554"}
Oct 11 10:57:21.321469 master-1 kubenswrapper[4771]: I1011 10:57:21.321410 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc"
Oct 11 10:57:21.529427 master-1 kubenswrapper[4771]: I1011 10:57:21.528900 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:57:21.538170 master-1 kubenswrapper[4771]: I1011 10:57:21.537724 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:57:21.640123 master-1 kubenswrapper[4771]: I1011 10:57:21.640063 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640330 master-1 kubenswrapper[4771]: I1011 10:57:21.640140 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640330 master-1 kubenswrapper[4771]: I1011 10:57:21.640188 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7n6h\" (UniqueName: \"kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.640330 master-1 kubenswrapper[4771]: I1011 10:57:21.640235 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.640330 master-1 kubenswrapper[4771]: I1011 10:57:21.640263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.640494 master-1 kubenswrapper[4771]: I1011 10:57:21.640421 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640494 master-1 kubenswrapper[4771]: I1011 10:57:21.640449 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmlq6\" (UniqueName: \"kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640494 master-1 kubenswrapper[4771]: I1011 10:57:21.640475 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640591 master-1 kubenswrapper[4771]: I1011 10:57:21.640509 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.640591 master-1 kubenswrapper[4771]: I1011 10:57:21.640537 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640735 master-1 kubenswrapper[4771]: I1011 10:57:21.640706 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.640794 master-1 kubenswrapper[4771]: I1011 10:57:21.640774 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640831 master-1 kubenswrapper[4771]: I1011 10:57:21.640802 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml\") pod \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\" (UID: \"e85bacc4-2a43-4bc5-acc7-67f930ed6331\") "
Oct 11 10:57:21.640867 master-1 kubenswrapper[4771]: I1011 10:57:21.640842 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config\") pod \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\" (UID: \"a23d84be-f5ab-4261-9ba2-d94aaf104a59\") "
Oct 11 10:57:21.643117 master-1 kubenswrapper[4771]: I1011 10:57:21.642822 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:57:21.643117 master-1 kubenswrapper[4771]: I1011 10:57:21.642991 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:57:21.646146 master-1 kubenswrapper[4771]: I1011 10:57:21.646099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h" (OuterVolumeSpecName: "kube-api-access-k7n6h") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "kube-api-access-k7n6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:57:21.646799 master-1 kubenswrapper[4771]: I1011 10:57:21.646762 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6" (OuterVolumeSpecName: "kube-api-access-tmlq6") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "kube-api-access-tmlq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:57:21.647064 master-1 kubenswrapper[4771]: I1011 10:57:21.647029 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts" (OuterVolumeSpecName: "scripts") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.684852 master-1 kubenswrapper[4771]: I1011 10:57:21.684786 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:57:21.685149 master-1 kubenswrapper[4771]: I1011 10:57:21.685118 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:57:21.686515 master-1 kubenswrapper[4771]: I1011 10:57:21.686468 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.686957 master-1 kubenswrapper[4771]: I1011 10:57:21.686912 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config" (OuterVolumeSpecName: "config") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:57:21.687751 master-1 kubenswrapper[4771]: I1011 10:57:21.687702 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.697748 master-1 kubenswrapper[4771]: I1011 10:57:21.697716 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:57:21.702223 master-1 kubenswrapper[4771]: I1011 10:57:21.702189 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a23d84be-f5ab-4261-9ba2-d94aaf104a59" (UID: "a23d84be-f5ab-4261-9ba2-d94aaf104a59"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 10:57:21.727130 master-1 kubenswrapper[4771]: I1011 10:57:21.727069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.743278 master-1 kubenswrapper[4771]: I1011 10:57:21.743220 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-run-httpd\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743387 master-1 kubenswrapper[4771]: I1011 10:57:21.743300 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743387 master-1 kubenswrapper[4771]: I1011 10:57:21.743320 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-config\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743387 master-1 kubenswrapper[4771]: I1011 10:57:21.743336 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743387 master-1 kubenswrapper[4771]: I1011 10:57:21.743375 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743393 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7n6h\" (UniqueName: \"kubernetes.io/projected/a23d84be-f5ab-4261-9ba2-d94aaf104a59-kube-api-access-k7n6h\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743411 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743426 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-svc\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743442 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e85bacc4-2a43-4bc5-acc7-67f930ed6331-log-httpd\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743460 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmlq6\" (UniqueName: \"kubernetes.io/projected/e85bacc4-2a43-4bc5-acc7-67f930ed6331-kube-api-access-tmlq6\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743477 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743492 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-ceilometer-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.743505 master-1 kubenswrapper[4771]: I1011 10:57:21.743507 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23d84be-f5ab-4261-9ba2-d94aaf104a59-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.759802 master-1 kubenswrapper[4771]: I1011 10:57:21.759728 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data" (OuterVolumeSpecName: "config-data") pod "e85bacc4-2a43-4bc5-acc7-67f930ed6331" (UID: "e85bacc4-2a43-4bc5-acc7-67f930ed6331"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.798539 master-1 kubenswrapper[4771]: I1011 10:57:21.798503 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"]
Oct 11 10:57:21.845081 master-1 kubenswrapper[4771]: I1011 10:57:21.845056 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85bacc4-2a43-4bc5-acc7-67f930ed6331-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:21.862697 master-2 kubenswrapper[4776]: I1011 10:57:21.862643 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 10:57:21.959381 master-2 kubenswrapper[4776]: I1011 10:57:21.959315 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") "
Oct 11 10:57:21.959784 master-2 kubenswrapper[4776]: I1011 10:57:21.959741 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") "
Oct 11 10:57:21.960339 master-2 kubenswrapper[4776]: I1011 10:57:21.960305 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") "
Oct 11 10:57:21.967365 master-2 kubenswrapper[4776]: I1011 10:57:21.967304 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6" (OuterVolumeSpecName: "kube-api-access-hzwk6") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "kube-api-access-hzwk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:57:21.971220 master-1 kubenswrapper[4771]: I1011 10:57:21.971182 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" event={"ID":"396249ad-10d3-48d9-ba43-46df789198c9","Type":"ContainerStarted","Data":"a14cd6677525c65737d0849bb25554909c4ebb8c2b5761120df0ab99b361a3df"}
Oct 11 10:57:21.973968 master-1 kubenswrapper[4771]: I1011 10:57:21.973942 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4" event={"ID":"a23d84be-f5ab-4261-9ba2-d94aaf104a59","Type":"ContainerDied","Data":"b0d191f73463f5a71aeb190809caf8100724d2aeec1100c76a864a58130b5a3d"}
Oct 11 10:57:21.974043 master-1 kubenswrapper[4771]: I1011 10:57:21.973954 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-j7kt4"
Oct 11 10:57:21.974043 master-1 kubenswrapper[4771]: I1011 10:57:21.973997 4771 scope.go:117] "RemoveContainer" containerID="12c6ff03be76828491f921afc8c9ec6e58880687794d58647b68e34022915241"
Oct 11 10:57:21.978113 master-1 kubenswrapper[4771]: I1011 10:57:21.978047 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e85bacc4-2a43-4bc5-acc7-67f930ed6331","Type":"ContainerDied","Data":"be2095abce3f4ac809c0e84a1273a55d460312bdbba18373f246c0e8a9de1c8d"}
Oct 11 10:57:21.978160 master-1 kubenswrapper[4771]: I1011 10:57:21.978128 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:57:21.988612 master-2 kubenswrapper[4776]: I1011 10:57:21.986994 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:21.994926 master-1 kubenswrapper[4771]: I1011 10:57:21.994845 4771 scope.go:117] "RemoveContainer" containerID="14cbbf6abeb88d28f08a7099ac711df9a488bc85a2f7bd445bc229705a05a25b"
Oct 11 10:57:21.997728 master-2 kubenswrapper[4776]: I1011 10:57:21.997474 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data" (OuterVolumeSpecName: "config-data") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:22.035128 master-1 kubenswrapper[4771]: I1011 10:57:22.035082 4771 scope.go:117] "RemoveContainer" containerID="c214200e01db88cd82a2be7612b0009a76284d886cfd5582dbd4340ef9a3bf14"
Oct 11 10:57:22.043561 master-1 kubenswrapper[4771]: I1011 10:57:22.043524 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"]
Oct 11 10:57:22.051896 master-1 kubenswrapper[4771]: I1011 10:57:22.051859 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-j7kt4"]
Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063111 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063148 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063157 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") on node \"master-2\" DevicePath \"\""
Oct 11 10:57:22.076703 master-1 kubenswrapper[4771]: I1011 10:57:22.076658 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:57:22.103789 master-1 kubenswrapper[4771]: I1011 10:57:22.103749 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:57:22.127659 master-1 kubenswrapper[4771]: I1011 10:57:22.127615 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:57:22.128196 master-1 kubenswrapper[4771]: E1011 10:57:22.128181 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="init"
Oct 11 10:57:22.128265 master-1 kubenswrapper[4771]: I1011 10:57:22.128255 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="init"
Oct 11 10:57:22.128339 master-1 kubenswrapper[4771]: E1011 10:57:22.128327 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-notification-agent"
Oct 11 10:57:22.128406 master-1 kubenswrapper[4771]: I1011 10:57:22.128396 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-notification-agent"
Oct 11 10:57:22.128479 master-1 kubenswrapper[4771]: E1011 10:57:22.128470 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="dnsmasq-dns"
Oct 11 10:57:22.128530 master-1 kubenswrapper[4771]: I1011 10:57:22.128521 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="dnsmasq-dns"
Oct 11 10:57:22.128584 master-1 kubenswrapper[4771]: E1011 10:57:22.128575 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-central-agent"
Oct 11 10:57:22.128639 master-1 kubenswrapper[4771]: I1011 10:57:22.128630 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-central-agent"
Oct 11 10:57:22.128694 master-1 kubenswrapper[4771]: E1011 10:57:22.128685 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="proxy-httpd"
Oct 11 10:57:22.128743 master-1 kubenswrapper[4771]: I1011 10:57:22.128734 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="proxy-httpd"
Oct 11 10:57:22.128796 master-1 kubenswrapper[4771]: E1011 10:57:22.128788 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="sg-core"
Oct 11 10:57:22.128854 master-1 kubenswrapper[4771]: I1011 10:57:22.128845 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="sg-core"
Oct 11 10:57:22.129032 master-1 kubenswrapper[4771]: I1011 10:57:22.129021 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="sg-core"
Oct 11 10:57:22.129103 master-1 kubenswrapper[4771]: I1011 10:57:22.129094 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-central-agent"
Oct 11 10:57:22.129166 master-1 kubenswrapper[4771]: I1011 10:57:22.129158 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" containerName="dnsmasq-dns"
Oct 11 10:57:22.129226 master-1 kubenswrapper[4771]: I1011 10:57:22.129217 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="ceilometer-notification-agent"
Oct 11 10:57:22.129295 master-1 kubenswrapper[4771]: I1011 10:57:22.129285 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" containerName="proxy-httpd"
Oct 11 10:57:22.131625 master-1 kubenswrapper[4771]: I1011 10:57:22.131612 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 10:57:22.141530 master-1 kubenswrapper[4771]: I1011 10:57:22.136616 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 11 10:57:22.141530 master-1 kubenswrapper[4771]: I1011 10:57:22.136832 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 10:57:22.141530 master-1 kubenswrapper[4771]: I1011 10:57:22.136836 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 10:57:22.143496 master-1 kubenswrapper[4771]: I1011 10:57:22.143469 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218489 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerDied","Data":"443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c"}
Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218590 4776 scope.go:117] "RemoveContainer" containerID="c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554"
Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218878 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 10:57:22.258130 master-2 kubenswrapper[4776]: I1011 10:57:22.258054 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 10:57:22.265856 master-2 kubenswrapper[4776]: I1011 10:57:22.265805 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 10:57:22.269189 master-1 kubenswrapper[4771]: I1011 10:57:22.269129 4771 scope.go:117] "RemoveContainer" containerID="ebd23fc1f4d62b1e39557d15a1b710afe0ddaa7347500388c4a14b187788b9df"
Oct 11 10:57:22.275858 master-1 kubenswrapper[4771]: I1011 10:57:22.275783 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276005 master-1 kubenswrapper[4771]: I1011 10:57:22.275907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276005 master-1 kubenswrapper[4771]: I1011 10:57:22.275984 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276197 master-1 kubenswrapper[4771]: I1011 10:57:22.276173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276340 master-1 kubenswrapper[4771]: I1011 10:57:22.276212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276340 master-1 kubenswrapper[4771]: I1011 10:57:22.276249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276340 master-1 kubenswrapper[4771]: I1011 10:57:22.276315 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.276653 master-1 kubenswrapper[4771]: I1011 10:57:22.276351 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhv2\" (UniqueName: \"kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0"
Oct 11 10:57:22.281462 master-2 kubenswrapper[4776]: I1011 10:57:22.281402 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 10:57:22.281905 master-2 kubenswrapper[4776]: E1011 10:57:22.281874 4776 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.281905 master-2 kubenswrapper[4776]: I1011 10:57:22.281894 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.282127 master-2 kubenswrapper[4776]: I1011 10:57:22.282100 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.282902 master-2 kubenswrapper[4776]: I1011 10:57:22.282876 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:22.288748 master-2 kubenswrapper[4776]: I1011 10:57:22.288706 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:57:22.295837 master-1 kubenswrapper[4771]: I1011 10:57:22.295713 4771 scope.go:117] "RemoveContainer" containerID="7e99c868e28ca53790248959b5b2de659cb6170102635e6ace2cabc6e6703b85" Oct 11 10:57:22.307641 master-2 kubenswrapper[4776]: I1011 10:57:22.307534 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:22.321615 master-1 kubenswrapper[4771]: I1011 10:57:22.321566 4771 scope.go:117] "RemoveContainer" containerID="17e2191a3f20167201f2c27d474f69eb8338b55a6d88728e3892cdd495c418f4" Oct 11 10:57:22.367786 master-2 kubenswrapper[4776]: I1011 10:57:22.367733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.368010 master-2 kubenswrapper[4776]: I1011 10:57:22.367855 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.368010 master-2 kubenswrapper[4776]: I1011 10:57:22.367936 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.379849 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.379964 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380009 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380095 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380119 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380161 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.380191 master-1 kubenswrapper[4771]: I1011 10:57:22.380181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhv2\" (UniqueName: \"kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.385484 master-1 kubenswrapper[4771]: I1011 10:57:22.380788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " 
pod="openstack/ceilometer-0" Oct 11 10:57:22.385484 master-1 kubenswrapper[4771]: I1011 10:57:22.381165 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.385484 master-1 kubenswrapper[4771]: I1011 10:57:22.385135 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.385779 master-1 kubenswrapper[4771]: I1011 10:57:22.385599 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.385779 master-1 kubenswrapper[4771]: I1011 10:57:22.385639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.395444 master-1 kubenswrapper[4771]: I1011 10:57:22.389170 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.395444 master-1 kubenswrapper[4771]: I1011 10:57:22.391506 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.409726 master-1 kubenswrapper[4771]: I1011 10:57:22.409681 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhv2\" (UniqueName: \"kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2\") pod \"ceilometer-0\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") " pod="openstack/ceilometer-0" Oct 11 10:57:22.446306 master-1 kubenswrapper[4771]: I1011 10:57:22.446242 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23d84be-f5ab-4261-9ba2-d94aaf104a59" path="/var/lib/kubelet/pods/a23d84be-f5ab-4261-9ba2-d94aaf104a59/volumes" Oct 11 10:57:22.447163 master-1 kubenswrapper[4771]: I1011 10:57:22.447114 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85bacc4-2a43-4bc5-acc7-67f930ed6331" path="/var/lib/kubelet/pods/e85bacc4-2a43-4bc5-acc7-67f930ed6331/volumes" Oct 11 10:57:22.471983 master-2 kubenswrapper[4776]: I1011 10:57:22.471877 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.472710 master-2 kubenswrapper[4776]: I1011 10:57:22.472650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.472831 master-2 kubenswrapper[4776]: I1011 10:57:22.472813 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.480578 master-2 kubenswrapper[4776]: I1011 10:57:22.480543 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.480746 master-2 kubenswrapper[4776]: I1011 10:57:22.480657 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.494965 master-2 kubenswrapper[4776]: I1011 10:57:22.494765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.582316 master-1 kubenswrapper[4771]: I1011 10:57:22.582262 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 10:57:22.601978 master-2 kubenswrapper[4776]: I1011 10:57:22.601816 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:22.992475 master-1 kubenswrapper[4771]: I1011 10:57:22.992288 4771 generic.go:334] "Generic (PLEG): container finished" podID="ae49cc63-d351-440f-9334-4ef2550565a2" containerID="48e35ef26a01bac7444e96fa2a9fa3fe07bd9eb6b20913ec8c1c945288cc11bc" exitCode=0 Oct 11 10:57:22.992727 master-1 kubenswrapper[4771]: I1011 10:57:22.992498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-5f556" event={"ID":"ae49cc63-d351-440f-9334-4ef2550565a2","Type":"ContainerDied","Data":"48e35ef26a01bac7444e96fa2a9fa3fe07bd9eb6b20913ec8c1c945288cc11bc"} Oct 11 10:57:22.995578 master-1 kubenswrapper[4771]: I1011 10:57:22.995549 4771 generic.go:334] "Generic (PLEG): container finished" podID="396249ad-10d3-48d9-ba43-46df789198c9" containerID="56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2" exitCode=0 Oct 11 10:57:22.995696 master-1 kubenswrapper[4771]: I1011 10:57:22.995591 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" event={"ID":"396249ad-10d3-48d9-ba43-46df789198c9","Type":"ContainerDied","Data":"56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2"} Oct 11 10:57:23.084455 master-1 kubenswrapper[4771]: I1011 10:57:23.084392 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 10:57:23.111941 master-2 kubenswrapper[4776]: I1011 10:57:23.111850 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:23.122371 master-1 kubenswrapper[4771]: W1011 10:57:23.122313 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod926f8cdc_bbf6_4328_8436_8428df0a679b.slice/crio-2a75cd52ad1f72be5ccb56e9952a02a3c2bfd8c3f845acfbe551f4d25daeffc2 WatchSource:0}: Error finding container 2a75cd52ad1f72be5ccb56e9952a02a3c2bfd8c3f845acfbe551f4d25daeffc2: 
Status 404 returned error can't find the container with id 2a75cd52ad1f72be5ccb56e9952a02a3c2bfd8c3f845acfbe551f4d25daeffc2 Oct 11 10:57:23.229651 master-2 kubenswrapper[4776]: I1011 10:57:23.229588 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerStarted","Data":"bb574ec91c838b585beb61a822a5e96518597aaa45bd1ce4b4d86481bf2e5fb7"} Oct 11 10:57:23.545369 master-1 kubenswrapper[4771]: I1011 10:57:23.545199 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:57:23.545875 master-1 kubenswrapper[4771]: I1011 10:57:23.545855 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:57:24.016517 master-1 kubenswrapper[4771]: I1011 10:57:24.016420 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" event={"ID":"396249ad-10d3-48d9-ba43-46df789198c9","Type":"ContainerStarted","Data":"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760"} Oct 11 10:57:24.016850 master-1 kubenswrapper[4771]: I1011 10:57:24.016534 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 10:57:24.018814 master-1 kubenswrapper[4771]: I1011 10:57:24.018498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerStarted","Data":"2a75cd52ad1f72be5ccb56e9952a02a3c2bfd8c3f845acfbe551f4d25daeffc2"} Oct 11 10:57:24.055669 master-1 kubenswrapper[4771]: I1011 10:57:24.055509 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" podStartSLOduration=4.055446081 podStartE2EDuration="4.055446081s" podCreationTimestamp="2025-10-11 10:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:24.049714924 +0000 UTC m=+1876.023941405" watchObservedRunningTime="2025-10-11 10:57:24.055446081 +0000 UTC m=+1876.029672562" Oct 11 10:57:24.068759 master-2 kubenswrapper[4776]: I1011 10:57:24.068698 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" path="/var/lib/kubelet/pods/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b/volumes" Oct 11 10:57:24.243245 master-2 kubenswrapper[4776]: I1011 10:57:24.243145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerStarted","Data":"a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c"} Oct 11 10:57:24.276485 master-2 kubenswrapper[4776]: I1011 10:57:24.276388 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.276369935 podStartE2EDuration="2.276369935s" podCreationTimestamp="2025-10-11 10:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:24.267206017 +0000 UTC m=+1879.051632726" watchObservedRunningTime="2025-10-11 10:57:24.276369935 +0000 UTC m=+1879.060796644" Oct 11 10:57:24.472322 master-0 kubenswrapper[4790]: I1011 10:57:24.472207 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:57:24.472322 master-0 kubenswrapper[4790]: I1011 10:57:24.472304 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:57:24.513726 master-1 kubenswrapper[4771]: I1011 10:57:24.513639 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:24.565677 master-1 kubenswrapper[4771]: I1011 10:57:24.565565 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.129.0.168:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:24.566015 master-1 kubenswrapper[4771]: I1011 10:57:24.565927 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.129.0.168:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:24.571993 master-1 kubenswrapper[4771]: I1011 10:57:24.571934 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle\") pod \"ae49cc63-d351-440f-9334-4ef2550565a2\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " Oct 11 10:57:24.572154 master-1 kubenswrapper[4771]: I1011 10:57:24.572017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data\") pod \"ae49cc63-d351-440f-9334-4ef2550565a2\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " Oct 11 10:57:24.572154 master-1 kubenswrapper[4771]: I1011 10:57:24.572108 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d7dq\" (UniqueName: \"kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq\") pod \"ae49cc63-d351-440f-9334-4ef2550565a2\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " Oct 11 10:57:24.572154 master-1 
kubenswrapper[4771]: I1011 10:57:24.572156 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts\") pod \"ae49cc63-d351-440f-9334-4ef2550565a2\" (UID: \"ae49cc63-d351-440f-9334-4ef2550565a2\") " Oct 11 10:57:24.577079 master-1 kubenswrapper[4771]: I1011 10:57:24.576938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts" (OuterVolumeSpecName: "scripts") pod "ae49cc63-d351-440f-9334-4ef2550565a2" (UID: "ae49cc63-d351-440f-9334-4ef2550565a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:24.579472 master-1 kubenswrapper[4771]: I1011 10:57:24.578744 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq" (OuterVolumeSpecName: "kube-api-access-6d7dq") pod "ae49cc63-d351-440f-9334-4ef2550565a2" (UID: "ae49cc63-d351-440f-9334-4ef2550565a2"). InnerVolumeSpecName "kube-api-access-6d7dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:24.609826 master-1 kubenswrapper[4771]: I1011 10:57:24.599216 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae49cc63-d351-440f-9334-4ef2550565a2" (UID: "ae49cc63-d351-440f-9334-4ef2550565a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:24.609826 master-1 kubenswrapper[4771]: I1011 10:57:24.600134 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data" (OuterVolumeSpecName: "config-data") pod "ae49cc63-d351-440f-9334-4ef2550565a2" (UID: "ae49cc63-d351-440f-9334-4ef2550565a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:24.676801 master-1 kubenswrapper[4771]: I1011 10:57:24.676723 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:24.676801 master-1 kubenswrapper[4771]: I1011 10:57:24.676802 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:24.677080 master-1 kubenswrapper[4771]: I1011 10:57:24.676833 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d7dq\" (UniqueName: \"kubernetes.io/projected/ae49cc63-d351-440f-9334-4ef2550565a2-kube-api-access-6d7dq\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:24.677080 master-1 kubenswrapper[4771]: I1011 10:57:24.676857 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae49cc63-d351-440f-9334-4ef2550565a2-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:25.030110 master-1 kubenswrapper[4771]: I1011 10:57:25.030038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerStarted","Data":"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"} Oct 11 10:57:25.033677 master-1 kubenswrapper[4771]: I1011 
10:57:25.033596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-5f556" event={"ID":"ae49cc63-d351-440f-9334-4ef2550565a2","Type":"ContainerDied","Data":"bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea"} Oct 11 10:57:25.033677 master-1 kubenswrapper[4771]: I1011 10:57:25.033665 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7cd872d077a706534449020eb602438e4892d62ec0c9d5010cb1330868aeea" Oct 11 10:57:25.033677 master-1 kubenswrapper[4771]: I1011 10:57:25.033626 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-5f556" Oct 11 10:57:25.037215 master-1 kubenswrapper[4771]: I1011 10:57:25.037130 4771 generic.go:334] "Generic (PLEG): container finished" podID="709c362a-6ace-46bf-9f94-86852f78f6f2" containerID="99d58d9d6b8b62fa18ae8ba7508466dad2a9761e505b9274423ecba095a9de64" exitCode=0 Oct 11 10:57:25.037215 master-1 kubenswrapper[4771]: I1011 10:57:25.037179 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bwgtz" event={"ID":"709c362a-6ace-46bf-9f94-86852f78f6f2","Type":"ContainerDied","Data":"99d58d9d6b8b62fa18ae8ba7508466dad2a9761e505b9274423ecba095a9de64"} Oct 11 10:57:25.493351 master-0 kubenswrapper[4790]: I1011 10:57:25.492916 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.130.0.118:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:25.493351 master-0 kubenswrapper[4790]: I1011 10:57:25.492916 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.130.0.118:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:26.051022 master-1 kubenswrapper[4771]: I1011 10:57:26.050823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerStarted","Data":"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"} Oct 11 10:57:26.051022 master-1 kubenswrapper[4771]: I1011 10:57:26.050969 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerStarted","Data":"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"} Oct 11 10:57:26.631260 master-1 kubenswrapper[4771]: I1011 10:57:26.631203 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:26.735822 master-1 kubenswrapper[4771]: I1011 10:57:26.730167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts\") pod \"709c362a-6ace-46bf-9f94-86852f78f6f2\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " Oct 11 10:57:26.735822 master-1 kubenswrapper[4771]: I1011 10:57:26.730257 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glspf\" (UniqueName: \"kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf\") pod \"709c362a-6ace-46bf-9f94-86852f78f6f2\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " Oct 11 10:57:26.735822 master-1 kubenswrapper[4771]: I1011 10:57:26.730614 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data\") pod \"709c362a-6ace-46bf-9f94-86852f78f6f2\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " Oct 11 10:57:26.735822 master-1 
kubenswrapper[4771]: I1011 10:57:26.730812 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle\") pod \"709c362a-6ace-46bf-9f94-86852f78f6f2\" (UID: \"709c362a-6ace-46bf-9f94-86852f78f6f2\") " Oct 11 10:57:26.736343 master-1 kubenswrapper[4771]: I1011 10:57:26.736069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts" (OuterVolumeSpecName: "scripts") pod "709c362a-6ace-46bf-9f94-86852f78f6f2" (UID: "709c362a-6ace-46bf-9f94-86852f78f6f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:26.749459 master-1 kubenswrapper[4771]: I1011 10:57:26.749273 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf" (OuterVolumeSpecName: "kube-api-access-glspf") pod "709c362a-6ace-46bf-9f94-86852f78f6f2" (UID: "709c362a-6ace-46bf-9f94-86852f78f6f2"). InnerVolumeSpecName "kube-api-access-glspf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:26.771495 master-1 kubenswrapper[4771]: I1011 10:57:26.771340 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data" (OuterVolumeSpecName: "config-data") pod "709c362a-6ace-46bf-9f94-86852f78f6f2" (UID: "709c362a-6ace-46bf-9f94-86852f78f6f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:26.774741 master-1 kubenswrapper[4771]: I1011 10:57:26.774506 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "709c362a-6ace-46bf-9f94-86852f78f6f2" (UID: "709c362a-6ace-46bf-9f94-86852f78f6f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:26.834032 master-1 kubenswrapper[4771]: I1011 10:57:26.833937 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:26.834032 master-1 kubenswrapper[4771]: I1011 10:57:26.833979 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glspf\" (UniqueName: \"kubernetes.io/projected/709c362a-6ace-46bf-9f94-86852f78f6f2-kube-api-access-glspf\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:26.834032 master-1 kubenswrapper[4771]: I1011 10:57:26.833990 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:26.834032 master-1 kubenswrapper[4771]: I1011 10:57:26.833999 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c362a-6ace-46bf-9f94-86852f78f6f2-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:27.067857 master-1 kubenswrapper[4771]: I1011 10:57:27.067776 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bwgtz" event={"ID":"709c362a-6ace-46bf-9f94-86852f78f6f2","Type":"ContainerDied","Data":"4cefc06e3826c53b2bddfc65675ffb15401ad9ff18e58b5e2736c262f90fe5e5"} Oct 11 10:57:27.067857 
master-1 kubenswrapper[4771]: I1011 10:57:27.067847 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cefc06e3826c53b2bddfc65675ffb15401ad9ff18e58b5e2736c262f90fe5e5" Oct 11 10:57:27.068849 master-1 kubenswrapper[4771]: I1011 10:57:27.067865 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bwgtz" Oct 11 10:57:27.320643 master-2 kubenswrapper[4776]: I1011 10:57:27.320512 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:27.322539 master-2 kubenswrapper[4776]: I1011 10:57:27.321287 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" containerID="cri-o://8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" gracePeriod=30 Oct 11 10:57:27.322539 master-2 kubenswrapper[4776]: I1011 10:57:27.321697 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" containerID="cri-o://fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" gracePeriod=30 Oct 11 10:57:27.323764 master-0 kubenswrapper[4790]: I1011 10:57:27.323621 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:27.324499 master-0 kubenswrapper[4790]: I1011 10:57:27.323964 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-2" podUID="08f5fb34-a451-48f6-91f4-60d27bfd939c" containerName="nova-scheduler-scheduler" containerID="cri-o://4e0c106293c88310a5257072d57742a9c018d67c1675782ea7da7a234c8ae82b" gracePeriod=30 Oct 11 10:57:27.398300 master-0 kubenswrapper[4790]: I1011 10:57:27.398200 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:27.398863 
master-0 kubenswrapper[4790]: I1011 10:57:27.398786 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" containerID="cri-o://b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc" gracePeriod=30 Oct 11 10:57:27.399074 master-0 kubenswrapper[4790]: I1011 10:57:27.399038 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" containerID="cri-o://15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a" gracePeriod=30 Oct 11 10:57:27.602023 master-2 kubenswrapper[4776]: I1011 10:57:27.601970 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:57:28.079261 master-1 kubenswrapper[4771]: I1011 10:57:28.079202 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerStarted","Data":"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"} Oct 11 10:57:28.080445 master-1 kubenswrapper[4771]: I1011 10:57:28.080413 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 10:57:28.106951 master-1 kubenswrapper[4771]: I1011 10:57:28.106880 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.328467384 podStartE2EDuration="6.106859452s" podCreationTimestamp="2025-10-11 10:57:22 +0000 UTC" firstStartedPulling="2025-10-11 10:57:23.136187533 +0000 UTC m=+1875.110413974" lastFinishedPulling="2025-10-11 10:57:26.914579571 +0000 UTC m=+1878.888806042" observedRunningTime="2025-10-11 10:57:28.10544979 +0000 UTC m=+1880.079676281" watchObservedRunningTime="2025-10-11 10:57:28.106859452 +0000 UTC m=+1880.081085893" Oct 11 
10:57:28.186811 master-0 kubenswrapper[4790]: I1011 10:57:28.186677 4790 generic.go:334] "Generic (PLEG): container finished" podID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerID="15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a" exitCode=143 Oct 11 10:57:28.186811 master-0 kubenswrapper[4790]: I1011 10:57:28.186765 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerDied","Data":"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a"} Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276905 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerID="8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" exitCode=0 Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276940 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerID="fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" exitCode=143 Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276960 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.276987 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.277010 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" 
event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.277021 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894" Oct 11 10:57:28.334559 master-2 kubenswrapper[4776]: I1011 10:57:28.334508 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:28.489086 master-2 kubenswrapper[4776]: I1011 10:57:28.489010 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489086 master-2 kubenswrapper[4776]: I1011 10:57:28.489081 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489211 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489242 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: 
\"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489329 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489506 master-2 kubenswrapper[4776]: I1011 10:57:28.489446 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.490001 master-2 kubenswrapper[4776]: I1011 10:57:28.489434 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs" (OuterVolumeSpecName: "logs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:28.494474 master-2 kubenswrapper[4776]: I1011 10:57:28.494415 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf" (OuterVolumeSpecName: "kube-api-access-z75mf") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "kube-api-access-z75mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:28.512825 master-2 kubenswrapper[4776]: I1011 10:57:28.512685 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.517764 master-2 kubenswrapper[4776]: I1011 10:57:28.517708 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data" (OuterVolumeSpecName: "config-data") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.535088 master-2 kubenswrapper[4776]: I1011 10:57:28.534660 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.537923 master-2 kubenswrapper[4776]: I1011 10:57:28.537871 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591422 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591463 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591473 4776 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591482 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591539 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591547 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:29.284571 master-2 kubenswrapper[4776]: I1011 10:57:29.284518 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:29.331518 master-2 kubenswrapper[4776]: I1011 10:57:29.331454 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.338008 master-2 kubenswrapper[4776]: I1011 10:57:29.337957 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.382211 master-2 kubenswrapper[4776]: I1011 10:57:29.382149 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.383009 master-2 kubenswrapper[4776]: E1011 10:57:29.382960 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383009 master-2 kubenswrapper[4776]: I1011 10:57:29.383006 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383147 master-2 kubenswrapper[4776]: E1011 10:57:29.383034 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.383147 master-2 kubenswrapper[4776]: I1011 10:57:29.383049 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.383721 master-2 kubenswrapper[4776]: I1011 10:57:29.383618 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383794 master-2 kubenswrapper[4776]: I1011 10:57:29.383742 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.388908 master-2 kubenswrapper[4776]: I1011 10:57:29.388834 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:29.392377 master-2 kubenswrapper[4776]: I1011 10:57:29.392126 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:57:29.392377 master-2 kubenswrapper[4776]: I1011 10:57:29.392351 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 10:57:29.392690 master-2 kubenswrapper[4776]: I1011 10:57:29.392481 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 10:57:29.401482 master-2 kubenswrapper[4776]: I1011 10:57:29.401263 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.505189 master-2 kubenswrapper[4776]: I1011 10:57:29.505084 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505202 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505253 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505366 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505425 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505986 master-2 kubenswrapper[4776]: I1011 10:57:29.505474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607650 master-2 kubenswrapper[4776]: I1011 10:57:29.607461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607869 master-2 kubenswrapper[4776]: I1011 10:57:29.607595 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607869 master-2 kubenswrapper[4776]: I1011 10:57:29.607775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608096 master-2 kubenswrapper[4776]: I1011 10:57:29.608059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608708 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.611425 master-2 kubenswrapper[4776]: I1011 10:57:29.611391 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.611568 
master-2 kubenswrapper[4776]: I1011 10:57:29.611535 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.612527 master-2 kubenswrapper[4776]: I1011 10:57:29.612492 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.613978 master-2 kubenswrapper[4776]: I1011 10:57:29.613930 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.631011 master-2 kubenswrapper[4776]: I1011 10:57:29.630858 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.711877 master-2 kubenswrapper[4776]: I1011 10:57:29.711797 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:30.074515 master-2 kubenswrapper[4776]: I1011 10:57:30.074445 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" path="/var/lib/kubelet/pods/3a4feec2-a4ba-4906-90f7-0912fc708375/volumes" Oct 11 10:57:30.149387 master-2 kubenswrapper[4776]: I1011 10:57:30.149206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:30.207197 master-0 kubenswrapper[4790]: I1011 10:57:30.206976 4790 generic.go:334] "Generic (PLEG): container finished" podID="08f5fb34-a451-48f6-91f4-60d27bfd939c" containerID="4e0c106293c88310a5257072d57742a9c018d67c1675782ea7da7a234c8ae82b" exitCode=0 Oct 11 10:57:30.207197 master-0 kubenswrapper[4790]: I1011 10:57:30.207056 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"08f5fb34-a451-48f6-91f4-60d27bfd939c","Type":"ContainerDied","Data":"4e0c106293c88310a5257072d57742a9c018d67c1675782ea7da7a234c8ae82b"} Oct 11 10:57:30.292514 master-2 kubenswrapper[4776]: I1011 10:57:30.292448 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"7ef456ed31622c073c43a52e6023552ab07b72c1c1590dab0764c3f6b523bfe0"} Oct 11 10:57:30.292514 master-2 kubenswrapper[4776]: I1011 10:57:30.292503 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"46090c8fd9d1e9147775f8b19377ed02da1a8e89c02f5e59811e92adf89b80fe"} Oct 11 10:57:30.494552 master-0 kubenswrapper[4790]: I1011 10:57:30.494501 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:57:30.539097 master-0 kubenswrapper[4790]: I1011 10:57:30.535001 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.130.0.115:8775/\": read tcp 10.130.0.2:36562->10.130.0.115:8775: read: connection reset by peer" Oct 11 10:57:30.539693 master-0 kubenswrapper[4790]: I1011 10:57:30.539580 4790 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-2" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.130.0.115:8775/\": read tcp 10.130.0.2:36556->10.130.0.115:8775: read: connection reset by peer" Oct 11 10:57:30.602918 master-0 kubenswrapper[4790]: I1011 10:57:30.602827 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data\") pod \"08f5fb34-a451-48f6-91f4-60d27bfd939c\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " Oct 11 10:57:30.603173 master-0 kubenswrapper[4790]: I1011 10:57:30.602936 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj2b6\" (UniqueName: \"kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6\") pod \"08f5fb34-a451-48f6-91f4-60d27bfd939c\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " Oct 11 10:57:30.603173 master-0 kubenswrapper[4790]: I1011 10:57:30.603088 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle\") pod \"08f5fb34-a451-48f6-91f4-60d27bfd939c\" (UID: \"08f5fb34-a451-48f6-91f4-60d27bfd939c\") " Oct 11 10:57:30.607801 master-0 kubenswrapper[4790]: 
I1011 10:57:30.607696 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6" (OuterVolumeSpecName: "kube-api-access-xj2b6") pod "08f5fb34-a451-48f6-91f4-60d27bfd939c" (UID: "08f5fb34-a451-48f6-91f4-60d27bfd939c"). InnerVolumeSpecName "kube-api-access-xj2b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:30.637167 master-0 kubenswrapper[4790]: I1011 10:57:30.636803 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data" (OuterVolumeSpecName: "config-data") pod "08f5fb34-a451-48f6-91f4-60d27bfd939c" (UID: "08f5fb34-a451-48f6-91f4-60d27bfd939c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:30.638338 master-0 kubenswrapper[4790]: I1011 10:57:30.638284 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08f5fb34-a451-48f6-91f4-60d27bfd939c" (UID: "08f5fb34-a451-48f6-91f4-60d27bfd939c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:30.706037 master-0 kubenswrapper[4790]: I1011 10:57:30.705968 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:30.706037 master-0 kubenswrapper[4790]: I1011 10:57:30.706014 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f5fb34-a451-48f6-91f4-60d27bfd939c-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:30.706037 master-0 kubenswrapper[4790]: I1011 10:57:30.706026 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj2b6\" (UniqueName: \"kubernetes.io/projected/08f5fb34-a451-48f6-91f4-60d27bfd939c-kube-api-access-xj2b6\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.100758 master-0 kubenswrapper[4790]: I1011 10:57:31.098961 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:57:31.218251 master-0 kubenswrapper[4790]: I1011 10:57:31.218183 4790 generic.go:334] "Generic (PLEG): container finished" podID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerID="b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc" exitCode=0 Oct 11 10:57:31.218856 master-0 kubenswrapper[4790]: I1011 10:57:31.218275 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerDied","Data":"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc"} Oct 11 10:57:31.218856 master-0 kubenswrapper[4790]: I1011 10:57:31.218310 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"d221eb73-a42b-4c47-a912-4e47b88297a4","Type":"ContainerDied","Data":"a01fc17bb36e96804df4939bb484d9c50eb215a10fead9c510b32b80ed9bd4c0"} Oct 11 10:57:31.218856 master-0 kubenswrapper[4790]: I1011 10:57:31.218332 4790 scope.go:117] "RemoveContainer" containerID="b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc" Oct 11 10:57:31.218856 master-0 kubenswrapper[4790]: I1011 10:57:31.218488 4790 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:57:31.220001 master-0 kubenswrapper[4790]: I1011 10:57:31.219975 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs\") pod \"d221eb73-a42b-4c47-a912-4e47b88297a4\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " Oct 11 10:57:31.220065 master-0 kubenswrapper[4790]: I1011 10:57:31.220022 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle\") pod \"d221eb73-a42b-4c47-a912-4e47b88297a4\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " Oct 11 10:57:31.220106 master-0 kubenswrapper[4790]: I1011 10:57:31.220081 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data\") pod \"d221eb73-a42b-4c47-a912-4e47b88297a4\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " Oct 11 10:57:31.220169 master-0 kubenswrapper[4790]: I1011 10:57:31.220151 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7ttc\" (UniqueName: \"kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc\") pod \"d221eb73-a42b-4c47-a912-4e47b88297a4\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " Oct 11 10:57:31.220218 master-0 kubenswrapper[4790]: I1011 10:57:31.220205 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs\") pod \"d221eb73-a42b-4c47-a912-4e47b88297a4\" (UID: \"d221eb73-a42b-4c47-a912-4e47b88297a4\") " Oct 11 10:57:31.223393 master-0 kubenswrapper[4790]: I1011 10:57:31.222930 4790 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs" (OuterVolumeSpecName: "logs") pod "d221eb73-a42b-4c47-a912-4e47b88297a4" (UID: "d221eb73-a42b-4c47-a912-4e47b88297a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:31.225686 master-0 kubenswrapper[4790]: I1011 10:57:31.225622 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"08f5fb34-a451-48f6-91f4-60d27bfd939c","Type":"ContainerDied","Data":"7550d6a37aff89c94d6dda17710e1becb7a0d5864e9949954fef2e9819f39291"} Oct 11 10:57:31.225796 master-0 kubenswrapper[4790]: I1011 10:57:31.225688 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:57:31.231182 master-0 kubenswrapper[4790]: I1011 10:57:31.231147 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc" (OuterVolumeSpecName: "kube-api-access-z7ttc") pod "d221eb73-a42b-4c47-a912-4e47b88297a4" (UID: "d221eb73-a42b-4c47-a912-4e47b88297a4"). InnerVolumeSpecName "kube-api-access-z7ttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:31.242741 master-0 kubenswrapper[4790]: I1011 10:57:31.242661 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d221eb73-a42b-4c47-a912-4e47b88297a4" (UID: "d221eb73-a42b-4c47-a912-4e47b88297a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:31.247688 master-0 kubenswrapper[4790]: I1011 10:57:31.247656 4790 scope.go:117] "RemoveContainer" containerID="15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a" Oct 11 10:57:31.256306 master-0 kubenswrapper[4790]: I1011 10:57:31.256261 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data" (OuterVolumeSpecName: "config-data") pod "d221eb73-a42b-4c47-a912-4e47b88297a4" (UID: "d221eb73-a42b-4c47-a912-4e47b88297a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:31.271698 master-0 kubenswrapper[4790]: I1011 10:57:31.271649 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d221eb73-a42b-4c47-a912-4e47b88297a4" (UID: "d221eb73-a42b-4c47-a912-4e47b88297a4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:31.302014 master-2 kubenswrapper[4776]: I1011 10:57:31.301965 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"58b9f677007e17c15343e0f132c977a59571d3a476d7cd1ee8cb0938902307a6"} Oct 11 10:57:31.323087 master-0 kubenswrapper[4790]: I1011 10:57:31.322792 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d221eb73-a42b-4c47-a912-4e47b88297a4-logs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.323087 master-0 kubenswrapper[4790]: I1011 10:57:31.322865 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.323087 master-0 kubenswrapper[4790]: I1011 10:57:31.322886 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-config-data\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.323087 master-0 kubenswrapper[4790]: I1011 10:57:31.322905 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7ttc\" (UniqueName: \"kubernetes.io/projected/d221eb73-a42b-4c47-a912-4e47b88297a4-kube-api-access-z7ttc\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.323087 master-0 kubenswrapper[4790]: I1011 10:57:31.322926 4790 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d221eb73-a42b-4c47-a912-4e47b88297a4-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Oct 11 10:57:31.323830 master-1 kubenswrapper[4771]: I1011 10:57:31.323707 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 
10:57:31.331151 master-0 kubenswrapper[4790]: I1011 10:57:31.330320 4790 scope.go:117] "RemoveContainer" containerID="b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc" Oct 11 10:57:31.334167 master-0 kubenswrapper[4790]: E1011 10:57:31.333835 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc\": container with ID starting with b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc not found: ID does not exist" containerID="b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc" Oct 11 10:57:31.334167 master-0 kubenswrapper[4790]: I1011 10:57:31.334022 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc"} err="failed to get container status \"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc\": rpc error: code = NotFound desc = could not find container \"b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc\": container with ID starting with b47f7b57ab8ce5d9b1935a495e120a3eee5b2f462ead1f3d6820c61b709a1bbc not found: ID does not exist" Oct 11 10:57:31.334167 master-0 kubenswrapper[4790]: I1011 10:57:31.334106 4790 scope.go:117] "RemoveContainer" containerID="15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a" Oct 11 10:57:31.335426 master-0 kubenswrapper[4790]: E1011 10:57:31.335361 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a\": container with ID starting with 15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a not found: ID does not exist" containerID="15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a" Oct 11 10:57:31.335497 master-0 kubenswrapper[4790]: I1011 
10:57:31.335425 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a"} err="failed to get container status \"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a\": rpc error: code = NotFound desc = could not find container \"15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a\": container with ID starting with 15ed041bdb39ede94c3334a41421fc333a2760be59ed774b90e256cc011bfd8a not found: ID does not exist" Oct 11 10:57:31.335497 master-0 kubenswrapper[4790]: I1011 10:57:31.335448 4790 scope.go:117] "RemoveContainer" containerID="4e0c106293c88310a5257072d57742a9c018d67c1675782ea7da7a234c8ae82b" Oct 11 10:57:31.358160 master-0 kubenswrapper[4790]: I1011 10:57:31.358069 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:31.373887 master-0 kubenswrapper[4790]: I1011 10:57:31.372333 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:31.395462 master-0 kubenswrapper[4790]: I1011 10:57:31.395360 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: E1011 10:57:31.395988 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f5fb34-a451-48f6-91f4-60d27bfd939c" containerName="nova-scheduler-scheduler" Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: I1011 10:57:31.396017 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f5fb34-a451-48f6-91f4-60d27bfd939c" containerName="nova-scheduler-scheduler" Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: E1011 10:57:31.396118 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: I1011 10:57:31.396134 4790 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: E1011 10:57:31.396168 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" Oct 11 10:57:31.396317 master-0 kubenswrapper[4790]: I1011 10:57:31.396180 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" Oct 11 10:57:31.396617 master-0 kubenswrapper[4790]: I1011 10:57:31.396592 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-metadata" Oct 11 10:57:31.396654 master-0 kubenswrapper[4790]: I1011 10:57:31.396627 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" containerName="nova-metadata-log" Oct 11 10:57:31.396654 master-0 kubenswrapper[4790]: I1011 10:57:31.396641 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f5fb34-a451-48f6-91f4-60d27bfd939c" containerName="nova-scheduler-scheduler" Oct 11 10:57:31.398203 master-0 kubenswrapper[4790]: I1011 10:57:31.398161 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:57:31.410295 master-0 kubenswrapper[4790]: I1011 10:57:31.408725 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:57:31.426125 master-0 kubenswrapper[4790]: I1011 10:57:31.425820 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:31.443853 master-2 kubenswrapper[4776]: I1011 10:57:31.443743 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.443720597 podStartE2EDuration="2.443720597s" podCreationTimestamp="2025-10-11 10:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:31.332429434 +0000 UTC m=+1886.116856163" watchObservedRunningTime="2025-10-11 10:57:31.443720597 +0000 UTC m=+1886.228147306" Oct 11 10:57:31.445316 master-2 kubenswrapper[4776]: I1011 10:57:31.445268 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:31.445547 master-2 kubenswrapper[4776]: I1011 10:57:31.445518 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" containerID="cri-o://a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" gracePeriod=10 Oct 11 10:57:31.528218 master-0 kubenswrapper[4790]: I1011 10:57:31.527796 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw66q\" (UniqueName: \"kubernetes.io/projected/b9948bac-db47-43c4-8ff5-611d5b07c46a-kube-api-access-qw66q\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.528218 master-0 kubenswrapper[4790]: I1011 
10:57:31.528025 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.528218 master-0 kubenswrapper[4790]: I1011 10:57:31.528097 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-config-data\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.567766 master-0 kubenswrapper[4790]: I1011 10:57:31.567593 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:31.572588 master-0 kubenswrapper[4790]: I1011 10:57:31.572520 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:31.597122 master-0 kubenswrapper[4790]: I1011 10:57:31.597034 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:31.598581 master-0 kubenswrapper[4790]: I1011 10:57:31.598540 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:57:31.602154 master-0 kubenswrapper[4790]: I1011 10:57:31.602110 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:57:31.602766 master-0 kubenswrapper[4790]: I1011 10:57:31.602742 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 10:57:31.620193 master-0 kubenswrapper[4790]: I1011 10:57:31.620133 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:31.635176 master-0 kubenswrapper[4790]: I1011 10:57:31.635115 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-config-data\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.635176 master-0 kubenswrapper[4790]: I1011 10:57:31.635179 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.635581 master-0 kubenswrapper[4790]: I1011 10:57:31.635322 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw66q\" (UniqueName: \"kubernetes.io/projected/b9948bac-db47-43c4-8ff5-611d5b07c46a-kube-api-access-qw66q\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.635581 master-0 kubenswrapper[4790]: I1011 10:57:31.635359 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-logs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.635581 master-0 kubenswrapper[4790]: I1011 10:57:31.635398 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqr9\" (UniqueName: \"kubernetes.io/projected/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-kube-api-access-4jqr9\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.635581 master-0 kubenswrapper[4790]: I1011 10:57:31.635423 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.635581 master-0 kubenswrapper[4790]: I1011 10:57:31.635455 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.635798 master-0 kubenswrapper[4790]: I1011 10:57:31.635577 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-config-data\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.639377 master-0 kubenswrapper[4790]: I1011 10:57:31.639343 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-config-data\") pod 
\"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.640334 master-0 kubenswrapper[4790]: I1011 10:57:31.640282 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9948bac-db47-43c4-8ff5-611d5b07c46a-combined-ca-bundle\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.658636 master-0 kubenswrapper[4790]: I1011 10:57:31.658590 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw66q\" (UniqueName: \"kubernetes.io/projected/b9948bac-db47-43c4-8ff5-611d5b07c46a-kube-api-access-qw66q\") pod \"nova-scheduler-2\" (UID: \"b9948bac-db47-43c4-8ff5-611d5b07c46a\") " pod="openstack/nova-scheduler-2" Oct 11 10:57:31.737065 master-0 kubenswrapper[4790]: I1011 10:57:31.736986 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-config-data\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.737065 master-0 kubenswrapper[4790]: I1011 10:57:31.737057 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.737368 master-0 kubenswrapper[4790]: I1011 10:57:31.737145 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-logs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.737368 master-0 
kubenswrapper[4790]: I1011 10:57:31.737185 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqr9\" (UniqueName: \"kubernetes.io/projected/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-kube-api-access-4jqr9\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.737368 master-0 kubenswrapper[4790]: I1011 10:57:31.737216 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.738022 master-0 kubenswrapper[4790]: I1011 10:57:31.737985 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-logs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.741245 master-0 kubenswrapper[4790]: I1011 10:57:31.741199 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-config-data\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.741690 master-0 kubenswrapper[4790]: I1011 10:57:31.741650 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-nova-metadata-tls-certs\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.741770 master-0 kubenswrapper[4790]: I1011 10:57:31.741684 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-combined-ca-bundle\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.746000 master-0 kubenswrapper[4790]: I1011 10:57:31.745946 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-2" Oct 11 10:57:31.768032 master-0 kubenswrapper[4790]: I1011 10:57:31.767966 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqr9\" (UniqueName: \"kubernetes.io/projected/720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a-kube-api-access-4jqr9\") pod \"nova-metadata-2\" (UID: \"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a\") " pod="openstack/nova-metadata-2" Oct 11 10:57:31.920526 master-0 kubenswrapper[4790]: I1011 10:57:31.919993 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-2" Oct 11 10:57:32.212380 master-2 kubenswrapper[4776]: E1011 10:57:32.212277 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.212856 master-2 kubenswrapper[4776]: E1011 10:57:32.212329 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.212856 master-2 kubenswrapper[4776]: E1011 10:57:32.212614 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.213071 master-2 kubenswrapper[4776]: E1011 10:57:32.212960 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.223485 master-2 kubenswrapper[4776]: E1011 10:57:32.223236 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.224362 master-2 kubenswrapper[4776]: E1011 10:57:32.223705 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.231773 master-2 kubenswrapper[4776]: E1011 10:57:32.229548 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find 
data in memory cache]" Oct 11 10:57:32.240112 master-0 kubenswrapper[4790]: I1011 10:57:32.240036 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-2"] Oct 11 10:57:32.250492 master-0 kubenswrapper[4790]: W1011 10:57:32.250437 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9948bac_db47_43c4_8ff5_611d5b07c46a.slice/crio-1312fae28a8f53e7d49345f0efdd83d317a71323723d978a9985c0f156a93040 WatchSource:0}: Error finding container 1312fae28a8f53e7d49345f0efdd83d317a71323723d978a9985c0f156a93040: Status 404 returned error can't find the container with id 1312fae28a8f53e7d49345f0efdd83d317a71323723d978a9985c0f156a93040 Oct 11 10:57:32.302378 master-0 kubenswrapper[4790]: I1011 10:57:32.302322 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f5fb34-a451-48f6-91f4-60d27bfd939c" path="/var/lib/kubelet/pods/08f5fb34-a451-48f6-91f4-60d27bfd939c/volumes" Oct 11 10:57:32.303044 master-0 kubenswrapper[4790]: I1011 10:57:32.302976 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d221eb73-a42b-4c47-a912-4e47b88297a4" path="/var/lib/kubelet/pods/d221eb73-a42b-4c47-a912-4e47b88297a4/volumes" Oct 11 10:57:32.314040 master-2 kubenswrapper[4776]: I1011 10:57:32.313780 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" exitCode=137 Oct 11 10:57:32.314040 master-2 kubenswrapper[4776]: I1011 10:57:32.313851 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9"} Oct 11 10:57:32.316220 master-2 kubenswrapper[4776]: I1011 10:57:32.315982 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerID="a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" exitCode=0 Oct 11 10:57:32.316220 master-2 kubenswrapper[4776]: I1011 10:57:32.316049 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142"} Oct 11 10:57:32.405637 master-0 kubenswrapper[4790]: I1011 10:57:32.405565 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-2"] Oct 11 10:57:32.603199 master-2 kubenswrapper[4776]: I1011 10:57:32.602193 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:57:32.651860 master-2 kubenswrapper[4776]: I1011 10:57:32.651815 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:57:32.664966 master-2 kubenswrapper[4776]: I1011 10:57:32.664920 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:57:32.782888 master-2 kubenswrapper[4776]: I1011 10:57:32.782821 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.782908 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.782994 
4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783062 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783103 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783124 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.799135 master-2 kubenswrapper[4776]: I1011 10:57:32.799076 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh" (OuterVolumeSpecName: "kube-api-access-xcqbh") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "kube-api-access-xcqbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:32.827561 master-2 kubenswrapper[4776]: I1011 10:57:32.827500 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.842748 master-2 kubenswrapper[4776]: I1011 10:57:32.842604 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.845651 master-2 kubenswrapper[4776]: I1011 10:57:32.845562 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.854656 master-2 kubenswrapper[4776]: I1011 10:57:32.854591 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config" (OuterVolumeSpecName: "config") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.861280 master-2 kubenswrapper[4776]: I1011 10:57:32.861224 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886076 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886124 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886135 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886143 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886151 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886159 
4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.968192 master-2 kubenswrapper[4776]: I1011 10:57:32.968148 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.088207 master-2 kubenswrapper[4776]: I1011 10:57:33.088162 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088508 master-2 kubenswrapper[4776]: I1011 10:57:33.088489 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088823 master-2 kubenswrapper[4776]: I1011 10:57:33.088805 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088995 master-2 kubenswrapper[4776]: I1011 10:57:33.088977 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.090819 master-2 kubenswrapper[4776]: I1011 10:57:33.090791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl" (OuterVolumeSpecName: "kube-api-access-p2tgl") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "kube-api-access-p2tgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:33.091196 master-2 kubenswrapper[4776]: I1011 10:57:33.091149 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts" (OuterVolumeSpecName: "scripts") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.173298 master-2 kubenswrapper[4776]: I1011 10:57:33.173244 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.191009 master-2 kubenswrapper[4776]: I1011 10:57:33.190956 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data" (OuterVolumeSpecName: "config-data") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.191297 master-2 kubenswrapper[4776]: I1011 10:57:33.191249 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191296 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191313 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191325 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.283639 master-0 kubenswrapper[4790]: I1011 10:57:33.283507 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a","Type":"ContainerStarted","Data":"ae61c5515b9a2d374036db159ab569f90c948aa23970bdf132ca34ea4b15dba5"} Oct 11 10:57:33.284602 master-0 kubenswrapper[4790]: I1011 10:57:33.283656 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" event={"ID":"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a","Type":"ContainerStarted","Data":"eb81c163f9c6609135566926180071a0096d86a45537992958ef439100113427"} Oct 11 10:57:33.284602 master-0 kubenswrapper[4790]: I1011 10:57:33.283678 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-2" 
event={"ID":"720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a","Type":"ContainerStarted","Data":"09d665290ab6ab8b3167b9064402e3deb7f949d44b8db9a7b0fef350f6ada2c2"} Oct 11 10:57:33.288309 master-0 kubenswrapper[4790]: I1011 10:57:33.288210 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"b9948bac-db47-43c4-8ff5-611d5b07c46a","Type":"ContainerStarted","Data":"83885f8b5bfcbb13a3cc61e2c11507b2a0f6ecbae2c6a4917e213d1c701fa61c"} Oct 11 10:57:33.288409 master-0 kubenswrapper[4790]: I1011 10:57:33.288318 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-2" event={"ID":"b9948bac-db47-43c4-8ff5-611d5b07c46a","Type":"ContainerStarted","Data":"1312fae28a8f53e7d49345f0efdd83d317a71323723d978a9985c0f156a93040"} Oct 11 10:57:33.322652 master-0 kubenswrapper[4790]: I1011 10:57:33.322433 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-2" podStartSLOduration=2.322413584 podStartE2EDuration="2.322413584s" podCreationTimestamp="2025-10-11 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:33.316465419 +0000 UTC m=+1129.870925721" watchObservedRunningTime="2025-10-11 10:57:33.322413584 +0000 UTC m=+1129.876873876" Oct 11 10:57:33.326274 master-2 kubenswrapper[4776]: I1011 10:57:33.326213 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be"} Oct 11 10:57:33.326274 master-2 kubenswrapper[4776]: I1011 10:57:33.326223 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:57:33.326999 master-2 kubenswrapper[4776]: I1011 10:57:33.326284 4776 scope.go:117] "RemoveContainer" containerID="a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" Oct 11 10:57:33.329804 master-2 kubenswrapper[4776]: I1011 10:57:33.329758 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df"} Oct 11 10:57:33.329934 master-2 kubenswrapper[4776]: I1011 10:57:33.329908 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.347536 master-2 kubenswrapper[4776]: I1011 10:57:33.347135 4776 scope.go:117] "RemoveContainer" containerID="c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79" Oct 11 10:57:33.358630 master-2 kubenswrapper[4776]: I1011 10:57:33.358586 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:57:33.368606 master-0 kubenswrapper[4790]: I1011 10:57:33.368421 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-2" podStartSLOduration=2.368376579 podStartE2EDuration="2.368376579s" podCreationTimestamp="2025-10-11 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:33.352866381 +0000 UTC m=+1129.907326763" watchObservedRunningTime="2025-10-11 10:57:33.368376579 +0000 UTC m=+1129.922836871" Oct 11 10:57:33.369754 master-2 kubenswrapper[4776]: I1011 10:57:33.369694 4776 scope.go:117] "RemoveContainer" containerID="b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" Oct 11 10:57:33.377565 master-2 kubenswrapper[4776]: I1011 10:57:33.377531 4776 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:33.385352 master-2 kubenswrapper[4776]: I1011 10:57:33.385311 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:33.413103 master-2 kubenswrapper[4776]: I1011 10:57:33.413010 4776 scope.go:117] "RemoveContainer" containerID="37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" Oct 11 10:57:33.431351 master-2 kubenswrapper[4776]: I1011 10:57:33.431299 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.438664 master-2 kubenswrapper[4776]: I1011 10:57:33.438618 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.440451 master-2 kubenswrapper[4776]: I1011 10:57:33.440423 4776 scope.go:117] "RemoveContainer" containerID="326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" Oct 11 10:57:33.472846 master-2 kubenswrapper[4776]: I1011 10:57:33.470762 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.473280 master-2 kubenswrapper[4776]: E1011 10:57:33.473241 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="init" Oct 11 10:57:33.473280 master-2 kubenswrapper[4776]: I1011 10:57:33.473273 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="init" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473290 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473296 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473313 4776 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473320 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473338 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473345 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473353 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473359 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473386 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473392 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473562 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473575 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 
10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473583 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473599 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473618 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.475890 master-2 kubenswrapper[4776]: I1011 10:57:33.475858 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.475985 master-2 kubenswrapper[4776]: I1011 10:57:33.475891 4776 scope.go:117] "RemoveContainer" containerID="e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" Oct 11 10:57:33.482982 master-2 kubenswrapper[4776]: I1011 10:57:33.482889 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 11 10:57:33.482982 master-2 kubenswrapper[4776]: I1011 10:57:33.482960 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Oct 11 10:57:33.483443 master-2 kubenswrapper[4776]: I1011 10:57:33.483414 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Oct 11 10:57:33.483907 master-2 kubenswrapper[4776]: I1011 10:57:33.483860 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 11 10:57:33.497362 master-2 kubenswrapper[4776]: I1011 10:57:33.497302 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.553940 master-1 kubenswrapper[4771]: I1011 10:57:33.553880 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-1" Oct 11 10:57:33.554748 master-1 kubenswrapper[4771]: I1011 10:57:33.554008 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1" Oct 11 10:57:33.561146 master-1 kubenswrapper[4771]: I1011 10:57:33.561047 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1" Oct 11 10:57:33.562084 master-1 kubenswrapper[4771]: I1011 10:57:33.562030 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1" Oct 11 10:57:33.602894 master-2 kubenswrapper[4776]: I1011 10:57:33.602733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.602894 master-2 kubenswrapper[4776]: I1011 10:57:33.602808 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603323 master-2 kubenswrapper[4776]: I1011 10:57:33.603201 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603488 master-2 kubenswrapper[4776]: I1011 10:57:33.603432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" 
(UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603689 master-2 kubenswrapper[4776]: I1011 10:57:33.603643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.604043 master-2 kubenswrapper[4776]: I1011 10:57:33.604003 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707565 master-2 kubenswrapper[4776]: I1011 10:57:33.707445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707613 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707643 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707697 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707731 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711656 master-2 kubenswrapper[4776]: I1011 10:57:33.711616 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711930 master-2 kubenswrapper[4776]: I1011 10:57:33.711883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711990 master-2 kubenswrapper[4776]: I1011 10:57:33.711913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: 
\"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.712196 master-2 kubenswrapper[4776]: I1011 10:57:33.712172 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.713307 master-2 kubenswrapper[4776]: I1011 10:57:33.713246 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.731616 master-2 kubenswrapper[4776]: I1011 10:57:33.731389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.812504 master-2 kubenswrapper[4776]: I1011 10:57:33.812409 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:34.072387 master-2 kubenswrapper[4776]: I1011 10:57:34.072335 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" path="/var/lib/kubelet/pods/43ef2dc9-3563-4188-8d91-2fc18c396a4a/volumes" Oct 11 10:57:34.073110 master-2 kubenswrapper[4776]: I1011 10:57:34.073082 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" path="/var/lib/kubelet/pods/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3/volumes" Oct 11 10:57:34.245883 master-2 kubenswrapper[4776]: I1011 10:57:34.245830 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:34.247123 master-2 kubenswrapper[4776]: W1011 10:57:34.247078 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a33cf0a_51bc_4906_9c65_b043d38426a0.slice/crio-19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e WatchSource:0}: Error finding container 19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e: Status 404 returned error can't find the container with id 19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e Oct 11 10:57:34.345093 master-2 kubenswrapper[4776]: I1011 10:57:34.345032 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e"} Oct 11 10:57:34.483017 master-0 kubenswrapper[4790]: I1011 10:57:34.482931 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:57:34.483868 master-0 kubenswrapper[4790]: I1011 10:57:34.483832 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:57:34.489412 master-0 kubenswrapper[4790]: I1011 
10:57:34.489349 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:57:34.493946 master-0 kubenswrapper[4790]: I1011 10:57:34.493885 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:57:35.312304 master-0 kubenswrapper[4790]: I1011 10:57:35.312244 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:57:35.323800 master-0 kubenswrapper[4790]: I1011 10:57:35.323653 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:57:35.354356 master-2 kubenswrapper[4776]: I1011 10:57:35.354276 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"c11d21bdc93d76640ffec38d2311b1bb73741a7af0162d358c439a7fcb73f54d"} Oct 11 10:57:36.366453 master-2 kubenswrapper[4776]: I1011 10:57:36.366288 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"0ca0b1839df140e375b9d682f6202be40a80947d84dc2a6429db22f10a18baf6"} Oct 11 10:57:36.366453 master-2 kubenswrapper[4776]: I1011 10:57:36.366375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"b4607e1e750b4ae0f2b91f175a40a1770e9e4c8c24782d97344e5983b46bec9a"} Oct 11 10:57:36.746547 master-0 kubenswrapper[4790]: I1011 10:57:36.746421 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-2" Oct 11 10:57:36.920566 master-0 kubenswrapper[4790]: I1011 10:57:36.920478 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:57:36.920877 master-0 kubenswrapper[4790]: I1011 10:57:36.920579 4790 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-2" Oct 11 10:57:37.385505 master-2 kubenswrapper[4776]: I1011 10:57:37.385395 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"a17071c798c0cf44093889e34a7a24c7176010cd96550fb553cc6fc668d1566f"} Oct 11 10:57:37.441035 master-2 kubenswrapper[4776]: I1011 10:57:37.440861 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.8174366929999999 podStartE2EDuration="4.440824506s" podCreationTimestamp="2025-10-11 10:57:33 +0000 UTC" firstStartedPulling="2025-10-11 10:57:34.24947563 +0000 UTC m=+1889.033902329" lastFinishedPulling="2025-10-11 10:57:36.872863413 +0000 UTC m=+1891.657290142" observedRunningTime="2025-10-11 10:57:37.424530645 +0000 UTC m=+1892.208957394" watchObservedRunningTime="2025-10-11 10:57:37.440824506 +0000 UTC m=+1892.225251225" Oct 11 10:57:39.712781 master-2 kubenswrapper[4776]: I1011 10:57:39.712638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:57:39.712781 master-2 kubenswrapper[4776]: I1011 10:57:39.712792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:57:40.726943 master-2 kubenswrapper[4776]: I1011 10:57:40.726834 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="bb645f14-616d-425d-ae7d-5475565669f8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.0.179:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:40.727844 master-2 kubenswrapper[4776]: I1011 10:57:40.726927 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="bb645f14-616d-425d-ae7d-5475565669f8" containerName="nova-api-log" 
probeResult="failure" output="Get \"https://10.128.0.179:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:41.747315 master-0 kubenswrapper[4790]: I1011 10:57:41.747242 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-2" Oct 11 10:57:41.785606 master-0 kubenswrapper[4790]: I1011 10:57:41.784538 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-2" Oct 11 10:57:41.921357 master-0 kubenswrapper[4790]: I1011 10:57:41.921287 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-2" Oct 11 10:57:41.921357 master-0 kubenswrapper[4790]: I1011 10:57:41.921368 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-2" Oct 11 10:57:42.410732 master-0 kubenswrapper[4790]: I1011 10:57:42.410642 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-2" Oct 11 10:57:42.453382 master-1 kubenswrapper[4771]: I1011 10:57:42.453264 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:42.455621 master-1 kubenswrapper[4771]: I1011 10:57:42.453556 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-1" podUID="82101e85-023a-4398-bb5e-4162dea69f46" containerName="nova-scheduler-scheduler" containerID="cri-o://ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" gracePeriod=30 Oct 11 10:57:42.936057 master-0 kubenswrapper[4790]: I1011 10:57:42.935951 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-2" podUID="720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.130.0.120:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 
10:57:42.936748 master-0 kubenswrapper[4790]: I1011 10:57:42.935988 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-2" podUID="720d2b85-b5e9-46cd-8d71-f5ef6a31cd5a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.130.0.120:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:43.438780 master-1 kubenswrapper[4771]: E1011 10:57:43.438722 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:43.441262 master-1 kubenswrapper[4771]: E1011 10:57:43.441178 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:43.443167 master-1 kubenswrapper[4771]: E1011 10:57:43.443133 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 10:57:43.443439 master-1 kubenswrapper[4771]: E1011 10:57:43.443411 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-1" podUID="82101e85-023a-4398-bb5e-4162dea69f46" containerName="nova-scheduler-scheduler" Oct 11 
10:57:48.234411 master-1 kubenswrapper[4771]: I1011 10:57:48.234334 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:48.312209 master-1 kubenswrapper[4771]: I1011 10:57:48.312110 4771 generic.go:334] "Generic (PLEG): container finished" podID="82101e85-023a-4398-bb5e-4162dea69f46" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" exitCode=0 Oct 11 10:57:48.312209 master-1 kubenswrapper[4771]: I1011 10:57:48.312177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"82101e85-023a-4398-bb5e-4162dea69f46","Type":"ContainerDied","Data":"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc"} Oct 11 10:57:48.312569 master-1 kubenswrapper[4771]: I1011 10:57:48.312244 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:48.312569 master-1 kubenswrapper[4771]: I1011 10:57:48.312254 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"82101e85-023a-4398-bb5e-4162dea69f46","Type":"ContainerDied","Data":"344c84a1c3165ebb54446b68471bee6244a2b1504eca4e8fa46ae99da6e9b301"} Oct 11 10:57:48.312569 master-1 kubenswrapper[4771]: I1011 10:57:48.312269 4771 scope.go:117] "RemoveContainer" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" Oct 11 10:57:48.340663 master-1 kubenswrapper[4771]: I1011 10:57:48.340562 4771 scope.go:117] "RemoveContainer" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" Oct 11 10:57:48.341338 master-1 kubenswrapper[4771]: E1011 10:57:48.341280 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc\": container with ID starting with 
ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc not found: ID does not exist" containerID="ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc" Oct 11 10:57:48.341526 master-1 kubenswrapper[4771]: I1011 10:57:48.341486 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc"} err="failed to get container status \"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc\": rpc error: code = NotFound desc = could not find container \"ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc\": container with ID starting with ceb4ba3a78d59ca572872a25024937ab120ba7e7f85e67747ea7ab0ace9a66fc not found: ID does not exist" Oct 11 10:57:48.410984 master-1 kubenswrapper[4771]: I1011 10:57:48.410721 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh5m8\" (UniqueName: \"kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8\") pod \"82101e85-023a-4398-bb5e-4162dea69f46\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " Oct 11 10:57:48.410984 master-1 kubenswrapper[4771]: I1011 10:57:48.410843 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data\") pod \"82101e85-023a-4398-bb5e-4162dea69f46\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " Oct 11 10:57:48.410984 master-1 kubenswrapper[4771]: I1011 10:57:48.410941 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle\") pod \"82101e85-023a-4398-bb5e-4162dea69f46\" (UID: \"82101e85-023a-4398-bb5e-4162dea69f46\") " Oct 11 10:57:48.417343 master-1 kubenswrapper[4771]: I1011 10:57:48.417280 4771 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8" (OuterVolumeSpecName: "kube-api-access-kh5m8") pod "82101e85-023a-4398-bb5e-4162dea69f46" (UID: "82101e85-023a-4398-bb5e-4162dea69f46"). InnerVolumeSpecName "kube-api-access-kh5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:48.450702 master-1 kubenswrapper[4771]: I1011 10:57:48.450223 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data" (OuterVolumeSpecName: "config-data") pod "82101e85-023a-4398-bb5e-4162dea69f46" (UID: "82101e85-023a-4398-bb5e-4162dea69f46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:48.453888 master-1 kubenswrapper[4771]: I1011 10:57:48.453813 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82101e85-023a-4398-bb5e-4162dea69f46" (UID: "82101e85-023a-4398-bb5e-4162dea69f46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:48.515567 master-1 kubenswrapper[4771]: I1011 10:57:48.515410 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh5m8\" (UniqueName: \"kubernetes.io/projected/82101e85-023a-4398-bb5e-4162dea69f46-kube-api-access-kh5m8\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:48.515567 master-1 kubenswrapper[4771]: I1011 10:57:48.515476 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:48.515567 master-1 kubenswrapper[4771]: I1011 10:57:48.515496 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101e85-023a-4398-bb5e-4162dea69f46-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 10:57:48.657584 master-1 kubenswrapper[4771]: I1011 10:57:48.656882 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:48.667644 master-1 kubenswrapper[4771]: I1011 10:57:48.667460 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:48.693539 master-1 kubenswrapper[4771]: I1011 10:57:48.693428 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:48.693890 master-1 kubenswrapper[4771]: E1011 10:57:48.693849 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82101e85-023a-4398-bb5e-4162dea69f46" containerName="nova-scheduler-scheduler" Oct 11 10:57:48.693890 master-1 kubenswrapper[4771]: I1011 10:57:48.693879 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="82101e85-023a-4398-bb5e-4162dea69f46" containerName="nova-scheduler-scheduler" Oct 11 10:57:48.694047 master-1 kubenswrapper[4771]: E1011 10:57:48.693894 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="709c362a-6ace-46bf-9f94-86852f78f6f2" containerName="nova-manage" Oct 11 10:57:48.694047 master-1 kubenswrapper[4771]: I1011 10:57:48.693906 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="709c362a-6ace-46bf-9f94-86852f78f6f2" containerName="nova-manage" Oct 11 10:57:48.694047 master-1 kubenswrapper[4771]: E1011 10:57:48.693957 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae49cc63-d351-440f-9334-4ef2550565a2" containerName="nova-manage" Oct 11 10:57:48.694047 master-1 kubenswrapper[4771]: I1011 10:57:48.693966 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae49cc63-d351-440f-9334-4ef2550565a2" containerName="nova-manage" Oct 11 10:57:48.694303 master-1 kubenswrapper[4771]: I1011 10:57:48.694150 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="82101e85-023a-4398-bb5e-4162dea69f46" containerName="nova-scheduler-scheduler" Oct 11 10:57:48.694303 master-1 kubenswrapper[4771]: I1011 10:57:48.694171 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae49cc63-d351-440f-9334-4ef2550565a2" containerName="nova-manage" Oct 11 10:57:48.694303 master-1 kubenswrapper[4771]: I1011 10:57:48.694188 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="709c362a-6ace-46bf-9f94-86852f78f6f2" containerName="nova-manage" Oct 11 10:57:48.695013 master-1 kubenswrapper[4771]: I1011 10:57:48.694967 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 11 10:57:48.700251 master-1 kubenswrapper[4771]: I1011 10:57:48.700119 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:57:48.745764 master-1 kubenswrapper[4771]: I1011 10:57:48.714261 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 11 10:57:48.823470 master-1 kubenswrapper[4771]: I1011 10:57:48.822945 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-config-data\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:48.823470 master-1 kubenswrapper[4771]: I1011 10:57:48.823067 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx7j\" (UniqueName: \"kubernetes.io/projected/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-kube-api-access-kmx7j\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:48.823470 master-1 kubenswrapper[4771]: I1011 10:57:48.823142 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:48.926417 master-1 kubenswrapper[4771]: I1011 10:57:48.926184 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-config-data\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1" Oct 11 10:57:48.926417 
master-1 kubenswrapper[4771]: I1011 10:57:48.926300 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx7j\" (UniqueName: \"kubernetes.io/projected/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-kube-api-access-kmx7j\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1"
Oct 11 10:57:48.926417 master-1 kubenswrapper[4771]: I1011 10:57:48.926376 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1"
Oct 11 10:57:48.931044 master-1 kubenswrapper[4771]: I1011 10:57:48.931000 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1"
Oct 11 10:57:48.933503 master-1 kubenswrapper[4771]: I1011 10:57:48.933468 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-config-data\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1"
Oct 11 10:57:48.959507 master-1 kubenswrapper[4771]: I1011 10:57:48.959317 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmx7j\" (UniqueName: \"kubernetes.io/projected/77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c-kube-api-access-kmx7j\") pod \"nova-scheduler-1\" (UID: \"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c\") " pod="openstack/nova-scheduler-1"
Oct 11 10:57:49.058858 master-1 kubenswrapper[4771]: I1011 10:57:49.058763 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1"
Oct 11 10:57:49.534617 master-1 kubenswrapper[4771]: I1011 10:57:49.534543 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"]
Oct 11 10:57:49.535326 master-1 kubenswrapper[4771]: W1011 10:57:49.534639 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c00ce9_c5ba_47fe_a5c5_98ea5f96a36c.slice/crio-7ac397c3e61ae52c0aeda69f07c9f076ee92106f951040b6ba8a1bb32f43ce21 WatchSource:0}: Error finding container 7ac397c3e61ae52c0aeda69f07c9f076ee92106f951040b6ba8a1bb32f43ce21: Status 404 returned error can't find the container with id 7ac397c3e61ae52c0aeda69f07c9f076ee92106f951040b6ba8a1bb32f43ce21
Oct 11 10:57:49.719738 master-2 kubenswrapper[4776]: I1011 10:57:49.719665 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2"
Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.719861 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2"
Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.720181 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2"
Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.720204 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2"
Oct 11 10:57:49.725004 master-2 kubenswrapper[4776]: I1011 10:57:49.724973 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2"
Oct 11 10:57:49.725208 master-2 kubenswrapper[4776]: I1011 10:57:49.725179 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2"
Oct 11 10:57:49.818159 master-0 kubenswrapper[4790]: I1011 10:57:49.818045 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:49.819039 master-0 kubenswrapper[4790]: I1011 10:57:49.818457 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-1" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-log" containerID="cri-o://7c441b122cde11d0c0dd59e93d58166fba491455c97757aa05c250678bb0d192" gracePeriod=30
Oct 11 10:57:49.819222 master-0 kubenswrapper[4790]: I1011 10:57:49.819170 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-1" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-api" containerID="cri-o://9c19ce238ad447003ee3f2fc9c2488f5e38167272b8ba8dbab542ad52ae186e6" gracePeriod=30
Oct 11 10:57:50.336186 master-1 kubenswrapper[4771]: I1011 10:57:50.336102 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c","Type":"ContainerStarted","Data":"e7e5c8e5af7f1480b4c2c49099a0be8cb889cec0c16e033ef90c330b388c24bd"}
Oct 11 10:57:50.336186 master-1 kubenswrapper[4771]: I1011 10:57:50.336174 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"77c00ce9-c5ba-47fe-a5c5-98ea5f96a36c","Type":"ContainerStarted","Data":"7ac397c3e61ae52c0aeda69f07c9f076ee92106f951040b6ba8a1bb32f43ce21"}
Oct 11 10:57:50.371973 master-1 kubenswrapper[4771]: I1011 10:57:50.371864 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" podStartSLOduration=2.371833475 podStartE2EDuration="2.371833475s" podCreationTimestamp="2025-10-11 10:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:50.363762743 +0000 UTC m=+1902.337989234" watchObservedRunningTime="2025-10-11 10:57:50.371833475 +0000 UTC m=+1902.346059956"
Oct 11 10:57:50.464205 master-1 kubenswrapper[4771]: I1011 10:57:50.464094 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82101e85-023a-4398-bb5e-4162dea69f46" path="/var/lib/kubelet/pods/82101e85-023a-4398-bb5e-4162dea69f46/volumes"
Oct 11 10:57:50.472484 master-0 kubenswrapper[4790]: I1011 10:57:50.472394 4790 generic.go:334] "Generic (PLEG): container finished" podID="44bcb391-53f2-438c-b46e-1f3208011f01" containerID="7c441b122cde11d0c0dd59e93d58166fba491455c97757aa05c250678bb0d192" exitCode=143
Oct 11 10:57:50.472922 master-0 kubenswrapper[4790]: I1011 10:57:50.472486 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerDied","Data":"7c441b122cde11d0c0dd59e93d58166fba491455c97757aa05c250678bb0d192"}
Oct 11 10:57:51.927788 master-0 kubenswrapper[4790]: I1011 10:57:51.927678 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-2"
Oct 11 10:57:51.928415 master-0 kubenswrapper[4790]: I1011 10:57:51.927849 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-2"
Oct 11 10:57:51.934613 master-0 kubenswrapper[4790]: I1011 10:57:51.934548 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-2"
Oct 11 10:57:51.936820 master-0 kubenswrapper[4790]: I1011 10:57:51.936765 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-2"
Oct 11 10:57:52.097296 master-1 kubenswrapper[4771]: I1011 10:57:52.070000 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:57:52.097296 master-1 kubenswrapper[4771]: I1011 10:57:52.070838 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log" containerID="cri-o://06a1bb6aa8f352b55c65006ce50a6d509e8dff8e484bf4f522e69ff0c42ae932" gracePeriod=30
Oct 11 10:57:52.097296 master-1 kubenswrapper[4771]: I1011 10:57:52.071340 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata" containerID="cri-o://80f0dd7f7bf3c1c4412d1e7a39722e2cd092b7c0f670f349af91c500d917aa10" gracePeriod=30
Oct 11 10:57:52.356931 master-1 kubenswrapper[4771]: I1011 10:57:52.356794 4771 generic.go:334] "Generic (PLEG): container finished" podID="486db0a3-f081-43d5-b20d-d7386531632e" containerID="06a1bb6aa8f352b55c65006ce50a6d509e8dff8e484bf4f522e69ff0c42ae932" exitCode=143
Oct 11 10:57:52.356931 master-1 kubenswrapper[4771]: I1011 10:57:52.356858 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerDied","Data":"06a1bb6aa8f352b55c65006ce50a6d509e8dff8e484bf4f522e69ff0c42ae932"}
Oct 11 10:57:52.590939 master-1 kubenswrapper[4771]: I1011 10:57:52.590846 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 11 10:57:53.533736 master-0 kubenswrapper[4790]: I1011 10:57:53.529889 4790 generic.go:334] "Generic (PLEG): container finished" podID="44bcb391-53f2-438c-b46e-1f3208011f01" containerID="9c19ce238ad447003ee3f2fc9c2488f5e38167272b8ba8dbab542ad52ae186e6" exitCode=0
Oct 11 10:57:53.533736 master-0 kubenswrapper[4790]: I1011 10:57:53.530335 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerDied","Data":"9c19ce238ad447003ee3f2fc9c2488f5e38167272b8ba8dbab542ad52ae186e6"}
Oct 11 10:57:53.654191 master-0 kubenswrapper[4790]: I1011 10:57:53.654131 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Oct 11 10:57:53.811679 master-0 kubenswrapper[4790]: I1011 10:57:53.811607 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812024 master-0 kubenswrapper[4790]: I1011 10:57:53.811913 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812024 master-0 kubenswrapper[4790]: I1011 10:57:53.811939 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812024 master-0 kubenswrapper[4790]: I1011 10:57:53.811987 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812130 master-0 kubenswrapper[4790]: I1011 10:57:53.812065 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8qlt\" (UniqueName: \"kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812130 master-0 kubenswrapper[4790]: I1011 10:57:53.812097 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs\") pod \"44bcb391-53f2-438c-b46e-1f3208011f01\" (UID: \"44bcb391-53f2-438c-b46e-1f3208011f01\") "
Oct 11 10:57:53.812720 master-0 kubenswrapper[4790]: I1011 10:57:53.812636 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs" (OuterVolumeSpecName: "logs") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:57:53.815904 master-0 kubenswrapper[4790]: I1011 10:57:53.815826 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt" (OuterVolumeSpecName: "kube-api-access-h8qlt") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "kube-api-access-h8qlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:57:53.834793 master-0 kubenswrapper[4790]: I1011 10:57:53.834673 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data" (OuterVolumeSpecName: "config-data") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:53.837092 master-0 kubenswrapper[4790]: I1011 10:57:53.837025 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:53.851199 master-0 kubenswrapper[4790]: I1011 10:57:53.851113 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:53.855330 master-0 kubenswrapper[4790]: I1011 10:57:53.855247 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "44bcb391-53f2-438c-b46e-1f3208011f01" (UID: "44bcb391-53f2-438c-b46e-1f3208011f01"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914455 4790 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcb391-53f2-438c-b46e-1f3208011f01-logs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914513 4790 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914528 4790 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-config-data\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914538 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8qlt\" (UniqueName: \"kubernetes.io/projected/44bcb391-53f2-438c-b46e-1f3208011f01-kube-api-access-h8qlt\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914548 4790 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:53.914593 master-0 kubenswrapper[4790]: I1011 10:57:53.914559 4790 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44bcb391-53f2-438c-b46e-1f3208011f01-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Oct 11 10:57:54.059826 master-1 kubenswrapper[4771]: I1011 10:57:54.059716 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-1"
Oct 11 10:57:54.549464 master-0 kubenswrapper[4790]: I1011 10:57:54.549373 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"44bcb391-53f2-438c-b46e-1f3208011f01","Type":"ContainerDied","Data":"94a5d59118af400f9a8aa989b9def37c55ec9402d3102f10a1ba404bedd55ff9"}
Oct 11 10:57:54.549464 master-0 kubenswrapper[4790]: I1011 10:57:54.549444 4790 scope.go:117] "RemoveContainer" containerID="9c19ce238ad447003ee3f2fc9c2488f5e38167272b8ba8dbab542ad52ae186e6"
Oct 11 10:57:54.550526 master-0 kubenswrapper[4790]: I1011 10:57:54.549638 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Oct 11 10:57:54.573923 master-0 kubenswrapper[4790]: I1011 10:57:54.573860 4790 scope.go:117] "RemoveContainer" containerID="7c441b122cde11d0c0dd59e93d58166fba491455c97757aa05c250678bb0d192"
Oct 11 10:57:54.587244 master-0 kubenswrapper[4790]: I1011 10:57:54.587056 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:54.609731 master-0 kubenswrapper[4790]: I1011 10:57:54.609631 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:54.624186 master-0 kubenswrapper[4790]: I1011 10:57:54.624096 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:54.624630 master-0 kubenswrapper[4790]: E1011 10:57:54.624591 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-api"
Oct 11 10:57:54.624630 master-0 kubenswrapper[4790]: I1011 10:57:54.624616 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-api"
Oct 11 10:57:54.624805 master-0 kubenswrapper[4790]: E1011 10:57:54.624642 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-log"
Oct 11 10:57:54.624805 master-0 kubenswrapper[4790]: I1011 10:57:54.624654 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-log"
Oct 11 10:57:54.624934 master-0 kubenswrapper[4790]: I1011 10:57:54.624875 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-api"
Oct 11 10:57:54.624934 master-0 kubenswrapper[4790]: I1011 10:57:54.624908 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" containerName="nova-api-log"
Oct 11 10:57:54.626055 master-0 kubenswrapper[4790]: I1011 10:57:54.626023 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Oct 11 10:57:54.629786 master-0 kubenswrapper[4790]: I1011 10:57:54.629735 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 11 10:57:54.630022 master-0 kubenswrapper[4790]: I1011 10:57:54.629972 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 11 10:57:54.630739 master-0 kubenswrapper[4790]: I1011 10:57:54.630656 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 11 10:57:54.642433 master-0 kubenswrapper[4790]: I1011 10:57:54.642361 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:54.731952 master-0 kubenswrapper[4790]: I1011 10:57:54.731880 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzw6\" (UniqueName: \"kubernetes.io/projected/acf716b4-c93a-4303-ab38-507bbc33a8c6-kube-api-access-4qzw6\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.732195 master-0 kubenswrapper[4790]: I1011 10:57:54.731991 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-internal-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.732195 master-0 kubenswrapper[4790]: I1011 10:57:54.732052 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-config-data\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.732195 master-0 kubenswrapper[4790]: I1011 10:57:54.732078 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-public-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.732195 master-0 kubenswrapper[4790]: I1011 10:57:54.732174 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf716b4-c93a-4303-ab38-507bbc33a8c6-logs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.732365 master-0 kubenswrapper[4790]: I1011 10:57:54.732208 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.833946 master-0 kubenswrapper[4790]: I1011 10:57:54.833885 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf716b4-c93a-4303-ab38-507bbc33a8c6-logs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.833946 master-0 kubenswrapper[4790]: I1011 10:57:54.833961 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.834374 master-0 kubenswrapper[4790]: I1011 10:57:54.834036 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzw6\" (UniqueName: \"kubernetes.io/projected/acf716b4-c93a-4303-ab38-507bbc33a8c6-kube-api-access-4qzw6\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.834374 master-0 kubenswrapper[4790]: I1011 10:57:54.834064 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-internal-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.834374 master-0 kubenswrapper[4790]: I1011 10:57:54.834102 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-config-data\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.834374 master-0 kubenswrapper[4790]: I1011 10:57:54.834123 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-public-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.834583 master-0 kubenswrapper[4790]: I1011 10:57:54.834524 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf716b4-c93a-4303-ab38-507bbc33a8c6-logs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.838305 master-0 kubenswrapper[4790]: I1011 10:57:54.838229 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-public-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.839783 master-0 kubenswrapper[4790]: I1011 10:57:54.839359 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-config-data\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.840313 master-0 kubenswrapper[4790]: I1011 10:57:54.840265 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-internal-tls-certs\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.842331 master-0 kubenswrapper[4790]: I1011 10:57:54.842267 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf716b4-c93a-4303-ab38-507bbc33a8c6-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.857583 master-0 kubenswrapper[4790]: I1011 10:57:54.857517 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzw6\" (UniqueName: \"kubernetes.io/projected/acf716b4-c93a-4303-ab38-507bbc33a8c6-kube-api-access-4qzw6\") pod \"nova-api-1\" (UID: \"acf716b4-c93a-4303-ab38-507bbc33a8c6\") " pod="openstack/nova-api-1"
Oct 11 10:57:54.950902 master-0 kubenswrapper[4790]: I1011 10:57:54.950814 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Oct 11 10:57:55.218526 master-1 kubenswrapper[4771]: I1011 10:57:55.218306 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.129.0.168:8775/\": read tcp 10.129.0.2:47572->10.129.0.168:8775: read: connection reset by peer"
Oct 11 10:57:55.220062 master-1 kubenswrapper[4771]: I1011 10:57:55.218481 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-1" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.129.0.168:8775/\": read tcp 10.129.0.2:47570->10.129.0.168:8775: read: connection reset by peer"
Oct 11 10:57:55.381203 master-0 kubenswrapper[4790]: I1011 10:57:55.381138 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"]
Oct 11 10:57:55.386077 master-0 kubenswrapper[4790]: W1011 10:57:55.386013 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacf716b4_c93a_4303_ab38_507bbc33a8c6.slice/crio-f8c70f81f08b25cf8f98d4be418be2dd429ec74969110b9c3807243c1c9c37b2 WatchSource:0}: Error finding container f8c70f81f08b25cf8f98d4be418be2dd429ec74969110b9c3807243c1c9c37b2: Status 404 returned error can't find the container with id f8c70f81f08b25cf8f98d4be418be2dd429ec74969110b9c3807243c1c9c37b2
Oct 11 10:57:55.414825 master-1 kubenswrapper[4771]: I1011 10:57:55.414768 4771 generic.go:334] "Generic (PLEG): container finished" podID="486db0a3-f081-43d5-b20d-d7386531632e" containerID="80f0dd7f7bf3c1c4412d1e7a39722e2cd092b7c0f670f349af91c500d917aa10" exitCode=0
Oct 11 10:57:55.414943 master-1 kubenswrapper[4771]: I1011 10:57:55.414834 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerDied","Data":"80f0dd7f7bf3c1c4412d1e7a39722e2cd092b7c0f670f349af91c500d917aa10"}
Oct 11 10:57:55.569843 master-0 kubenswrapper[4790]: I1011 10:57:55.568164 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"acf716b4-c93a-4303-ab38-507bbc33a8c6","Type":"ContainerStarted","Data":"0403695d910ac7dcbe6ecb8d65fbc7fad8faaaabae301a75a3da18e2abe2b981"}
Oct 11 10:57:55.569843 master-0 kubenswrapper[4790]: I1011 10:57:55.568268 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"acf716b4-c93a-4303-ab38-507bbc33a8c6","Type":"ContainerStarted","Data":"f8c70f81f08b25cf8f98d4be418be2dd429ec74969110b9c3807243c1c9c37b2"}
Oct 11 10:57:55.750656 master-1 kubenswrapper[4771]: I1011 10:57:55.750586 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1"
Oct 11 10:57:55.799248 master-1 kubenswrapper[4771]: I1011 10:57:55.799161 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data\") pod \"486db0a3-f081-43d5-b20d-d7386531632e\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") "
Oct 11 10:57:55.799662 master-1 kubenswrapper[4771]: I1011 10:57:55.799411 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle\") pod \"486db0a3-f081-43d5-b20d-d7386531632e\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") "
Oct 11 10:57:55.800086 master-1 kubenswrapper[4771]: I1011 10:57:55.800049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbq4h\" (UniqueName: \"kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h\") pod \"486db0a3-f081-43d5-b20d-d7386531632e\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") "
Oct 11 10:57:55.800190 master-1 kubenswrapper[4771]: I1011 10:57:55.800113 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs\") pod \"486db0a3-f081-43d5-b20d-d7386531632e\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") "
Oct 11 10:57:55.800190 master-1 kubenswrapper[4771]: I1011 10:57:55.800173 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs\") pod \"486db0a3-f081-43d5-b20d-d7386531632e\" (UID: \"486db0a3-f081-43d5-b20d-d7386531632e\") "
Oct 11 10:57:55.801804 master-1 kubenswrapper[4771]: I1011 10:57:55.801708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs" (OuterVolumeSpecName: "logs") pod "486db0a3-f081-43d5-b20d-d7386531632e" (UID: "486db0a3-f081-43d5-b20d-d7386531632e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:57:55.832605 master-1 kubenswrapper[4771]: I1011 10:57:55.832489 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h" (OuterVolumeSpecName: "kube-api-access-lbq4h") pod "486db0a3-f081-43d5-b20d-d7386531632e" (UID: "486db0a3-f081-43d5-b20d-d7386531632e"). InnerVolumeSpecName "kube-api-access-lbq4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:57:55.838788 master-1 kubenswrapper[4771]: I1011 10:57:55.838701 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "486db0a3-f081-43d5-b20d-d7386531632e" (UID: "486db0a3-f081-43d5-b20d-d7386531632e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:55.848976 master-1 kubenswrapper[4771]: I1011 10:57:55.848619 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data" (OuterVolumeSpecName: "config-data") pod "486db0a3-f081-43d5-b20d-d7386531632e" (UID: "486db0a3-f081-43d5-b20d-d7386531632e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:55.869578 master-1 kubenswrapper[4771]: I1011 10:57:55.869265 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "486db0a3-f081-43d5-b20d-d7386531632e" (UID: "486db0a3-f081-43d5-b20d-d7386531632e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:57:55.903922 master-1 kubenswrapper[4771]: I1011 10:57:55.903816 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:55.903922 master-1 kubenswrapper[4771]: I1011 10:57:55.903918 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:55.904213 master-1 kubenswrapper[4771]: I1011 10:57:55.903947 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbq4h\" (UniqueName: \"kubernetes.io/projected/486db0a3-f081-43d5-b20d-d7386531632e-kube-api-access-lbq4h\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:55.904213 master-1 kubenswrapper[4771]: I1011 10:57:55.903977 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/486db0a3-f081-43d5-b20d-d7386531632e-nova-metadata-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:55.904213 master-1 kubenswrapper[4771]: I1011 10:57:55.904002 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486db0a3-f081-43d5-b20d-d7386531632e-logs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:57:56.304372 master-0 kubenswrapper[4790]: I1011 10:57:56.304307 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bcb391-53f2-438c-b46e-1f3208011f01" path="/var/lib/kubelet/pods/44bcb391-53f2-438c-b46e-1f3208011f01/volumes"
Oct 11 10:57:56.433280 master-1 kubenswrapper[4771]: I1011 10:57:56.433199 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"486db0a3-f081-43d5-b20d-d7386531632e","Type":"ContainerDied","Data":"5409d9254a9ac28cc3c943a6262472ba07dc81056b84eeb207f5cc4057ceaafd"}
Oct 11 10:57:56.434128 master-1 kubenswrapper[4771]: I1011 10:57:56.433299 4771 scope.go:117] "RemoveContainer" containerID="80f0dd7f7bf3c1c4412d1e7a39722e2cd092b7c0f670f349af91c500d917aa10"
Oct 11 10:57:56.434128 master-1 kubenswrapper[4771]: I1011 10:57:56.433328 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1"
Oct 11 10:57:56.462565 master-1 kubenswrapper[4771]: I1011 10:57:56.462513 4771 scope.go:117] "RemoveContainer" containerID="06a1bb6aa8f352b55c65006ce50a6d509e8dff8e484bf4f522e69ff0c42ae932"
Oct 11 10:57:56.490873 master-1 kubenswrapper[4771]: I1011 10:57:56.490725 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:57:56.503887 master-1 kubenswrapper[4771]: I1011 10:57:56.501610 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:57:56.524673 master-1 kubenswrapper[4771]: I1011 10:57:56.524585 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:57:56.525119 master-1 kubenswrapper[4771]: E1011 10:57:56.525076 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log"
Oct 11 10:57:56.525119 master-1 kubenswrapper[4771]: I1011 10:57:56.525100 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log"
Oct 11 10:57:56.525256 master-1 kubenswrapper[4771]: E1011 10:57:56.525131 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata"
Oct 11 10:57:56.525256 master-1 kubenswrapper[4771]: I1011 10:57:56.525140 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata"
Oct 11 10:57:56.525347 master-1 kubenswrapper[4771]: I1011 10:57:56.525303 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-metadata"
Oct 11 10:57:56.525347 master-1 kubenswrapper[4771]: I1011 10:57:56.525316 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="486db0a3-f081-43d5-b20d-d7386531632e" containerName="nova-metadata-log"
Oct 11 10:57:56.526907 master-1 kubenswrapper[4771]: I1011 10:57:56.526863 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1"
Oct 11 10:57:56.532408 master-1 kubenswrapper[4771]: I1011 10:57:56.532298 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 11 10:57:56.532802 master-1 kubenswrapper[4771]: I1011 10:57:56.532715 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 10:57:56.550451 master-1 kubenswrapper[4771]: I1011 10:57:56.541519 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"]
Oct 11 10:57:56.585179 master-0 kubenswrapper[4790]: I1011 10:57:56.584974 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"acf716b4-c93a-4303-ab38-507bbc33a8c6","Type":"ContainerStarted","Data":"4d2db9ffc182292640a480f615fb53f7bfbe826c4265583c5552f11c703f2805"}
Oct 11 10:57:56.619689 master-0 kubenswrapper[4790]: I1011 10:57:56.619566 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1" podStartSLOduration=2.619541555 podStartE2EDuration="2.619541555s" podCreationTimestamp="2025-10-11 10:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:56.61390477 +0000 UTC m=+1153.168365102" watchObservedRunningTime="2025-10-11 10:57:56.619541555 +0000 UTC m=+1153.174001847"
Oct 11 10:57:56.620683 master-1 kubenswrapper[4771]: I1011 10:57:56.620474 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-config-data\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1"
Oct 11 10:57:56.620683 master-1 kubenswrapper[4771]: I1011 10:57:56.620614 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1"
Oct 11 10:57:56.620683 master-1 kubenswrapper[4771]: I1011 10:57:56.620655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhfr\" (UniqueName: \"kubernetes.io/projected/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-kube-api-access-2hhfr\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1"
Oct 11 10:57:56.621109 master-1 kubenswrapper[4771]: I1011 10:57:56.620763 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-logs\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1"
Oct 11 10:57:56.621109 master-1 kubenswrapper[4771]: I1011 10:57:56.621028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-nova-metadata-tls-certs\") pod
\"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.726026 master-1 kubenswrapper[4771]: I1011 10:57:56.725928 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.726530 master-1 kubenswrapper[4771]: I1011 10:57:56.726054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhfr\" (UniqueName: \"kubernetes.io/projected/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-kube-api-access-2hhfr\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.726530 master-1 kubenswrapper[4771]: I1011 10:57:56.726241 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-logs\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.726530 master-1 kubenswrapper[4771]: I1011 10:57:56.726346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.726530 master-1 kubenswrapper[4771]: I1011 10:57:56.726456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-config-data\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 
10:57:56.726982 master-1 kubenswrapper[4771]: I1011 10:57:56.726921 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-logs\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.731700 master-1 kubenswrapper[4771]: I1011 10:57:56.731652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.732092 master-1 kubenswrapper[4771]: I1011 10:57:56.732036 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-config-data\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.732419 master-1 kubenswrapper[4771]: I1011 10:57:56.732391 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.759697 master-1 kubenswrapper[4771]: I1011 10:57:56.759633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhfr\" (UniqueName: \"kubernetes.io/projected/cc53b2cb-c9f9-46f7-b783-d0a94b4f8060-kube-api-access-2hhfr\") pod \"nova-metadata-1\" (UID: \"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060\") " pod="openstack/nova-metadata-1" Oct 11 10:57:56.862453 master-1 kubenswrapper[4771]: I1011 10:57:56.862330 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 11 10:57:57.424628 master-1 kubenswrapper[4771]: I1011 10:57:57.424557 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 11 10:57:57.445379 master-1 kubenswrapper[4771]: I1011 10:57:57.445281 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060","Type":"ContainerStarted","Data":"56f34d307a3a2b199477fac214ca75a8f159662bd8fe82533b7d7937f345fe83"} Oct 11 10:57:58.451761 master-1 kubenswrapper[4771]: I1011 10:57:58.451129 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486db0a3-f081-43d5-b20d-d7386531632e" path="/var/lib/kubelet/pods/486db0a3-f081-43d5-b20d-d7386531632e/volumes" Oct 11 10:57:58.461877 master-1 kubenswrapper[4771]: I1011 10:57:58.461798 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060","Type":"ContainerStarted","Data":"9a27343f439811da6981cbde217403c4702c9a8eab93426f295b8db491f89051"} Oct 11 10:57:58.461996 master-1 kubenswrapper[4771]: I1011 10:57:58.461884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"cc53b2cb-c9f9-46f7-b783-d0a94b4f8060","Type":"ContainerStarted","Data":"7a6d8d4017921ea2d1bb3bd110f46ada1d5f51112971f5f291521d318b2e9588"} Oct 11 10:57:58.500488 master-1 kubenswrapper[4771]: I1011 10:57:58.500380 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-1" podStartSLOduration=2.500324884 podStartE2EDuration="2.500324884s" podCreationTimestamp="2025-10-11 10:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:58.490686957 +0000 UTC m=+1910.464913498" watchObservedRunningTime="2025-10-11 10:57:58.500324884 +0000 UTC m=+1910.474551365" Oct 11 
10:57:59.059866 master-1 kubenswrapper[4771]: I1011 10:57:59.059771 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-1" Oct 11 10:57:59.108602 master-1 kubenswrapper[4771]: I1011 10:57:59.108496 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-1" Oct 11 10:57:59.521246 master-1 kubenswrapper[4771]: I1011 10:57:59.521042 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-1" Oct 11 10:57:59.575926 master-2 kubenswrapper[4776]: I1011 10:57:59.575840 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:59.576583 master-2 kubenswrapper[4776]: I1011 10:57:59.576127 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" containerID="cri-o://a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" gracePeriod=30 Oct 11 10:58:01.598522 master-2 kubenswrapper[4776]: I1011 10:58:01.598457 4776 generic.go:334] "Generic (PLEG): container finished" podID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerID="a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" exitCode=0 Oct 11 10:58:01.598522 master-2 kubenswrapper[4776]: I1011 10:58:01.598510 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerDied","Data":"a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c"} Oct 11 10:58:01.746210 master-2 kubenswrapper[4776]: I1011 10:58:01.746172 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:01.862710 master-1 kubenswrapper[4771]: I1011 10:58:01.862641 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:58:01.862710 master-1 kubenswrapper[4771]: I1011 10:58:01.862711 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 11 10:58:01.901950 master-2 kubenswrapper[4776]: I1011 10:58:01.901790 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.901950 master-2 kubenswrapper[4776]: I1011 10:58:01.901898 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.902296 master-2 kubenswrapper[4776]: I1011 10:58:01.902064 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.904983 master-2 kubenswrapper[4776]: I1011 10:58:01.904920 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l" (OuterVolumeSpecName: "kube-api-access-mwj7l") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "kube-api-access-mwj7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:58:01.951498 master-2 kubenswrapper[4776]: I1011 10:58:01.951439 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:01.952449 master-2 kubenswrapper[4776]: I1011 10:58:01.952407 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data" (OuterVolumeSpecName: "config-data") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004224 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004260 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004271 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.611005 master-2 kubenswrapper[4776]: I1011 10:58:02.610951 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerDied","Data":"bb574ec91c838b585beb61a822a5e96518597aaa45bd1ce4b4d86481bf2e5fb7"} Oct 11 10:58:02.611538 master-2 kubenswrapper[4776]: I1011 10:58:02.611017 4776 scope.go:117] "RemoveContainer" containerID="a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" Oct 11 10:58:02.611538 master-2 kubenswrapper[4776]: I1011 10:58:02.611066 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:02.645649 master-2 kubenswrapper[4776]: I1011 10:58:02.645196 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.655478 master-2 kubenswrapper[4776]: I1011 10:58:02.655418 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.681918 master-2 kubenswrapper[4776]: I1011 10:58:02.681851 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.682308 master-2 kubenswrapper[4776]: E1011 10:58:02.682263 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.682308 master-2 kubenswrapper[4776]: I1011 10:58:02.682279 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.683648 master-2 kubenswrapper[4776]: I1011 10:58:02.682501 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.683648 master-2 kubenswrapper[4776]: I1011 10:58:02.683268 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:02.686711 master-2 kubenswrapper[4776]: I1011 10:58:02.686500 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:58:02.716770 master-2 kubenswrapper[4776]: I1011 10:58:02.707085 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.819643 master-2 kubenswrapper[4776]: I1011 10:58:02.819518 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.819915 master-2 kubenswrapper[4776]: I1011 10:58:02.819665 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.819969 master-2 kubenswrapper[4776]: I1011 10:58:02.819908 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.922082 master-2 kubenswrapper[4776]: I1011 10:58:02.922014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 
10:58:02.922327 master-2 kubenswrapper[4776]: I1011 10:58:02.922093 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.922327 master-2 kubenswrapper[4776]: I1011 10:58:02.922200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.925531 master-2 kubenswrapper[4776]: I1011 10:58:02.925457 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.926742 master-2 kubenswrapper[4776]: I1011 10:58:02.926014 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.941639 master-2 kubenswrapper[4776]: I1011 10:58:02.941594 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:03.001276 master-2 kubenswrapper[4776]: I1011 10:58:03.001181 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:03.411862 master-2 kubenswrapper[4776]: I1011 10:58:03.411826 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:03.413882 master-2 kubenswrapper[4776]: W1011 10:58:03.413842 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb694ecc8_10eb_40a7_8c2c_a622bd70f775.slice/crio-ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45 WatchSource:0}: Error finding container ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45: Status 404 returned error can't find the container with id ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45 Oct 11 10:58:03.624835 master-2 kubenswrapper[4776]: I1011 10:58:03.623981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b694ecc8-10eb-40a7-8c2c-a622bd70f775","Type":"ContainerStarted","Data":"86091f5b4e67c173f5c44ec620cd7651b50051b6424651f2d1ee6cc1ed5a40f6"} Oct 11 10:58:03.624835 master-2 kubenswrapper[4776]: I1011 10:58:03.624046 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b694ecc8-10eb-40a7-8c2c-a622bd70f775","Type":"ContainerStarted","Data":"ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45"} Oct 11 10:58:03.650125 master-2 kubenswrapper[4776]: I1011 10:58:03.650044 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.650020517 podStartE2EDuration="1.650020517s" podCreationTimestamp="2025-10-11 10:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:58:03.646216884 +0000 UTC m=+1918.430643593" watchObservedRunningTime="2025-10-11 10:58:03.650020517 +0000 UTC m=+1918.434447226" Oct 11 10:58:04.070012 
master-2 kubenswrapper[4776]: I1011 10:58:04.069942 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" path="/var/lib/kubelet/pods/69f600e5-9321-41ce-9ec4-abee215f69fe/volumes" Oct 11 10:58:04.951328 master-0 kubenswrapper[4790]: I1011 10:58:04.951244 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:58:04.951328 master-0 kubenswrapper[4790]: I1011 10:58:04.951312 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Oct 11 10:58:05.967098 master-0 kubenswrapper[4790]: I1011 10:58:05.966941 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="acf716b4-c93a-4303-ab38-507bbc33a8c6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.130.0.121:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:05.967098 master-0 kubenswrapper[4790]: I1011 10:58:05.967098 4790 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="acf716b4-c93a-4303-ab38-507bbc33a8c6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.130.0.121:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:06.863094 master-1 kubenswrapper[4771]: I1011 10:58:06.863009 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:58:06.863094 master-1 kubenswrapper[4771]: I1011 10:58:06.863091 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 11 10:58:07.877717 master-1 kubenswrapper[4771]: I1011 10:58:07.877618 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="cc53b2cb-c9f9-46f7-b783-d0a94b4f8060" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.129.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:07.878545 master-1 kubenswrapper[4771]: I1011 10:58:07.877624 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="cc53b2cb-c9f9-46f7-b783-d0a94b4f8060" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.129.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:08.001778 master-2 kubenswrapper[4776]: I1011 10:58:08.001692 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.001926 master-2 kubenswrapper[4776]: I1011 10:58:13.001831 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.032121 master-2 kubenswrapper[4776]: I1011 10:58:13.032039 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.772914 master-2 kubenswrapper[4776]: I1011 10:58:13.772827 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:58:14.958910 master-0 kubenswrapper[4790]: I1011 10:58:14.958842 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:58:14.959677 master-0 kubenswrapper[4790]: I1011 10:58:14.959410 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:58:14.961228 master-0 kubenswrapper[4790]: I1011 10:58:14.961155 4790 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Oct 11 10:58:14.966964 master-0 kubenswrapper[4790]: I1011 10:58:14.966913 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:58:15.792586 master-0 kubenswrapper[4790]: I1011 10:58:15.792515 4790 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Oct 11 10:58:15.799665 master-0 kubenswrapper[4790]: I1011 10:58:15.799595 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Oct 11 10:58:15.899072 master-1 kubenswrapper[4771]: I1011 10:58:15.898989 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 11 10:58:15.900295 master-1 kubenswrapper[4771]: I1011 10:58:15.899288 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-log" containerID="cri-o://88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6" gracePeriod=30 Oct 11 10:58:15.900295 master-1 kubenswrapper[4771]: I1011 10:58:15.899890 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-api" containerID="cri-o://684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557" gracePeriod=30 Oct 11 10:58:16.645084 master-1 kubenswrapper[4771]: I1011 10:58:16.644976 4771 generic.go:334] "Generic (PLEG): container finished" podID="c648855c-73f8-4316-9eca-147821b776c2" containerID="88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6" exitCode=143 Oct 11 10:58:16.645084 master-1 kubenswrapper[4771]: I1011 10:58:16.645049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerDied","Data":"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"} Oct 11 10:58:16.870624 master-1 kubenswrapper[4771]: I1011 10:58:16.870172 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1" Oct 11 10:58:16.871032 master-1 kubenswrapper[4771]: I1011 10:58:16.870968 4771 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-metadata-1"
Oct 11 10:58:16.881050 master-1 kubenswrapper[4771]: I1011 10:58:16.881001 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 11 10:58:17.664070 master-1 kubenswrapper[4771]: I1011 10:58:17.663944 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 11 10:58:17.735081 master-2 kubenswrapper[4776]: I1011 10:58:17.734990 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:17.736455 master-2 kubenswrapper[4776]: I1011 10:58:17.735394 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" containerID="cri-o://a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" gracePeriod=30
Oct 11 10:58:17.736455 master-2 kubenswrapper[4776]: I1011 10:58:17.735565 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" containerID="cri-o://a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" gracePeriod=30
Oct 11 10:58:18.779546 master-2 kubenswrapper[4776]: I1011 10:58:18.779479 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" exitCode=143
Oct 11 10:58:18.779546 master-2 kubenswrapper[4776]: I1011 10:58:18.779530 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"}
Oct 11 10:58:19.633740 master-1 kubenswrapper[4771]: I1011 10:58:19.633674 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 10:58:19.688828 master-1 kubenswrapper[4771]: I1011 10:58:19.688704 4771 generic.go:334] "Generic (PLEG): container finished" podID="c648855c-73f8-4316-9eca-147821b776c2" containerID="684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557" exitCode=0
Oct 11 10:58:19.688828 master-1 kubenswrapper[4771]: I1011 10:58:19.688770 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 10:58:19.689171 master-1 kubenswrapper[4771]: I1011 10:58:19.688807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerDied","Data":"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"}
Oct 11 10:58:19.689171 master-1 kubenswrapper[4771]: I1011 10:58:19.688920 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c648855c-73f8-4316-9eca-147821b776c2","Type":"ContainerDied","Data":"901bf1754d9ccab6be8ecc339468f500f7f6d434ca5a6c15ca5caad9d817a352"}
Oct 11 10:58:19.689171 master-1 kubenswrapper[4771]: I1011 10:58:19.688948 4771 scope.go:117] "RemoveContainer" containerID="684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"
Oct 11 10:58:19.715729 master-1 kubenswrapper[4771]: I1011 10:58:19.715555 4771 scope.go:117] "RemoveContainer" containerID="88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"
Oct 11 10:58:19.735306 master-1 kubenswrapper[4771]: I1011 10:58:19.735246 4771 scope.go:117] "RemoveContainer" containerID="684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"
Oct 11 10:58:19.735808 master-1 kubenswrapper[4771]: E1011 10:58:19.735773 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557\": container with ID starting with 684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557 not found: ID does not exist" containerID="684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"
Oct 11 10:58:19.735931 master-1 kubenswrapper[4771]: I1011 10:58:19.735905 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557"} err="failed to get container status \"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557\": rpc error: code = NotFound desc = could not find container \"684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557\": container with ID starting with 684566c2bec8831ad596ea2c67283e963a33a72653ccd7d10aa766e1b9786557 not found: ID does not exist"
Oct 11 10:58:19.736018 master-1 kubenswrapper[4771]: I1011 10:58:19.736006 4771 scope.go:117] "RemoveContainer" containerID="88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"
Oct 11 10:58:19.736488 master-1 kubenswrapper[4771]: E1011 10:58:19.736340 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6\": container with ID starting with 88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6 not found: ID does not exist" containerID="88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"
Oct 11 10:58:19.736624 master-1 kubenswrapper[4771]: I1011 10:58:19.736606 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6"} err="failed to get container status \"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6\": rpc error: code = NotFound desc = could not find container \"88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6\": container with ID starting with 88924f324b5ca6c196af8b9bc4aa24510c91c3566f0f572201b83024bd6210b6 not found: ID does not exist"
Oct 11 10:58:19.781532 master-1 kubenswrapper[4771]: I1011 10:58:19.781419 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data\") pod \"c648855c-73f8-4316-9eca-147821b776c2\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") "
Oct 11 10:58:19.781899 master-1 kubenswrapper[4771]: I1011 10:58:19.781565 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle\") pod \"c648855c-73f8-4316-9eca-147821b776c2\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") "
Oct 11 10:58:19.781899 master-1 kubenswrapper[4771]: I1011 10:58:19.781649 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs\") pod \"c648855c-73f8-4316-9eca-147821b776c2\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") "
Oct 11 10:58:19.781899 master-1 kubenswrapper[4771]: I1011 10:58:19.781723 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkcdx\" (UniqueName: \"kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx\") pod \"c648855c-73f8-4316-9eca-147821b776c2\" (UID: \"c648855c-73f8-4316-9eca-147821b776c2\") "
Oct 11 10:58:19.783136 master-1 kubenswrapper[4771]: I1011 10:58:19.783055 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs" (OuterVolumeSpecName: "logs") pod "c648855c-73f8-4316-9eca-147821b776c2" (UID: "c648855c-73f8-4316-9eca-147821b776c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:58:19.786554 master-1 kubenswrapper[4771]: I1011 10:58:19.786499 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx" (OuterVolumeSpecName: "kube-api-access-rkcdx") pod "c648855c-73f8-4316-9eca-147821b776c2" (UID: "c648855c-73f8-4316-9eca-147821b776c2"). InnerVolumeSpecName "kube-api-access-rkcdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:58:19.810760 master-1 kubenswrapper[4771]: I1011 10:58:19.810664 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data" (OuterVolumeSpecName: "config-data") pod "c648855c-73f8-4316-9eca-147821b776c2" (UID: "c648855c-73f8-4316-9eca-147821b776c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:58:19.819326 master-1 kubenswrapper[4771]: I1011 10:58:19.819266 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c648855c-73f8-4316-9eca-147821b776c2" (UID: "c648855c-73f8-4316-9eca-147821b776c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:58:19.885435 master-1 kubenswrapper[4771]: I1011 10:58:19.885324 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 10:58:19.885435 master-1 kubenswrapper[4771]: I1011 10:58:19.885429 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c648855c-73f8-4316-9eca-147821b776c2-logs\") on node \"master-1\" DevicePath \"\""
Oct 11 10:58:19.885790 master-1 kubenswrapper[4771]: I1011 10:58:19.885461 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkcdx\" (UniqueName: \"kubernetes.io/projected/c648855c-73f8-4316-9eca-147821b776c2-kube-api-access-rkcdx\") on node \"master-1\" DevicePath \"\""
Oct 11 10:58:19.885790 master-1 kubenswrapper[4771]: I1011 10:58:19.885490 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c648855c-73f8-4316-9eca-147821b776c2-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 10:58:20.060953 master-1 kubenswrapper[4771]: I1011 10:58:20.060767 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 11 10:58:20.073009 master-1 kubenswrapper[4771]: I1011 10:58:20.072930 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 11 10:58:20.093749 master-1 kubenswrapper[4771]: I1011 10:58:20.093653 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 11 10:58:20.094103 master-1 kubenswrapper[4771]: E1011 10:58:20.094005 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-log"
Oct 11 10:58:20.094103 master-1 kubenswrapper[4771]: I1011 10:58:20.094021 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-log"
Oct 11 10:58:20.094103 master-1 kubenswrapper[4771]: E1011 10:58:20.094029 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-api"
Oct 11 10:58:20.094103 master-1 kubenswrapper[4771]: I1011 10:58:20.094034 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-api"
Oct 11 10:58:20.095168 master-1 kubenswrapper[4771]: I1011 10:58:20.094219 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-log"
Oct 11 10:58:20.095168 master-1 kubenswrapper[4771]: I1011 10:58:20.094234 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c648855c-73f8-4316-9eca-147821b776c2" containerName="nova-api-api"
Oct 11 10:58:20.095318 master-1 kubenswrapper[4771]: I1011 10:58:20.095238 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 10:58:20.098771 master-1 kubenswrapper[4771]: I1011 10:58:20.098704 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 11 10:58:20.099177 master-1 kubenswrapper[4771]: I1011 10:58:20.099109 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 11 10:58:20.099271 master-1 kubenswrapper[4771]: I1011 10:58:20.099203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 11 10:58:20.118866 master-1 kubenswrapper[4771]: I1011 10:58:20.118800 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 10:58:20.296856 master-1 kubenswrapper[4771]: I1011 10:58:20.296775 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.297405 master-1 kubenswrapper[4771]: I1011 10:58:20.297369 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-config-data\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.298060 master-1 kubenswrapper[4771]: I1011 10:58:20.298001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.298400 master-1 kubenswrapper[4771]: I1011 10:58:20.298336 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69zl\" (UniqueName: \"kubernetes.io/projected/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-kube-api-access-n69zl\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.298744 master-1 kubenswrapper[4771]: I1011 10:58:20.298698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-logs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.298992 master-1 kubenswrapper[4771]: I1011 10:58:20.298962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402141 master-1 kubenswrapper[4771]: I1011 10:58:20.401999 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402422 master-1 kubenswrapper[4771]: I1011 10:58:20.402156 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-config-data\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402422 master-1 kubenswrapper[4771]: I1011 10:58:20.402283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402422 master-1 kubenswrapper[4771]: I1011 10:58:20.402394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69zl\" (UniqueName: \"kubernetes.io/projected/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-kube-api-access-n69zl\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402649 master-1 kubenswrapper[4771]: I1011 10:58:20.402460 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-logs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.402649 master-1 kubenswrapper[4771]: I1011 10:58:20.402492 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.403795 master-1 kubenswrapper[4771]: I1011 10:58:20.403715 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-logs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.408520 master-1 kubenswrapper[4771]: I1011 10:58:20.408449 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.409074 master-1 kubenswrapper[4771]: I1011 10:58:20.409016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.409667 master-1 kubenswrapper[4771]: I1011 10:58:20.409549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-config-data\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.410249 master-1 kubenswrapper[4771]: I1011 10:58:20.410212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.440583 master-1 kubenswrapper[4771]: I1011 10:58:20.440503 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69zl\" (UniqueName: \"kubernetes.io/projected/7dfb8a04-f489-4ab7-b3fa-9477b10e2de0-kube-api-access-n69zl\") pod \"nova-api-0\" (UID: \"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0\") " pod="openstack/nova-api-0"
Oct 11 10:58:20.457407 master-1 kubenswrapper[4771]: I1011 10:58:20.457291 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c648855c-73f8-4316-9eca-147821b776c2" path="/var/lib/kubelet/pods/c648855c-73f8-4316-9eca-147821b776c2/volumes"
Oct 11 10:58:20.723445 master-1 kubenswrapper[4771]: I1011 10:58:20.723228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 10:58:20.881911 master-2 kubenswrapper[4776]: I1011 10:58:20.881838 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": read tcp 10.128.0.2:38124->10.128.0.172:8775: read: connection reset by peer"
Oct 11 10:58:20.882845 master-2 kubenswrapper[4776]: I1011 10:58:20.881865 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": read tcp 10.128.0.2:38140->10.128.0.172:8775: read: connection reset by peer"
Oct 11 10:58:21.254983 master-1 kubenswrapper[4771]: I1011 10:58:21.254935 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 10:58:21.676965 master-2 kubenswrapper[4776]: I1011 10:58:21.676927 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:58:21.720433 master-1 kubenswrapper[4771]: I1011 10:58:21.720331 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0","Type":"ContainerStarted","Data":"fecef3be564b816399753f7d8d59265dc3887a1eeeb7163392bbdcd3a103b87a"}
Oct 11 10:58:21.720433 master-1 kubenswrapper[4771]: I1011 10:58:21.720407 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0","Type":"ContainerStarted","Data":"16c28995ac3dcd1be970af48dc8e48b9c7c87a09a6fa757cf56a58da4eb7ad92"}
Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806284 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" exitCode=0
Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806329 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"}
Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806336 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:58:21.806663 master-2 kubenswrapper[4776]: I1011 10:58:21.806365 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"9b37b95095e4c99e4c588f15858480444284fe16e29197075e74b845d5fdd23b"}
Oct 11 10:58:21.806663 master-2 kubenswrapper[4776]: I1011 10:58:21.806387 4776 scope.go:117] "RemoveContainer" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"
Oct 11 10:58:21.810640 master-2 kubenswrapper[4776]: I1011 10:58:21.810619 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") "
Oct 11 10:58:21.810767 master-2 kubenswrapper[4776]: I1011 10:58:21.810747 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") "
Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.810877 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") "
Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.810918 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") "
Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.811005 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs" (OuterVolumeSpecName: "logs") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 10:58:21.815588 master-2 kubenswrapper[4776]: I1011 10:58:21.815547 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct" (OuterVolumeSpecName: "kube-api-access-75nct") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "kube-api-access-75nct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 10:58:21.816022 master-2 kubenswrapper[4776]: I1011 10:58:21.816003 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") on node \"master-2\" DevicePath \"\""
Oct 11 10:58:21.816069 master-2 kubenswrapper[4776]: I1011 10:58:21.816027 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") on node \"master-2\" DevicePath \"\""
Oct 11 10:58:21.834393 master-2 kubenswrapper[4776]: I1011 10:58:21.834345 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:58:21.850014 master-2 kubenswrapper[4776]: I1011 10:58:21.849950 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data" (OuterVolumeSpecName: "config-data") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 10:58:21.906392 master-2 kubenswrapper[4776]: I1011 10:58:21.906352 4776 scope.go:117] "RemoveContainer" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"
Oct 11 10:58:21.918175 master-2 kubenswrapper[4776]: I1011 10:58:21.918135 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") on node \"master-2\" DevicePath \"\""
Oct 11 10:58:21.918175 master-2 kubenswrapper[4776]: I1011 10:58:21.918163 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") on node \"master-2\" DevicePath \"\""
Oct 11 10:58:21.926399 master-2 kubenswrapper[4776]: I1011 10:58:21.926114 4776 scope.go:117] "RemoveContainer" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"
Oct 11 10:58:21.926506 master-2 kubenswrapper[4776]: E1011 10:58:21.926407 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": container with ID starting with a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77 not found: ID does not exist" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"
Oct 11 10:58:21.926506 master-2 kubenswrapper[4776]: I1011 10:58:21.926430 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"} err="failed to get container status \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": rpc error: code = NotFound desc = could not find container \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": container with ID starting with a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77 not found: ID does not exist"
Oct 11 10:58:21.926506 master-2 kubenswrapper[4776]: I1011 10:58:21.926450 4776 scope.go:117] "RemoveContainer" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"
Oct 11 10:58:21.926741 master-2 kubenswrapper[4776]: E1011 10:58:21.926723 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": container with ID starting with a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12 not found: ID does not exist" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"
Oct 11 10:58:21.926813 master-2 kubenswrapper[4776]: I1011 10:58:21.926739 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"} err="failed to get container status \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": rpc error: code = NotFound desc = could not find container \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": container with ID starting with a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12 not found: ID does not exist"
Oct 11 10:58:22.137409 master-2 kubenswrapper[4776]: I1011 10:58:22.137290 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:22.146058 master-2 kubenswrapper[4776]: I1011 10:58:22.146001 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:22.177192 master-2 kubenswrapper[4776]: I1011 10:58:22.177102 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: E1011 10:58:22.177502 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata"
Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: I1011 10:58:22.177525 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata"
Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: E1011 10:58:22.177545 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log"
Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: I1011 10:58:22.177553 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log"
Oct 11 10:58:22.177857 master-2 kubenswrapper[4776]: I1011 10:58:22.177806 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata"
Oct 11 10:58:22.177857 master-2 kubenswrapper[4776]: I1011 10:58:22.177838 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log"
Oct 11 10:58:22.178815 master-2 kubenswrapper[4776]: I1011 10:58:22.178779 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:58:22.181271 master-2 kubenswrapper[4776]: I1011 10:58:22.181087 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 11 10:58:22.181944 master-2 kubenswrapper[4776]: I1011 10:58:22.181878 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 10:58:22.197323 master-2 kubenswrapper[4776]: I1011 10:58:22.197259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:22.330874 master-2 kubenswrapper[4776]: I1011 10:58:22.330777 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.330874 master-2 kubenswrapper[4776]: I1011 10:58:22.330860 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.331167 master-2 kubenswrapper[4776]: I1011 10:58:22.331110 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.331205 master-2 kubenswrapper[4776]: I1011 10:58:22.331182 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.331311 master-2 kubenswrapper[4776]: I1011 10:58:22.331274 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434126 master-2 kubenswrapper[4776]: I1011 10:58:22.434058 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434126 master-2 kubenswrapper[4776]: I1011 10:58:22.434113 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434182 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.434958 master-2 kubenswrapper[4776]: I1011 10:58:22.434610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.437704 master-2 kubenswrapper[4776]: I1011 10:58:22.437643 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.438608 master-2 kubenswrapper[4776]: I1011 10:58:22.438552 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.439738 master-2 kubenswrapper[4776]: I1011 10:58:22.439704 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.453006 master-2 kubenswrapper[4776]: I1011 10:58:22.452972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0"
Oct 11 10:58:22.519485 master-2 kubenswrapper[4776]: I1011 10:58:22.519415 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 10:58:22.735830 master-1 kubenswrapper[4771]: I1011 10:58:22.735718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7dfb8a04-f489-4ab7-b3fa-9477b10e2de0","Type":"ContainerStarted","Data":"00f982736e4546898bb19bea284d3e39908bbc699fd22a8b9544393a4050273b"}
Oct 11 10:58:22.780804 master-1 kubenswrapper[4771]: I1011 10:58:22.780653 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.780616743 podStartE2EDuration="2.780616743s" podCreationTimestamp="2025-10-11 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:58:22.773303393 +0000 UTC m=+1934.747529894" watchObservedRunningTime="2025-10-11 10:58:22.780616743 +0000 UTC m=+1934.754843214"
Oct 11 10:58:22.965093 master-2 kubenswrapper[4776]: I1011 10:58:22.965034 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829003 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"ee0892194461a7d514a5c0d355b0ae1f0ad4a1c2f97e3ca26937b5540ddf1137"}
Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829059 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"3b305e1064b6f89b22836debeb11b4ad43a606968326f3a60d61a16c65817764"} Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829074 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"d3b026a74a2b7918e6e2879e2e3f656198028459d1ed0cab385e8a0df166070f"} Oct 11 10:58:23.867667 master-2 kubenswrapper[4776]: I1011 10:58:23.867561 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.867533287 podStartE2EDuration="1.867533287s" podCreationTimestamp="2025-10-11 10:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:58:23.862232953 +0000 UTC m=+1938.646659682" watchObservedRunningTime="2025-10-11 10:58:23.867533287 +0000 UTC m=+1938.651960006" Oct 11 10:58:24.067713 master-2 kubenswrapper[4776]: I1011 10:58:24.067560 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" path="/var/lib/kubelet/pods/2ec11821-cdf5-45e1-a138-2b62dad57cc3/volumes" Oct 11 10:58:27.519694 master-2 kubenswrapper[4776]: I1011 10:58:27.519625 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:58:27.520344 master-2 kubenswrapper[4776]: I1011 10:58:27.519956 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:58:30.724402 master-1 kubenswrapper[4771]: I1011 10:58:30.724259 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 10:58:30.724402 master-1 kubenswrapper[4771]: I1011 10:58:30.724399 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Oct 11 10:58:31.742806 master-1 kubenswrapper[4771]: I1011 10:58:31.742658 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7dfb8a04-f489-4ab7-b3fa-9477b10e2de0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.129.0.176:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:31.742806 master-1 kubenswrapper[4771]: I1011 10:58:31.742808 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7dfb8a04-f489-4ab7-b3fa-9477b10e2de0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.129.0.176:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:32.519936 master-2 kubenswrapper[4776]: I1011 10:58:32.519751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:58:32.519936 master-2 kubenswrapper[4776]: I1011 10:58:32.519919 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:58:33.540999 master-2 kubenswrapper[4776]: I1011 10:58:33.540928 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="62606489-fbdc-4c5f-8cad-744bdba9716c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:33.541783 master-2 kubenswrapper[4776]: I1011 10:58:33.540931 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="62606489-fbdc-4c5f-8cad-744bdba9716c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:40.732621 master-1 kubenswrapper[4771]: I1011 
10:58:40.732399 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 10:58:40.733851 master-1 kubenswrapper[4771]: I1011 10:58:40.733066 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 10:58:40.734904 master-1 kubenswrapper[4771]: I1011 10:58:40.734833 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 10:58:40.747903 master-1 kubenswrapper[4771]: I1011 10:58:40.747842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 10:58:40.926496 master-1 kubenswrapper[4771]: I1011 10:58:40.925682 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 10:58:40.939981 master-1 kubenswrapper[4771]: I1011 10:58:40.937377 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 10:58:42.527405 master-2 kubenswrapper[4776]: I1011 10:58:42.527175 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:58:42.528100 master-2 kubenswrapper[4776]: I1011 10:58:42.527987 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:58:42.538663 master-2 kubenswrapper[4776]: I1011 10:58:42.538609 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:58:43.006081 master-2 kubenswrapper[4776]: I1011 10:58:43.006015 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 11:00:00.178176 master-1 kubenswrapper[4771]: I1011 11:00:00.178072 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv"] Oct 11 11:00:00.180574 master-1 kubenswrapper[4771]: I1011 11:00:00.180490 4771 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.183304 master-1 kubenswrapper[4771]: I1011 11:00:00.183238 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 11:00:00.183508 master-1 kubenswrapper[4771]: I1011 11:00:00.183459 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-hbjq2" Oct 11 11:00:00.183896 master-1 kubenswrapper[4771]: I1011 11:00:00.183862 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 11:00:00.187722 master-1 kubenswrapper[4771]: I1011 11:00:00.187667 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv"] Oct 11 11:00:00.291456 master-1 kubenswrapper[4771]: I1011 11:00:00.291252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8zpc\" (UniqueName: \"kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.291456 master-1 kubenswrapper[4771]: I1011 11:00:00.291346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.291456 master-1 kubenswrapper[4771]: I1011 11:00:00.291474 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.393918 master-1 kubenswrapper[4771]: I1011 11:00:00.393855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.394245 master-1 kubenswrapper[4771]: I1011 11:00:00.393974 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8zpc\" (UniqueName: \"kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.394245 master-1 kubenswrapper[4771]: I1011 11:00:00.394022 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.395505 master-1 kubenswrapper[4771]: I1011 11:00:00.395036 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.405421 master-1 kubenswrapper[4771]: I1011 11:00:00.398804 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.432977 master-1 kubenswrapper[4771]: I1011 11:00:00.432845 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8zpc\" (UniqueName: \"kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc\") pod \"collect-profiles-29336340-jv5mv\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:00.528980 master-1 kubenswrapper[4771]: I1011 11:00:00.528561 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:01.049329 master-1 kubenswrapper[4771]: I1011 11:00:01.049253 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv"] Oct 11 11:00:01.827712 master-1 kubenswrapper[4771]: I1011 11:00:01.827606 4771 generic.go:334] "Generic (PLEG): container finished" podID="52522dc6-1667-4ae1-ba84-82963e615ae0" containerID="66ebd1c6bea5f170d205b6caa3ef551df56836492f79d6c8b4e2315482c9617d" exitCode=0 Oct 11 11:00:01.827712 master-1 kubenswrapper[4771]: I1011 11:00:01.827703 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" event={"ID":"52522dc6-1667-4ae1-ba84-82963e615ae0","Type":"ContainerDied","Data":"66ebd1c6bea5f170d205b6caa3ef551df56836492f79d6c8b4e2315482c9617d"} Oct 11 11:00:01.828415 master-1 kubenswrapper[4771]: I1011 11:00:01.827752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" event={"ID":"52522dc6-1667-4ae1-ba84-82963e615ae0","Type":"ContainerStarted","Data":"4856a08c0ef28ff8db7f8bf2d0a82acc3d664647b30280176ffde22ed48a00b8"} Oct 11 11:00:03.273834 master-1 kubenswrapper[4771]: I1011 11:00:03.273758 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:03.360925 master-1 kubenswrapper[4771]: I1011 11:00:03.360783 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume\") pod \"52522dc6-1667-4ae1-ba84-82963e615ae0\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " Oct 11 11:00:03.361233 master-1 kubenswrapper[4771]: I1011 11:00:03.361017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8zpc\" (UniqueName: \"kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc\") pod \"52522dc6-1667-4ae1-ba84-82963e615ae0\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " Oct 11 11:00:03.361233 master-1 kubenswrapper[4771]: I1011 11:00:03.361056 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume\") pod \"52522dc6-1667-4ae1-ba84-82963e615ae0\" (UID: \"52522dc6-1667-4ae1-ba84-82963e615ae0\") " Oct 11 11:00:03.363727 master-1 kubenswrapper[4771]: I1011 11:00:03.363641 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume" (OuterVolumeSpecName: "config-volume") pod "52522dc6-1667-4ae1-ba84-82963e615ae0" (UID: "52522dc6-1667-4ae1-ba84-82963e615ae0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:00:03.364224 master-1 kubenswrapper[4771]: I1011 11:00:03.364025 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52522dc6-1667-4ae1-ba84-82963e615ae0" (UID: "52522dc6-1667-4ae1-ba84-82963e615ae0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:00:03.365533 master-1 kubenswrapper[4771]: I1011 11:00:03.365327 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc" (OuterVolumeSpecName: "kube-api-access-m8zpc") pod "52522dc6-1667-4ae1-ba84-82963e615ae0" (UID: "52522dc6-1667-4ae1-ba84-82963e615ae0"). InnerVolumeSpecName "kube-api-access-m8zpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:00:03.465652 master-1 kubenswrapper[4771]: I1011 11:00:03.465549 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52522dc6-1667-4ae1-ba84-82963e615ae0-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:03.465652 master-1 kubenswrapper[4771]: I1011 11:00:03.465591 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8zpc\" (UniqueName: \"kubernetes.io/projected/52522dc6-1667-4ae1-ba84-82963e615ae0-kube-api-access-m8zpc\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:03.465652 master-1 kubenswrapper[4771]: I1011 11:00:03.465604 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52522dc6-1667-4ae1-ba84-82963e615ae0-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:03.849761 master-1 kubenswrapper[4771]: I1011 11:00:03.849639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" event={"ID":"52522dc6-1667-4ae1-ba84-82963e615ae0","Type":"ContainerDied","Data":"4856a08c0ef28ff8db7f8bf2d0a82acc3d664647b30280176ffde22ed48a00b8"} Oct 11 11:00:03.849761 master-1 kubenswrapper[4771]: I1011 11:00:03.849710 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4856a08c0ef28ff8db7f8bf2d0a82acc3d664647b30280176ffde22ed48a00b8" Oct 11 11:00:03.849761 master-1 kubenswrapper[4771]: I1011 11:00:03.849706 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv" Oct 11 11:00:17.067613 master-2 kubenswrapper[4776]: I1011 11:00:17.067555 4776 scope.go:117] "RemoveContainer" containerID="4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e" Oct 11 11:00:17.091689 master-2 kubenswrapper[4776]: I1011 11:00:17.091589 4776 scope.go:117] "RemoveContainer" containerID="33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52" Oct 11 11:00:23.367693 master-1 kubenswrapper[4771]: I1011 11:00:23.367599 4771 scope.go:117] "RemoveContainer" containerID="b9f9706961ea78a9f4e52f8e9ebb80aedc250ae90f55ef49b6cd39d2d53a0f62" Oct 11 11:00:23.412443 master-1 kubenswrapper[4771]: I1011 11:00:23.412348 4771 scope.go:117] "RemoveContainer" containerID="d4634f70346f96ae4f97fe711847f0e072862de8b631ac6a0aaa341026f8675e" Oct 11 11:00:23.480726 master-1 kubenswrapper[4771]: I1011 11:00:23.480683 4771 scope.go:117] "RemoveContainer" containerID="94fe8e005fb0a8b586a5c6a1e344905a51e3390259171a77f131bc97d101f438" Oct 11 11:00:37.893048 master-1 kubenswrapper[4771]: I1011 11:00:37.891643 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:37.894054 master-1 kubenswrapper[4771]: E1011 11:00:37.893285 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="52522dc6-1667-4ae1-ba84-82963e615ae0" containerName="collect-profiles" Oct 11 11:00:37.894054 master-1 kubenswrapper[4771]: I1011 11:00:37.893309 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="52522dc6-1667-4ae1-ba84-82963e615ae0" containerName="collect-profiles" Oct 11 11:00:37.894054 master-1 kubenswrapper[4771]: I1011 11:00:37.893551 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="52522dc6-1667-4ae1-ba84-82963e615ae0" containerName="collect-profiles" Oct 11 11:00:37.895870 master-1 kubenswrapper[4771]: I1011 11:00:37.895827 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:37.929781 master-1 kubenswrapper[4771]: I1011 11:00:37.929688 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:38.016496 master-1 kubenswrapper[4771]: I1011 11:00:38.016416 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.016783 master-1 kubenswrapper[4771]: I1011 11:00:38.016525 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgf2\" (UniqueName: \"kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.016783 master-1 kubenswrapper[4771]: I1011 11:00:38.016611 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.118998 master-1 kubenswrapper[4771]: I1011 11:00:38.118474 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.118998 master-1 kubenswrapper[4771]: I1011 11:00:38.118598 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.118998 master-1 kubenswrapper[4771]: I1011 11:00:38.118638 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rgf2\" (UniqueName: \"kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.119388 master-1 kubenswrapper[4771]: I1011 11:00:38.119057 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.119475 master-1 kubenswrapper[4771]: I1011 11:00:38.119433 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.140598 master-1 kubenswrapper[4771]: I1011 11:00:38.140522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rgf2\" (UniqueName: \"kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2\") pod \"redhat-marketplace-ffxw9\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.226857 master-1 kubenswrapper[4771]: I1011 11:00:38.226698 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:38.696130 master-1 kubenswrapper[4771]: I1011 11:00:38.696066 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:39.249549 master-1 kubenswrapper[4771]: I1011 11:00:39.249331 4771 generic.go:334] "Generic (PLEG): container finished" podID="9073372c-472d-49fc-865d-296c5e7e894e" containerID="93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2" exitCode=0 Oct 11 11:00:39.249549 master-1 kubenswrapper[4771]: I1011 11:00:39.249417 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerDied","Data":"93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2"} Oct 11 11:00:39.249549 master-1 kubenswrapper[4771]: I1011 11:00:39.249456 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerStarted","Data":"8ea364933c267a126636f7f3d6af92bf9dc8688de7b8628872ff7da70afcc5b6"} Oct 11 11:00:40.264428 master-1 
kubenswrapper[4771]: I1011 11:00:40.264287 4771 generic.go:334] "Generic (PLEG): container finished" podID="9073372c-472d-49fc-865d-296c5e7e894e" containerID="c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718" exitCode=0 Oct 11 11:00:40.265411 master-1 kubenswrapper[4771]: I1011 11:00:40.264443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerDied","Data":"c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718"} Oct 11 11:00:41.277444 master-1 kubenswrapper[4771]: I1011 11:00:41.277328 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerStarted","Data":"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f"} Oct 11 11:00:41.314851 master-1 kubenswrapper[4771]: I1011 11:00:41.314731 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ffxw9" podStartSLOduration=2.637012764 podStartE2EDuration="4.314704383s" podCreationTimestamp="2025-10-11 11:00:37 +0000 UTC" firstStartedPulling="2025-10-11 11:00:39.252410865 +0000 UTC m=+2071.226637346" lastFinishedPulling="2025-10-11 11:00:40.930102484 +0000 UTC m=+2072.904328965" observedRunningTime="2025-10-11 11:00:41.309824232 +0000 UTC m=+2073.284050673" watchObservedRunningTime="2025-10-11 11:00:41.314704383 +0000 UTC m=+2073.288930864" Oct 11 11:00:44.776342 master-0 kubenswrapper[4790]: I1011 11:00:44.776188 4790 scope.go:117] "RemoveContainer" containerID="cc0e410018cdbb38cb0a44455ce0c9bcffaa24fb5b85e7a4f71ece632724bed8" Oct 11 11:00:48.227479 master-1 kubenswrapper[4771]: I1011 11:00:48.227390 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:48.227479 master-1 kubenswrapper[4771]: I1011 
11:00:48.227463 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:48.301311 master-1 kubenswrapper[4771]: I1011 11:00:48.300650 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:48.423241 master-1 kubenswrapper[4771]: I1011 11:00:48.423185 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:48.570862 master-1 kubenswrapper[4771]: I1011 11:00:48.570780 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:50.375883 master-1 kubenswrapper[4771]: I1011 11:00:50.375760 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ffxw9" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="registry-server" containerID="cri-o://b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f" gracePeriod=2 Oct 11 11:00:51.018274 master-1 kubenswrapper[4771]: I1011 11:00:51.018182 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:51.175166 master-1 kubenswrapper[4771]: I1011 11:00:51.175054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rgf2\" (UniqueName: \"kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2\") pod \"9073372c-472d-49fc-865d-296c5e7e894e\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " Oct 11 11:00:51.175612 master-1 kubenswrapper[4771]: I1011 11:00:51.175250 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content\") pod \"9073372c-472d-49fc-865d-296c5e7e894e\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " Oct 11 11:00:51.175612 master-1 kubenswrapper[4771]: I1011 11:00:51.175293 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities\") pod \"9073372c-472d-49fc-865d-296c5e7e894e\" (UID: \"9073372c-472d-49fc-865d-296c5e7e894e\") " Oct 11 11:00:51.177908 master-1 kubenswrapper[4771]: I1011 11:00:51.177810 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities" (OuterVolumeSpecName: "utilities") pod "9073372c-472d-49fc-865d-296c5e7e894e" (UID: "9073372c-472d-49fc-865d-296c5e7e894e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:00:51.180713 master-1 kubenswrapper[4771]: I1011 11:00:51.180634 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2" (OuterVolumeSpecName: "kube-api-access-7rgf2") pod "9073372c-472d-49fc-865d-296c5e7e894e" (UID: "9073372c-472d-49fc-865d-296c5e7e894e"). 
InnerVolumeSpecName "kube-api-access-7rgf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:00:51.205135 master-1 kubenswrapper[4771]: I1011 11:00:51.204908 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9073372c-472d-49fc-865d-296c5e7e894e" (UID: "9073372c-472d-49fc-865d-296c5e7e894e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:00:51.278804 master-1 kubenswrapper[4771]: I1011 11:00:51.278682 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rgf2\" (UniqueName: \"kubernetes.io/projected/9073372c-472d-49fc-865d-296c5e7e894e-kube-api-access-7rgf2\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:51.278804 master-1 kubenswrapper[4771]: I1011 11:00:51.278755 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:51.278804 master-1 kubenswrapper[4771]: I1011 11:00:51.278786 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9073372c-472d-49fc-865d-296c5e7e894e-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 11:00:51.390596 master-1 kubenswrapper[4771]: I1011 11:00:51.390508 4771 generic.go:334] "Generic (PLEG): container finished" podID="9073372c-472d-49fc-865d-296c5e7e894e" containerID="b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f" exitCode=0 Oct 11 11:00:51.391596 master-1 kubenswrapper[4771]: I1011 11:00:51.390632 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffxw9" Oct 11 11:00:51.391596 master-1 kubenswrapper[4771]: I1011 11:00:51.390615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerDied","Data":"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f"} Oct 11 11:00:51.391596 master-1 kubenswrapper[4771]: I1011 11:00:51.390879 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffxw9" event={"ID":"9073372c-472d-49fc-865d-296c5e7e894e","Type":"ContainerDied","Data":"8ea364933c267a126636f7f3d6af92bf9dc8688de7b8628872ff7da70afcc5b6"} Oct 11 11:00:51.391596 master-1 kubenswrapper[4771]: I1011 11:00:51.390932 4771 scope.go:117] "RemoveContainer" containerID="b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f" Oct 11 11:00:51.416218 master-1 kubenswrapper[4771]: I1011 11:00:51.415659 4771 scope.go:117] "RemoveContainer" containerID="c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718" Oct 11 11:00:51.459900 master-1 kubenswrapper[4771]: I1011 11:00:51.459715 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:51.465849 master-1 kubenswrapper[4771]: I1011 11:00:51.465774 4771 scope.go:117] "RemoveContainer" containerID="93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2" Oct 11 11:00:51.467651 master-1 kubenswrapper[4771]: I1011 11:00:51.467590 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffxw9"] Oct 11 11:00:51.533257 master-1 kubenswrapper[4771]: I1011 11:00:51.533197 4771 scope.go:117] "RemoveContainer" containerID="b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f" Oct 11 11:00:51.534287 master-1 kubenswrapper[4771]: E1011 11:00:51.534200 4771 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f\": container with ID starting with b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f not found: ID does not exist" containerID="b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f" Oct 11 11:00:51.534287 master-1 kubenswrapper[4771]: I1011 11:00:51.534275 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f"} err="failed to get container status \"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f\": rpc error: code = NotFound desc = could not find container \"b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f\": container with ID starting with b9ffae440e93767bb6dd3cf6faa3e82f54777ec6a2484e2103a6a44c092bc37f not found: ID does not exist" Oct 11 11:00:51.534670 master-1 kubenswrapper[4771]: I1011 11:00:51.534313 4771 scope.go:117] "RemoveContainer" containerID="c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718" Oct 11 11:00:51.535197 master-1 kubenswrapper[4771]: E1011 11:00:51.535136 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718\": container with ID starting with c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718 not found: ID does not exist" containerID="c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718" Oct 11 11:00:51.535197 master-1 kubenswrapper[4771]: I1011 11:00:51.535171 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718"} err="failed to get container status \"c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718\": rpc error: code = 
NotFound desc = could not find container \"c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718\": container with ID starting with c2bfecdebb12fef33e766ec37e47f3d0cf65379687c38507e2fd4dfbcc98d718 not found: ID does not exist" Oct 11 11:00:51.535197 master-1 kubenswrapper[4771]: I1011 11:00:51.535190 4771 scope.go:117] "RemoveContainer" containerID="93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2" Oct 11 11:00:51.535893 master-1 kubenswrapper[4771]: E1011 11:00:51.535836 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2\": container with ID starting with 93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2 not found: ID does not exist" containerID="93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2" Oct 11 11:00:51.535893 master-1 kubenswrapper[4771]: I1011 11:00:51.535869 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2"} err="failed to get container status \"93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2\": rpc error: code = NotFound desc = could not find container \"93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2\": container with ID starting with 93acab6f93e9fe153fc618c7354e26bfbf7629928b6655f4d192e2bb96b181a2 not found: ID does not exist" Oct 11 11:00:52.456375 master-1 kubenswrapper[4771]: I1011 11:00:52.456250 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9073372c-472d-49fc-865d-296c5e7e894e" path="/var/lib/kubelet/pods/9073372c-472d-49fc-865d-296c5e7e894e/volumes" Oct 11 11:01:00.179169 master-1 kubenswrapper[4771]: I1011 11:01:00.179087 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29336341-k4h7v"] Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: 
E1011 11:01:00.179497 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="extract-utilities" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: I1011 11:01:00.179515 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="extract-utilities" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: E1011 11:01:00.179550 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="registry-server" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: I1011 11:01:00.179559 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="registry-server" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: E1011 11:01:00.179578 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="extract-content" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: I1011 11:01:00.179587 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="extract-content" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: I1011 11:01:00.179786 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9073372c-472d-49fc-865d-296c5e7e894e" containerName="registry-server" Oct 11 11:01:00.182041 master-1 kubenswrapper[4771]: I1011 11:01:00.180680 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.203563 master-1 kubenswrapper[4771]: I1011 11:01:00.201836 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336341-k4h7v"] Oct 11 11:01:00.297938 master-1 kubenswrapper[4771]: I1011 11:01:00.297837 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.298296 master-1 kubenswrapper[4771]: I1011 11:01:00.298273 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.298419 master-1 kubenswrapper[4771]: I1011 11:01:00.298385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzkz\" (UniqueName: \"kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.298677 master-1 kubenswrapper[4771]: I1011 11:01:00.298607 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.401959 master-1 kubenswrapper[4771]: I1011 
11:01:00.401858 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.402281 master-1 kubenswrapper[4771]: I1011 11:01:00.401976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzkz\" (UniqueName: \"kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.402281 master-1 kubenswrapper[4771]: I1011 11:01:00.402078 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.402281 master-1 kubenswrapper[4771]: I1011 11:01:00.402149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.408120 master-1 kubenswrapper[4771]: I1011 11:01:00.408003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.410252 master-1 kubenswrapper[4771]: I1011 
11:01:00.410188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.411602 master-1 kubenswrapper[4771]: I1011 11:01:00.411527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.424923 master-1 kubenswrapper[4771]: I1011 11:01:00.424849 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzkz\" (UniqueName: \"kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz\") pod \"keystone-cron-29336341-k4h7v\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:00.507305 master-1 kubenswrapper[4771]: I1011 11:01:00.507090 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:01.047522 master-1 kubenswrapper[4771]: W1011 11:01:01.047238 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod511b8ba0_8038_431a_8f39_f76a538b45be.slice/crio-a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff WatchSource:0}: Error finding container a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff: Status 404 returned error can't find the container with id a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff Oct 11 11:01:01.053614 master-1 kubenswrapper[4771]: I1011 11:01:01.053536 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336341-k4h7v"] Oct 11 11:01:01.502502 master-1 kubenswrapper[4771]: I1011 11:01:01.502342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336341-k4h7v" event={"ID":"511b8ba0-8038-431a-8f39-f76a538b45be","Type":"ContainerStarted","Data":"8a025688c4f85361b8fd4ce76230c6631cc23efe111a645e578034d939bf0805"} Oct 11 11:01:01.502502 master-1 kubenswrapper[4771]: I1011 11:01:01.502472 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336341-k4h7v" event={"ID":"511b8ba0-8038-431a-8f39-f76a538b45be","Type":"ContainerStarted","Data":"a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff"} Oct 11 11:01:01.541741 master-1 kubenswrapper[4771]: I1011 11:01:01.541617 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29336341-k4h7v" podStartSLOduration=1.541592696 podStartE2EDuration="1.541592696s" podCreationTimestamp="2025-10-11 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:01:01.535678466 +0000 UTC m=+2093.509904907" watchObservedRunningTime="2025-10-11 
11:01:01.541592696 +0000 UTC m=+2093.515819157" Oct 11 11:01:03.525672 master-1 kubenswrapper[4771]: I1011 11:01:03.525522 4771 generic.go:334] "Generic (PLEG): container finished" podID="511b8ba0-8038-431a-8f39-f76a538b45be" containerID="8a025688c4f85361b8fd4ce76230c6631cc23efe111a645e578034d939bf0805" exitCode=0 Oct 11 11:01:03.525672 master-1 kubenswrapper[4771]: I1011 11:01:03.525597 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336341-k4h7v" event={"ID":"511b8ba0-8038-431a-8f39-f76a538b45be","Type":"ContainerDied","Data":"8a025688c4f85361b8fd4ce76230c6631cc23efe111a645e578034d939bf0805"} Oct 11 11:01:05.034523 master-1 kubenswrapper[4771]: I1011 11:01:05.034426 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:05.049622 master-1 kubenswrapper[4771]: I1011 11:01:05.049404 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys\") pod \"511b8ba0-8038-431a-8f39-f76a538b45be\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " Oct 11 11:01:05.049622 master-1 kubenswrapper[4771]: I1011 11:01:05.049504 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle\") pod \"511b8ba0-8038-431a-8f39-f76a538b45be\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " Oct 11 11:01:05.050083 master-1 kubenswrapper[4771]: I1011 11:01:05.049708 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data\") pod \"511b8ba0-8038-431a-8f39-f76a538b45be\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " Oct 11 11:01:05.050083 master-1 kubenswrapper[4771]: I1011 
11:01:05.049759 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzkz\" (UniqueName: \"kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz\") pod \"511b8ba0-8038-431a-8f39-f76a538b45be\" (UID: \"511b8ba0-8038-431a-8f39-f76a538b45be\") " Oct 11 11:01:05.053410 master-1 kubenswrapper[4771]: I1011 11:01:05.053287 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "511b8ba0-8038-431a-8f39-f76a538b45be" (UID: "511b8ba0-8038-431a-8f39-f76a538b45be"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:01:05.059965 master-1 kubenswrapper[4771]: I1011 11:01:05.059860 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz" (OuterVolumeSpecName: "kube-api-access-7rzkz") pod "511b8ba0-8038-431a-8f39-f76a538b45be" (UID: "511b8ba0-8038-431a-8f39-f76a538b45be"). InnerVolumeSpecName "kube-api-access-7rzkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:01:05.098648 master-1 kubenswrapper[4771]: I1011 11:01:05.098568 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "511b8ba0-8038-431a-8f39-f76a538b45be" (UID: "511b8ba0-8038-431a-8f39-f76a538b45be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:01:05.110442 master-1 kubenswrapper[4771]: I1011 11:01:05.110341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data" (OuterVolumeSpecName: "config-data") pod "511b8ba0-8038-431a-8f39-f76a538b45be" (UID: "511b8ba0-8038-431a-8f39-f76a538b45be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:01:05.152574 master-1 kubenswrapper[4771]: I1011 11:01:05.152485 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:01:05.152574 master-1 kubenswrapper[4771]: I1011 11:01:05.152550 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 11:01:05.152574 master-1 kubenswrapper[4771]: I1011 11:01:05.152567 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rzkz\" (UniqueName: \"kubernetes.io/projected/511b8ba0-8038-431a-8f39-f76a538b45be-kube-api-access-7rzkz\") on node \"master-1\" DevicePath \"\"" Oct 11 11:01:05.152574 master-1 kubenswrapper[4771]: I1011 11:01:05.152584 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/511b8ba0-8038-431a-8f39-f76a538b45be-fernet-keys\") on node \"master-1\" DevicePath \"\"" Oct 11 11:01:05.549830 master-1 kubenswrapper[4771]: I1011 11:01:05.549727 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336341-k4h7v" event={"ID":"511b8ba0-8038-431a-8f39-f76a538b45be","Type":"ContainerDied","Data":"a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff"} Oct 11 11:01:05.549830 master-1 
kubenswrapper[4771]: I1011 11:01:05.549796 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ed1d03b679a3cd94ff44bc23ec01d6de0bca22701ccdc007ac4cf09faceaff" Oct 11 11:01:05.549830 master-1 kubenswrapper[4771]: I1011 11:01:05.549812 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336341-k4h7v" Oct 11 11:01:17.180213 master-2 kubenswrapper[4776]: I1011 11:01:17.180109 4776 scope.go:117] "RemoveContainer" containerID="1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260" Oct 11 11:01:44.844414 master-0 kubenswrapper[4790]: I1011 11:01:44.844371 4790 scope.go:117] "RemoveContainer" containerID="d68ca291cf2b0da68d6a8d6c151be0aa939d7b802691ee00796f934bd8619918" Oct 11 11:02:11.542298 master-1 kubenswrapper[4771]: I1011 11:02:11.542078 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-rbtpx"] Oct 11 11:02:11.543182 master-1 kubenswrapper[4771]: E1011 11:02:11.542669 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511b8ba0-8038-431a-8f39-f76a538b45be" containerName="keystone-cron" Oct 11 11:02:11.543182 master-1 kubenswrapper[4771]: I1011 11:02:11.542691 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="511b8ba0-8038-431a-8f39-f76a538b45be" containerName="keystone-cron" Oct 11 11:02:11.543182 master-1 kubenswrapper[4771]: I1011 11:02:11.542865 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="511b8ba0-8038-431a-8f39-f76a538b45be" containerName="keystone-cron" Oct 11 11:02:11.543876 master-1 kubenswrapper[4771]: I1011 11:02:11.543844 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:11.558030 master-1 kubenswrapper[4771]: I1011 11:02:11.557950 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-rbtpx"] Oct 11 11:02:11.675106 master-1 kubenswrapper[4771]: I1011 11:02:11.675024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbc8\" (UniqueName: \"kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8\") pod \"octavia-db-create-rbtpx\" (UID: \"041373ee-1533-4bc6-abd2-80d16bfa5f23\") " pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:11.778782 master-1 kubenswrapper[4771]: I1011 11:02:11.778683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbc8\" (UniqueName: \"kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8\") pod \"octavia-db-create-rbtpx\" (UID: \"041373ee-1533-4bc6-abd2-80d16bfa5f23\") " pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:11.815238 master-1 kubenswrapper[4771]: I1011 11:02:11.814591 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbc8\" (UniqueName: \"kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8\") pod \"octavia-db-create-rbtpx\" (UID: \"041373ee-1533-4bc6-abd2-80d16bfa5f23\") " pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:11.875012 master-1 kubenswrapper[4771]: I1011 11:02:11.874911 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:12.370168 master-1 kubenswrapper[4771]: I1011 11:02:12.370102 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-rbtpx"] Oct 11 11:02:13.297111 master-1 kubenswrapper[4771]: I1011 11:02:13.296943 4771 generic.go:334] "Generic (PLEG): container finished" podID="041373ee-1533-4bc6-abd2-80d16bfa5f23" containerID="736c15aefe67b305735f91c4e8c2109ad242f954cfb3635af77a747443410e30" exitCode=0 Oct 11 11:02:13.297111 master-1 kubenswrapper[4771]: I1011 11:02:13.297009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-rbtpx" event={"ID":"041373ee-1533-4bc6-abd2-80d16bfa5f23","Type":"ContainerDied","Data":"736c15aefe67b305735f91c4e8c2109ad242f954cfb3635af77a747443410e30"} Oct 11 11:02:13.297111 master-1 kubenswrapper[4771]: I1011 11:02:13.297045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-rbtpx" event={"ID":"041373ee-1533-4bc6-abd2-80d16bfa5f23","Type":"ContainerStarted","Data":"59b9f9010aba5846723186241ffea5afeeadfbf4f443023c5bddef0541687b18"} Oct 11 11:02:14.763404 master-1 kubenswrapper[4771]: I1011 11:02:14.763218 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:14.797388 master-1 kubenswrapper[4771]: I1011 11:02:14.793319 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srbc8\" (UniqueName: \"kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8\") pod \"041373ee-1533-4bc6-abd2-80d16bfa5f23\" (UID: \"041373ee-1533-4bc6-abd2-80d16bfa5f23\") " Oct 11 11:02:14.798865 master-1 kubenswrapper[4771]: I1011 11:02:14.798797 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8" (OuterVolumeSpecName: "kube-api-access-srbc8") pod "041373ee-1533-4bc6-abd2-80d16bfa5f23" (UID: "041373ee-1533-4bc6-abd2-80d16bfa5f23"). InnerVolumeSpecName "kube-api-access-srbc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:02:14.895383 master-1 kubenswrapper[4771]: I1011 11:02:14.895280 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srbc8\" (UniqueName: \"kubernetes.io/projected/041373ee-1533-4bc6-abd2-80d16bfa5f23-kube-api-access-srbc8\") on node \"master-1\" DevicePath \"\"" Oct 11 11:02:15.320832 master-1 kubenswrapper[4771]: I1011 11:02:15.320771 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-rbtpx" event={"ID":"041373ee-1533-4bc6-abd2-80d16bfa5f23","Type":"ContainerDied","Data":"59b9f9010aba5846723186241ffea5afeeadfbf4f443023c5bddef0541687b18"} Oct 11 11:02:15.321186 master-1 kubenswrapper[4771]: I1011 11:02:15.321164 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b9f9010aba5846723186241ffea5afeeadfbf4f443023c5bddef0541687b18" Oct 11 11:02:15.321318 master-1 kubenswrapper[4771]: I1011 11:02:15.320813 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-rbtpx" Oct 11 11:02:23.560495 master-1 kubenswrapper[4771]: I1011 11:02:23.560394 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-e4bf-account-create-m2n6j"] Oct 11 11:02:23.561655 master-1 kubenswrapper[4771]: E1011 11:02:23.561002 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041373ee-1533-4bc6-abd2-80d16bfa5f23" containerName="mariadb-database-create" Oct 11 11:02:23.561655 master-1 kubenswrapper[4771]: I1011 11:02:23.561024 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="041373ee-1533-4bc6-abd2-80d16bfa5f23" containerName="mariadb-database-create" Oct 11 11:02:23.561655 master-1 kubenswrapper[4771]: I1011 11:02:23.561262 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="041373ee-1533-4bc6-abd2-80d16bfa5f23" containerName="mariadb-database-create" Oct 11 11:02:23.563514 master-1 kubenswrapper[4771]: I1011 11:02:23.562308 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:23.566755 master-1 kubenswrapper[4771]: I1011 11:02:23.566690 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Oct 11 11:02:23.585438 master-1 kubenswrapper[4771]: I1011 11:02:23.585164 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e4bf-account-create-m2n6j"] Oct 11 11:02:23.619546 master-1 kubenswrapper[4771]: I1011 11:02:23.619479 4771 scope.go:117] "RemoveContainer" containerID="10573443fa9f81c261e267c2d4f01ad7d7cf7482785a8f4f22c2ccd3fa1fc631" Oct 11 11:02:23.638888 master-1 kubenswrapper[4771]: I1011 11:02:23.638654 4771 scope.go:117] "RemoveContainer" containerID="7f100b006260b4ff812a662ca4646172d63077e3423c5b53974c9a4fc93bb108" Oct 11 11:02:23.657568 master-1 kubenswrapper[4771]: I1011 11:02:23.657510 4771 scope.go:117] "RemoveContainer" containerID="33e1159e64df7103066e5f7850051b2adc3d09e823478d0dc1137ddef2aee326" Oct 11 11:02:23.679407 master-1 kubenswrapper[4771]: I1011 11:02:23.679288 4771 scope.go:117] "RemoveContainer" containerID="1d0d93b3fc6393dcdc851e8c3921d7c5d5a44cf9e99d331f9e66f61b3c48f59d" Oct 11 11:02:23.713379 master-1 kubenswrapper[4771]: I1011 11:02:23.713295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgsqp\" (UniqueName: \"kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp\") pod \"octavia-e4bf-account-create-m2n6j\" (UID: \"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8\") " pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:23.815734 master-1 kubenswrapper[4771]: I1011 11:02:23.815585 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgsqp\" (UniqueName: \"kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp\") pod \"octavia-e4bf-account-create-m2n6j\" (UID: 
\"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8\") " pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:23.848951 master-1 kubenswrapper[4771]: I1011 11:02:23.848870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgsqp\" (UniqueName: \"kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp\") pod \"octavia-e4bf-account-create-m2n6j\" (UID: \"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8\") " pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:23.893004 master-1 kubenswrapper[4771]: I1011 11:02:23.892928 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:24.416924 master-1 kubenswrapper[4771]: I1011 11:02:24.416852 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e4bf-account-create-m2n6j"] Oct 11 11:02:24.420252 master-1 kubenswrapper[4771]: W1011 11:02:24.420158 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ba52ba8_24c1_4b0c_83cb_6837e2353fa8.slice/crio-0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e WatchSource:0}: Error finding container 0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e: Status 404 returned error can't find the container with id 0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e Oct 11 11:02:25.442696 master-1 kubenswrapper[4771]: I1011 11:02:25.442589 4771 generic.go:334] "Generic (PLEG): container finished" podID="9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" containerID="24ea528e7f6dc70693d8dee3aad4fc9efa2cfed93954344bd9c5720391f051ef" exitCode=0 Oct 11 11:02:25.442696 master-1 kubenswrapper[4771]: I1011 11:02:25.442656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e4bf-account-create-m2n6j" 
event={"ID":"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8","Type":"ContainerDied","Data":"24ea528e7f6dc70693d8dee3aad4fc9efa2cfed93954344bd9c5720391f051ef"} Oct 11 11:02:25.442696 master-1 kubenswrapper[4771]: I1011 11:02:25.442695 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e4bf-account-create-m2n6j" event={"ID":"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8","Type":"ContainerStarted","Data":"0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e"} Oct 11 11:02:26.867040 master-1 kubenswrapper[4771]: I1011 11:02:26.866274 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:26.984282 master-1 kubenswrapper[4771]: I1011 11:02:26.984225 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgsqp\" (UniqueName: \"kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp\") pod \"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8\" (UID: \"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8\") " Oct 11 11:02:26.990719 master-1 kubenswrapper[4771]: I1011 11:02:26.990638 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp" (OuterVolumeSpecName: "kube-api-access-vgsqp") pod "9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" (UID: "9ba52ba8-24c1-4b0c-83cb-6837e2353fa8"). InnerVolumeSpecName "kube-api-access-vgsqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:02:27.088096 master-1 kubenswrapper[4771]: I1011 11:02:27.088028 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgsqp\" (UniqueName: \"kubernetes.io/projected/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8-kube-api-access-vgsqp\") on node \"master-1\" DevicePath \"\"" Oct 11 11:02:27.463928 master-1 kubenswrapper[4771]: I1011 11:02:27.463753 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e4bf-account-create-m2n6j" event={"ID":"9ba52ba8-24c1-4b0c-83cb-6837e2353fa8","Type":"ContainerDied","Data":"0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e"} Oct 11 11:02:27.463928 master-1 kubenswrapper[4771]: I1011 11:02:27.463808 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0737fbedbdbf7be810d318a8c1d6e9fd5bddd84c6d6ef7682635622648af493e" Oct 11 11:02:27.463928 master-1 kubenswrapper[4771]: I1011 11:02:27.463906 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e4bf-account-create-m2n6j" Oct 11 11:02:29.304092 master-1 kubenswrapper[4771]: I1011 11:02:29.303982 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-8rff9"] Oct 11 11:02:29.305028 master-1 kubenswrapper[4771]: E1011 11:02:29.304956 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" containerName="mariadb-account-create" Oct 11 11:02:29.305028 master-1 kubenswrapper[4771]: I1011 11:02:29.304979 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" containerName="mariadb-account-create" Oct 11 11:02:29.306108 master-1 kubenswrapper[4771]: I1011 11:02:29.306076 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" containerName="mariadb-account-create" Oct 11 11:02:29.307152 master-1 kubenswrapper[4771]: I1011 11:02:29.307121 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:29.320570 master-1 kubenswrapper[4771]: I1011 11:02:29.320492 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-8rff9"] Oct 11 11:02:29.443607 master-1 kubenswrapper[4771]: I1011 11:02:29.443511 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscgw\" (UniqueName: \"kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw\") pod \"octavia-persistence-db-create-8rff9\" (UID: \"f91fc642-f994-42aa-9bb1-589b5bda7c22\") " pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:29.545929 master-1 kubenswrapper[4771]: I1011 11:02:29.545827 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zscgw\" (UniqueName: \"kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw\") pod \"octavia-persistence-db-create-8rff9\" (UID: \"f91fc642-f994-42aa-9bb1-589b5bda7c22\") " pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:29.572393 master-1 kubenswrapper[4771]: I1011 11:02:29.572220 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zscgw\" (UniqueName: \"kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw\") pod \"octavia-persistence-db-create-8rff9\" (UID: \"f91fc642-f994-42aa-9bb1-589b5bda7c22\") " pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:29.627383 master-1 kubenswrapper[4771]: I1011 11:02:29.627258 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:30.151486 master-1 kubenswrapper[4771]: I1011 11:02:30.151379 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-8rff9"] Oct 11 11:02:30.497106 master-1 kubenswrapper[4771]: I1011 11:02:30.497009 4771 generic.go:334] "Generic (PLEG): container finished" podID="f91fc642-f994-42aa-9bb1-589b5bda7c22" containerID="2433d4bbed13285c1fb5cb4b22ca8e93fb7e88d52d9a41f34c5f718dbcf8c96b" exitCode=0 Oct 11 11:02:30.497106 master-1 kubenswrapper[4771]: I1011 11:02:30.497098 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8rff9" event={"ID":"f91fc642-f994-42aa-9bb1-589b5bda7c22","Type":"ContainerDied","Data":"2433d4bbed13285c1fb5cb4b22ca8e93fb7e88d52d9a41f34c5f718dbcf8c96b"} Oct 11 11:02:30.498059 master-1 kubenswrapper[4771]: I1011 11:02:30.497135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8rff9" event={"ID":"f91fc642-f994-42aa-9bb1-589b5bda7c22","Type":"ContainerStarted","Data":"e627db1dcac557c196093adf0eb1e9981e30cd3c3582197fc5b121e6205873d9"} Oct 11 11:02:31.996077 master-1 kubenswrapper[4771]: I1011 11:02:31.995993 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:32.022404 master-1 kubenswrapper[4771]: I1011 11:02:32.022260 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zscgw\" (UniqueName: \"kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw\") pod \"f91fc642-f994-42aa-9bb1-589b5bda7c22\" (UID: \"f91fc642-f994-42aa-9bb1-589b5bda7c22\") " Oct 11 11:02:32.030548 master-1 kubenswrapper[4771]: I1011 11:02:32.030468 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw" (OuterVolumeSpecName: "kube-api-access-zscgw") pod "f91fc642-f994-42aa-9bb1-589b5bda7c22" (UID: "f91fc642-f994-42aa-9bb1-589b5bda7c22"). InnerVolumeSpecName "kube-api-access-zscgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:02:32.125860 master-1 kubenswrapper[4771]: I1011 11:02:32.125752 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zscgw\" (UniqueName: \"kubernetes.io/projected/f91fc642-f994-42aa-9bb1-589b5bda7c22-kube-api-access-zscgw\") on node \"master-1\" DevicePath \"\"" Oct 11 11:02:32.523336 master-1 kubenswrapper[4771]: I1011 11:02:32.523217 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8rff9" event={"ID":"f91fc642-f994-42aa-9bb1-589b5bda7c22","Type":"ContainerDied","Data":"e627db1dcac557c196093adf0eb1e9981e30cd3c3582197fc5b121e6205873d9"} Oct 11 11:02:32.523336 master-1 kubenswrapper[4771]: I1011 11:02:32.523319 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e627db1dcac557c196093adf0eb1e9981e30cd3c3582197fc5b121e6205873d9" Oct 11 11:02:32.523336 master-1 kubenswrapper[4771]: I1011 11:02:32.523316 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-8rff9" Oct 11 11:02:40.244569 master-1 kubenswrapper[4771]: I1011 11:02:40.244467 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-5f0c-account-create-9njpc"] Oct 11 11:02:40.245702 master-1 kubenswrapper[4771]: E1011 11:02:40.245104 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91fc642-f994-42aa-9bb1-589b5bda7c22" containerName="mariadb-database-create" Oct 11 11:02:40.245702 master-1 kubenswrapper[4771]: I1011 11:02:40.245140 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91fc642-f994-42aa-9bb1-589b5bda7c22" containerName="mariadb-database-create" Oct 11 11:02:40.245702 master-1 kubenswrapper[4771]: I1011 11:02:40.245577 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91fc642-f994-42aa-9bb1-589b5bda7c22" containerName="mariadb-database-create" Oct 11 11:02:40.248686 master-1 kubenswrapper[4771]: I1011 11:02:40.247207 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:40.250996 master-1 kubenswrapper[4771]: I1011 11:02:40.250576 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Oct 11 11:02:40.293469 master-1 kubenswrapper[4771]: I1011 11:02:40.293400 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5f0c-account-create-9njpc"] Oct 11 11:02:40.418209 master-1 kubenswrapper[4771]: I1011 11:02:40.417954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh5p2\" (UniqueName: \"kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2\") pod \"octavia-5f0c-account-create-9njpc\" (UID: \"3eb36428-2086-42ca-8ebf-9864a0917971\") " pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:40.520921 master-1 kubenswrapper[4771]: I1011 11:02:40.520772 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh5p2\" (UniqueName: \"kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2\") pod \"octavia-5f0c-account-create-9njpc\" (UID: \"3eb36428-2086-42ca-8ebf-9864a0917971\") " pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:40.557527 master-1 kubenswrapper[4771]: I1011 11:02:40.557456 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh5p2\" (UniqueName: \"kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2\") pod \"octavia-5f0c-account-create-9njpc\" (UID: \"3eb36428-2086-42ca-8ebf-9864a0917971\") " pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:40.585274 master-1 kubenswrapper[4771]: I1011 11:02:40.585226 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:41.108019 master-1 kubenswrapper[4771]: I1011 11:02:41.107945 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5f0c-account-create-9njpc"] Oct 11 11:02:41.115799 master-1 kubenswrapper[4771]: W1011 11:02:41.115704 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb36428_2086_42ca_8ebf_9864a0917971.slice/crio-ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd WatchSource:0}: Error finding container ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd: Status 404 returned error can't find the container with id ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd Oct 11 11:02:41.635956 master-1 kubenswrapper[4771]: I1011 11:02:41.635900 4771 generic.go:334] "Generic (PLEG): container finished" podID="3eb36428-2086-42ca-8ebf-9864a0917971" containerID="9343b079fc4a104c0cb1564885c29bbb8cf2e14829a4f1096f11a9696ef57edf" exitCode=0 Oct 11 11:02:41.635956 master-1 kubenswrapper[4771]: I1011 11:02:41.635949 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5f0c-account-create-9njpc" event={"ID":"3eb36428-2086-42ca-8ebf-9864a0917971","Type":"ContainerDied","Data":"9343b079fc4a104c0cb1564885c29bbb8cf2e14829a4f1096f11a9696ef57edf"} Oct 11 11:02:41.639404 master-1 kubenswrapper[4771]: I1011 11:02:41.636005 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5f0c-account-create-9njpc" event={"ID":"3eb36428-2086-42ca-8ebf-9864a0917971","Type":"ContainerStarted","Data":"ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd"} Oct 11 11:02:43.260753 master-1 kubenswrapper[4771]: I1011 11:02:43.260673 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:43.429582 master-1 kubenswrapper[4771]: I1011 11:02:43.429387 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh5p2\" (UniqueName: \"kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2\") pod \"3eb36428-2086-42ca-8ebf-9864a0917971\" (UID: \"3eb36428-2086-42ca-8ebf-9864a0917971\") " Oct 11 11:02:43.434043 master-1 kubenswrapper[4771]: I1011 11:02:43.433932 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2" (OuterVolumeSpecName: "kube-api-access-hh5p2") pod "3eb36428-2086-42ca-8ebf-9864a0917971" (UID: "3eb36428-2086-42ca-8ebf-9864a0917971"). InnerVolumeSpecName "kube-api-access-hh5p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:02:43.533169 master-1 kubenswrapper[4771]: I1011 11:02:43.533039 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh5p2\" (UniqueName: \"kubernetes.io/projected/3eb36428-2086-42ca-8ebf-9864a0917971-kube-api-access-hh5p2\") on node \"master-1\" DevicePath \"\"" Oct 11 11:02:43.668323 master-1 kubenswrapper[4771]: I1011 11:02:43.668137 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5f0c-account-create-9njpc" event={"ID":"3eb36428-2086-42ca-8ebf-9864a0917971","Type":"ContainerDied","Data":"ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd"} Oct 11 11:02:43.668323 master-1 kubenswrapper[4771]: I1011 11:02:43.668203 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6f1ba6fa658a954ed1dd9fd98150e308b5f8c5446f2f5e0899071d5a4e95fd" Oct 11 11:02:43.668323 master-1 kubenswrapper[4771]: I1011 11:02:43.668235 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5f0c-account-create-9njpc" Oct 11 11:02:46.257954 master-1 kubenswrapper[4771]: I1011 11:02:46.256757 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"] Oct 11 11:02:46.257954 master-1 kubenswrapper[4771]: E1011 11:02:46.257954 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb36428-2086-42ca-8ebf-9864a0917971" containerName="mariadb-account-create" Oct 11 11:02:46.259208 master-1 kubenswrapper[4771]: I1011 11:02:46.257984 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb36428-2086-42ca-8ebf-9864a0917971" containerName="mariadb-account-create" Oct 11 11:02:46.259208 master-1 kubenswrapper[4771]: I1011 11:02:46.258235 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb36428-2086-42ca-8ebf-9864a0917971" containerName="mariadb-account-create" Oct 11 11:02:46.260581 master-1 kubenswrapper[4771]: I1011 11:02:46.260469 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.264094 master-1 kubenswrapper[4771]: I1011 11:02:46.263922 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Oct 11 11:02:46.264094 master-1 kubenswrapper[4771]: I1011 11:02:46.264050 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Oct 11 11:02:46.265799 master-1 kubenswrapper[4771]: I1011 11:02:46.265689 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Oct 11 11:02:46.274235 master-1 kubenswrapper[4771]: I1011 11:02:46.274164 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"] Oct 11 11:02:46.401216 master-1 kubenswrapper[4771]: I1011 11:02:46.401066 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.401757 master-1 kubenswrapper[4771]: I1011 11:02:46.401722 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.401914 master-1 kubenswrapper[4771]: I1011 11:02:46.401895 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " 
pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.402216 master-1 kubenswrapper[4771]: I1011 11:02:46.402170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.402491 master-1 kubenswrapper[4771]: I1011 11:02:46.402435 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.402602 master-1 kubenswrapper[4771]: I1011 11:02:46.402542 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.504237 master-1 kubenswrapper[4771]: I1011 11:02:46.504016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.504237 master-1 kubenswrapper[4771]: I1011 11:02:46.504150 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs\") pod \"octavia-api-84f885c68-ttgvk\" (UID: 
\"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.504237 master-1 kubenswrapper[4771]: I1011 11:02:46.504204 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.504237 master-1 kubenswrapper[4771]: I1011 11:02:46.504262 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.510729 master-1 kubenswrapper[4771]: I1011 11:02:46.510684 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.510729 master-1 kubenswrapper[4771]: I1011 11:02:46.510733 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.515310 master-1 kubenswrapper[4771]: I1011 11:02:46.511433 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run\") pod \"octavia-api-84f885c68-ttgvk\" (UID: 
\"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.515310 master-1 kubenswrapper[4771]: I1011 11:02:46.511758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.515310 master-1 kubenswrapper[4771]: I1011 11:02:46.514560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.515310 master-1 kubenswrapper[4771]: I1011 11:02:46.514718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.515310 master-1 kubenswrapper[4771]: I1011 11:02:46.515243 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.527360 master-1 kubenswrapper[4771]: I1011 11:02:46.527282 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts\") pod \"octavia-api-84f885c68-ttgvk\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") " 
pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:46.588543 master-1 kubenswrapper[4771]: I1011 11:02:46.588427 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:47.101993 master-1 kubenswrapper[4771]: I1011 11:02:47.101925 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"] Oct 11 11:02:47.108008 master-1 kubenswrapper[4771]: W1011 11:02:47.107751 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67756a2_aa42_4f6f_b27a_57e16f566883.slice/crio-a5a886249a2d7b53c7a4945ae527e497dd08083a42ca3105272e41c3a3ba5d39 WatchSource:0}: Error finding container a5a886249a2d7b53c7a4945ae527e497dd08083a42ca3105272e41c3a3ba5d39: Status 404 returned error can't find the container with id a5a886249a2d7b53c7a4945ae527e497dd08083a42ca3105272e41c3a3ba5d39 Oct 11 11:02:47.110086 master-1 kubenswrapper[4771]: I1011 11:02:47.110025 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:02:47.722872 master-1 kubenswrapper[4771]: I1011 11:02:47.722790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerStarted","Data":"a5a886249a2d7b53c7a4945ae527e497dd08083a42ca3105272e41c3a3ba5d39"} Oct 11 11:02:57.852219 master-1 kubenswrapper[4771]: I1011 11:02:57.852139 4771 generic.go:334] "Generic (PLEG): container finished" podID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerID="36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c" exitCode=0 Oct 11 11:02:57.853352 master-1 kubenswrapper[4771]: I1011 11:02:57.852233 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" 
event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerDied","Data":"36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c"} Oct 11 11:02:58.867821 master-1 kubenswrapper[4771]: I1011 11:02:58.867746 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerStarted","Data":"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"} Oct 11 11:02:58.867821 master-1 kubenswrapper[4771]: I1011 11:02:58.867829 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerStarted","Data":"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"} Oct 11 11:02:58.869798 master-1 kubenswrapper[4771]: I1011 11:02:58.869728 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:58.869798 master-1 kubenswrapper[4771]: I1011 11:02:58.869791 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:02:58.906386 master-1 kubenswrapper[4771]: I1011 11:02:58.906217 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-84f885c68-ttgvk" podStartSLOduration=3.061617878 podStartE2EDuration="12.906186922s" podCreationTimestamp="2025-10-11 11:02:46 +0000 UTC" firstStartedPulling="2025-10-11 11:02:47.109582472 +0000 UTC m=+2199.083808923" lastFinishedPulling="2025-10-11 11:02:56.954151516 +0000 UTC m=+2208.928377967" observedRunningTime="2025-10-11 11:02:58.903870966 +0000 UTC m=+2210.878097427" watchObservedRunningTime="2025-10-11 11:02:58.906186922 +0000 UTC m=+2210.880413403" Oct 11 11:03:00.002906 master-1 kubenswrapper[4771]: I1011 11:03:00.002805 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:00.003826 
master-1 kubenswrapper[4771]: I1011 11:03:00.003215 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-central-agent" containerID="cri-o://dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957" gracePeriod=30 Oct 11 11:03:00.003826 master-1 kubenswrapper[4771]: I1011 11:03:00.003568 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-notification-agent" containerID="cri-o://4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78" gracePeriod=30 Oct 11 11:03:00.003826 master-1 kubenswrapper[4771]: I1011 11:03:00.003571 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="proxy-httpd" containerID="cri-o://ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40" gracePeriod=30 Oct 11 11:03:00.004846 master-1 kubenswrapper[4771]: I1011 11:03:00.003795 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="sg-core" containerID="cri-o://a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58" gracePeriod=30 Oct 11 11:03:00.891519 master-1 kubenswrapper[4771]: I1011 11:03:00.891445 4771 generic.go:334] "Generic (PLEG): container finished" podID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerID="ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40" exitCode=0 Oct 11 11:03:00.891519 master-1 kubenswrapper[4771]: I1011 11:03:00.891498 4771 generic.go:334] "Generic (PLEG): container finished" podID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerID="a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58" exitCode=2 Oct 11 11:03:00.891519 master-1 
kubenswrapper[4771]: I1011 11:03:00.891509 4771 generic.go:334] "Generic (PLEG): container finished" podID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerID="dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957" exitCode=0
Oct 11 11:03:00.891872 master-1 kubenswrapper[4771]: I1011 11:03:00.891524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerDied","Data":"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"}
Oct 11 11:03:00.891872 master-1 kubenswrapper[4771]: I1011 11:03:00.891615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerDied","Data":"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"}
Oct 11 11:03:00.891872 master-1 kubenswrapper[4771]: I1011 11:03:00.891648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerDied","Data":"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"}
Oct 11 11:03:01.613141 master-1 kubenswrapper[4771]: I1011 11:03:01.613086 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.713797 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.713894 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwhv2\" (UniqueName: \"kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.713958 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.713990 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.714012 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.714037 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.714110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.714154 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts\") pod \"926f8cdc-bbf6-4328-8436-8428df0a679b\" (UID: \"926f8cdc-bbf6-4328-8436-8428df0a679b\") "
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.715635 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.716080 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:03:01.719414 master-1 kubenswrapper[4771]: I1011 11:03:01.718152 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts" (OuterVolumeSpecName: "scripts") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:01.721454 master-1 kubenswrapper[4771]: I1011 11:03:01.720793 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2" (OuterVolumeSpecName: "kube-api-access-xwhv2") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "kube-api-access-xwhv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:03:01.746615 master-1 kubenswrapper[4771]: I1011 11:03:01.746561 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:01.793774 master-1 kubenswrapper[4771]: I1011 11:03:01.793608 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:01.797306 master-1 kubenswrapper[4771]: I1011 11:03:01.797230 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816450 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwhv2\" (UniqueName: \"kubernetes.io/projected/926f8cdc-bbf6-4328-8436-8428df0a679b-kube-api-access-xwhv2\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816499 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816510 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-run-httpd\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816519 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-ceilometer-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816529 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/926f8cdc-bbf6-4328-8436-8428df0a679b-log-httpd\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816524 master-1 kubenswrapper[4771]: I1011 11:03:01.816537 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.816832 master-1 kubenswrapper[4771]: I1011 11:03:01.816546 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.827199 master-1 kubenswrapper[4771]: I1011 11:03:01.827133 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data" (OuterVolumeSpecName: "config-data") pod "926f8cdc-bbf6-4328-8436-8428df0a679b" (UID: "926f8cdc-bbf6-4328-8436-8428df0a679b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:01.901750 master-1 kubenswrapper[4771]: I1011 11:03:01.901672 4771 generic.go:334] "Generic (PLEG): container finished" podID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerID="4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78" exitCode=0
Oct 11 11:03:01.901750 master-1 kubenswrapper[4771]: I1011 11:03:01.901729 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerDied","Data":"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"}
Oct 11 11:03:01.901750 master-1 kubenswrapper[4771]: I1011 11:03:01.901756 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"926f8cdc-bbf6-4328-8436-8428df0a679b","Type":"ContainerDied","Data":"2a75cd52ad1f72be5ccb56e9952a02a3c2bfd8c3f845acfbe551f4d25daeffc2"}
Oct 11 11:03:01.902082 master-1 kubenswrapper[4771]: I1011 11:03:01.901774 4771 scope.go:117] "RemoveContainer" containerID="ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"
Oct 11 11:03:01.902082 master-1 kubenswrapper[4771]: I1011 11:03:01.901903 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 11:03:01.917872 master-1 kubenswrapper[4771]: I1011 11:03:01.917817 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926f8cdc-bbf6-4328-8436-8428df0a679b-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:01.931806 master-1 kubenswrapper[4771]: I1011 11:03:01.931701 4771 scope.go:117] "RemoveContainer" containerID="a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"
Oct 11 11:03:01.945032 master-1 kubenswrapper[4771]: I1011 11:03:01.944967 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 11:03:01.972625 master-1 kubenswrapper[4771]: I1011 11:03:01.968088 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 11:03:01.978462 master-1 kubenswrapper[4771]: I1011 11:03:01.978403 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 11 11:03:01.978724 master-1 kubenswrapper[4771]: E1011 11:03:01.978688 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-central-agent"
Oct 11 11:03:01.978724 master-1 kubenswrapper[4771]: I1011 11:03:01.978707 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-central-agent"
Oct 11 11:03:01.978724 master-1 kubenswrapper[4771]: E1011 11:03:01.978718 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-notification-agent"
Oct 11 11:03:01.978724 master-1 kubenswrapper[4771]: I1011 11:03:01.978724 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-notification-agent"
Oct 11 11:03:01.978724 master-1 kubenswrapper[4771]: E1011 11:03:01.978731 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="proxy-httpd"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978739 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="proxy-httpd"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: E1011 11:03:01.978750 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="sg-core"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978757 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="sg-core"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978894 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-central-agent"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978911 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="ceilometer-notification-agent"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978918 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="sg-core"
Oct 11 11:03:01.979053 master-1 kubenswrapper[4771]: I1011 11:03:01.978929 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" containerName="proxy-httpd"
Oct 11 11:03:01.980472 master-1 kubenswrapper[4771]: I1011 11:03:01.980330 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 11:03:01.980570 master-1 kubenswrapper[4771]: I1011 11:03:01.980520 4771 scope.go:117] "RemoveContainer" containerID="4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"
Oct 11 11:03:01.985621 master-1 kubenswrapper[4771]: I1011 11:03:01.985580 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 11:03:01.985856 master-1 kubenswrapper[4771]: I1011 11:03:01.985810 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 11:03:01.985856 master-1 kubenswrapper[4771]: I1011 11:03:01.985830 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 11 11:03:01.999793 master-1 kubenswrapper[4771]: I1011 11:03:01.999736 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 11:03:02.016958 master-1 kubenswrapper[4771]: I1011 11:03:02.016891 4771 scope.go:117] "RemoveContainer" containerID="dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"
Oct 11 11:03:02.044702 master-1 kubenswrapper[4771]: I1011 11:03:02.044634 4771 scope.go:117] "RemoveContainer" containerID="ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"
Oct 11 11:03:02.045189 master-1 kubenswrapper[4771]: E1011 11:03:02.045132 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40\": container with ID starting with ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40 not found: ID does not exist" containerID="ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"
Oct 11 11:03:02.045189 master-1 kubenswrapper[4771]: I1011 11:03:02.045179 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40"} err="failed to get container status \"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40\": rpc error: code = NotFound desc = could not find container \"ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40\": container with ID starting with ca31db39d1dec9084865eb81efc92049edf7d6166fd06121934c9ffcdc701d40 not found: ID does not exist"
Oct 11 11:03:02.045409 master-1 kubenswrapper[4771]: I1011 11:03:02.045206 4771 scope.go:117] "RemoveContainer" containerID="a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"
Oct 11 11:03:02.045803 master-1 kubenswrapper[4771]: E1011 11:03:02.045739 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58\": container with ID starting with a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58 not found: ID does not exist" containerID="a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"
Oct 11 11:03:02.045803 master-1 kubenswrapper[4771]: I1011 11:03:02.045783 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58"} err="failed to get container status \"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58\": rpc error: code = NotFound desc = could not find container \"a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58\": container with ID starting with a7cc850a7be7b2d0fcc12f07e1e079159798fbdcd81356efd79d9ef6b76f3e58 not found: ID does not exist"
Oct 11 11:03:02.045803 master-1 kubenswrapper[4771]: I1011 11:03:02.045797 4771 scope.go:117] "RemoveContainer" containerID="4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"
Oct 11 11:03:02.046236 master-1 kubenswrapper[4771]: E1011 11:03:02.046177 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78\": container with ID starting with 4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78 not found: ID does not exist" containerID="4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"
Oct 11 11:03:02.046236 master-1 kubenswrapper[4771]: I1011 11:03:02.046218 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78"} err="failed to get container status \"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78\": rpc error: code = NotFound desc = could not find container \"4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78\": container with ID starting with 4c4740c654b3eb5bc657fa26dafcc70a7dbbb11d03e1c33a59b6b35165113d78 not found: ID does not exist"
Oct 11 11:03:02.046467 master-1 kubenswrapper[4771]: I1011 11:03:02.046252 4771 scope.go:117] "RemoveContainer" containerID="dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"
Oct 11 11:03:02.046759 master-1 kubenswrapper[4771]: E1011 11:03:02.046715 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957\": container with ID starting with dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957 not found: ID does not exist" containerID="dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"
Oct 11 11:03:02.046759 master-1 kubenswrapper[4771]: I1011 11:03:02.046744 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957"} err="failed to get container status \"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957\": rpc error: code = NotFound desc = could not find container \"dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957\": container with ID starting with dae10efe0437b4662fa70f9c02ae261a84610fc2231a80eb15286d70bfdd6957 not found: ID does not exist"
Oct 11 11:03:02.122767 master-1 kubenswrapper[4771]: I1011 11:03:02.122581 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.122767 master-1 kubenswrapper[4771]: I1011 11:03:02.122730 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123252 master-1 kubenswrapper[4771]: I1011 11:03:02.122845 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc67p\" (UniqueName: \"kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123252 master-1 kubenswrapper[4771]: I1011 11:03:02.122926 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123252 master-1 kubenswrapper[4771]: I1011 11:03:02.123005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123252 master-1 kubenswrapper[4771]: I1011 11:03:02.123063 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123252 master-1 kubenswrapper[4771]: I1011 11:03:02.123223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.123647 master-1 kubenswrapper[4771]: I1011 11:03:02.123283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.225767 master-1 kubenswrapper[4771]: I1011 11:03:02.225617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc67p\" (UniqueName: \"kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.226566 master-1 kubenswrapper[4771]: I1011 11:03:02.226521 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.226669 master-1 kubenswrapper[4771]: I1011 11:03:02.226613 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.226753 master-1 kubenswrapper[4771]: I1011 11:03:02.226738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.226964 master-1 kubenswrapper[4771]: I1011 11:03:02.226909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.227061 master-1 kubenswrapper[4771]: I1011 11:03:02.227031 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.227131 master-1 kubenswrapper[4771]: I1011 11:03:02.227068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.227331 master-1 kubenswrapper[4771]: I1011 11:03:02.227280 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.227331 master-1 kubenswrapper[4771]: I1011 11:03:02.227292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.227528 master-1 kubenswrapper[4771]: I1011 11:03:02.227326 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.233855 master-1 kubenswrapper[4771]: I1011 11:03:02.233754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.236713 master-1 kubenswrapper[4771]: I1011 11:03:02.236659 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.239545 master-1 kubenswrapper[4771]: I1011 11:03:02.239486 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.239820 master-1 kubenswrapper[4771]: I1011 11:03:02.239771 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.249146 master-1 kubenswrapper[4771]: I1011 11:03:02.248948 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.262209 master-1 kubenswrapper[4771]: I1011 11:03:02.262139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc67p\" (UniqueName: \"kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p\") pod \"ceilometer-0\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " pod="openstack/ceilometer-0"
Oct 11 11:03:02.300860 master-1 kubenswrapper[4771]: I1011 11:03:02.300775 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 11:03:02.469208 master-1 kubenswrapper[4771]: I1011 11:03:02.469117 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926f8cdc-bbf6-4328-8436-8428df0a679b" path="/var/lib/kubelet/pods/926f8cdc-bbf6-4328-8436-8428df0a679b/volumes"
Oct 11 11:03:02.846929 master-1 kubenswrapper[4771]: I1011 11:03:02.846823 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 11:03:02.850419 master-1 kubenswrapper[4771]: W1011 11:03:02.850349 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd199f8f0_ae54_4c5c_b8af_f3058547edf1.slice/crio-1f5edb60e620fedfd68b508b4b511f9365534396f20ac698191ef7d2d59c6bfb WatchSource:0}: Error finding container 1f5edb60e620fedfd68b508b4b511f9365534396f20ac698191ef7d2d59c6bfb: Status 404 returned error can't find the container with id 1f5edb60e620fedfd68b508b4b511f9365534396f20ac698191ef7d2d59c6bfb
Oct 11 11:03:02.914704 master-1 kubenswrapper[4771]: I1011 11:03:02.914612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerStarted","Data":"1f5edb60e620fedfd68b508b4b511f9365534396f20ac698191ef7d2d59c6bfb"}
Oct 11 11:03:03.605603 master-1 kubenswrapper[4771]: I1011 11:03:03.604762 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-ljmzz"]
Oct 11 11:03:03.606546 master-1 kubenswrapper[4771]: I1011 11:03:03.606513 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-ljmzz"
Oct 11 11:03:03.610571 master-1 kubenswrapper[4771]: I1011 11:03:03.610516 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data"
Oct 11 11:03:03.612186 master-1 kubenswrapper[4771]: I1011 11:03:03.612116 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts"
Oct 11 11:03:03.614413 master-1 kubenswrapper[4771]: I1011 11:03:03.613267 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map"
Oct 11 11:03:03.620477 master-1 kubenswrapper[4771]: I1011 11:03:03.620182 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-ljmzz"]
Oct 11 11:03:03.620618 master-2 kubenswrapper[4776]: I1011 11:03:03.620550 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-22xqd"]
Oct 11 11:03:03.622167 master-2 kubenswrapper[4776]: I1011 11:03:03.622142 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-22xqd"
Oct 11 11:03:03.623624 master-0 kubenswrapper[4790]: I1011 11:03:03.623497 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-j7sxz"]
Oct 11 11:03:03.624527 master-2 kubenswrapper[4776]: I1011 11:03:03.624476 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map"
Oct 11 11:03:03.624527 master-2 kubenswrapper[4776]: I1011 11:03:03.624493 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data"
Oct 11 11:03:03.624815 master-2 kubenswrapper[4776]: I1011 11:03:03.624788 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts"
Oct 11 11:03:03.625949 master-0 kubenswrapper[4790]: I1011 11:03:03.625900 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-j7sxz"
Oct 11 11:03:03.632658 master-0 kubenswrapper[4790]: I1011 11:03:03.632598 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map"
Oct 11 11:03:03.634039 master-0 kubenswrapper[4790]: I1011 11:03:03.633984 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts"
Oct 11 11:03:03.634132 master-0 kubenswrapper[4790]: I1011 11:03:03.634073 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data"
Oct 11 11:03:03.640232 master-0 kubenswrapper[4790]: I1011 11:03:03.640145 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-j7sxz"]
Oct 11 11:03:03.645813 master-2 kubenswrapper[4776]: I1011 11:03:03.645745 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-22xqd"]
Oct 11 11:03:03.719954 master-2 kubenswrapper[4776]: I1011 11:03:03.719864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd"
Oct 11 11:03:03.720176 master-2 kubenswrapper[4776]: I1011 11:03:03.720053 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd"
Oct 11 11:03:03.720386 master-2 kubenswrapper[4776]: I1011 11:03:03.720349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd"
Oct 11 11:03:03.720432 master-2 kubenswrapper[4776]: I1011 11:03:03.720393 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd"
Oct 11 11:03:03.744309 master-0 kubenswrapper[4790]: I1011 11:03:03.744236 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3240af3d-94ab-4045-9618-f4a58c53b5a0-hm-ports\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz"
Oct 11 11:03:03.744514 master-0 kubenswrapper[4790]: I1011 11:03:03.744328 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz"
Oct 11 11:03:03.744589 master-0 kubenswrapper[4790]: I1011 11:03:03.744534 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data-merged\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz"
Oct 11 11:03:03.744853 master-0 kubenswrapper[4790]: I1011 11:03:03.744812 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-scripts\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz"
Oct 11 11:03:03.769882 master-1 kubenswrapper[4771]: I1011 11:03:03.769827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e71d1867-29ec-46d8-9057-7b862e658e4c-hm-ports\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz"
Oct 11 11:03:03.770122 master-1 kubenswrapper[4771]: I1011 11:03:03.769943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data-merged\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz"
Oct 11 11:03:03.770122 master-1 kubenswrapper[4771]: I1011 11:03:03.769977 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz"
Oct 11 11:03:03.770122 master-1 kubenswrapper[4771]: I1011 11:03:03.770034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-scripts\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz"
Oct 11 11:03:03.822013 master-2 kubenswrapper[4776]: I1011 11:03:03.821777 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName:
\"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822013 master-2 kubenswrapper[4776]: I1011 11:03:03.822003 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822280 master-2 kubenswrapper[4776]: I1011 11:03:03.822078 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822280 master-2 kubenswrapper[4776]: I1011 11:03:03.822126 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822588 master-2 kubenswrapper[4776]: I1011 11:03:03.822552 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.823305 master-2 kubenswrapper[4776]: I1011 11:03:03.823277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod 
\"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.825491 master-2 kubenswrapper[4776]: I1011 11:03:03.825459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.826247 master-2 kubenswrapper[4776]: I1011 11:03:03.826189 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.847071 master-0 kubenswrapper[4790]: I1011 11:03:03.846979 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3240af3d-94ab-4045-9618-f4a58c53b5a0-hm-ports\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.847071 master-0 kubenswrapper[4790]: I1011 11:03:03.847073 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.847398 master-0 kubenswrapper[4790]: I1011 11:03:03.847110 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data-merged\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 
11 11:03:03.847398 master-0 kubenswrapper[4790]: I1011 11:03:03.847154 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-scripts\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.848170 master-0 kubenswrapper[4790]: I1011 11:03:03.848105 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data-merged\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.848933 master-0 kubenswrapper[4790]: I1011 11:03:03.848867 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/3240af3d-94ab-4045-9618-f4a58c53b5a0-hm-ports\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.852280 master-0 kubenswrapper[4790]: I1011 11:03:03.852236 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-config-data\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.872309 master-1 kubenswrapper[4771]: I1011 11:03:03.872186 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e71d1867-29ec-46d8-9057-7b862e658e4c-hm-ports\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.872309 master-1 kubenswrapper[4771]: I1011 11:03:03.872289 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data-merged\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.872834 master-1 kubenswrapper[4771]: I1011 11:03:03.872311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.872834 master-1 kubenswrapper[4771]: I1011 11:03:03.872349 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-scripts\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.872901 master-1 kubenswrapper[4771]: I1011 11:03:03.872829 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data-merged\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.873943 master-1 kubenswrapper[4771]: I1011 11:03:03.873883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e71d1867-29ec-46d8-9057-7b862e658e4c-hm-ports\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.874538 master-0 kubenswrapper[4790]: I1011 11:03:03.874399 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3240af3d-94ab-4045-9618-f4a58c53b5a0-scripts\") pod \"octavia-rsyslog-j7sxz\" (UID: \"3240af3d-94ab-4045-9618-f4a58c53b5a0\") " pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:03.875874 master-1 kubenswrapper[4771]: I1011 11:03:03.875760 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-config-data\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.876012 master-1 kubenswrapper[4771]: I1011 11:03:03.875986 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71d1867-29ec-46d8-9057-7b862e658e4c-scripts\") pod \"octavia-rsyslog-ljmzz\" (UID: \"e71d1867-29ec-46d8-9057-7b862e658e4c\") " pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.924843 master-1 kubenswrapper[4771]: I1011 11:03:03.924778 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerStarted","Data":"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea"} Oct 11 11:03:03.929780 master-1 kubenswrapper[4771]: I1011 11:03:03.929740 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:03.939062 master-2 kubenswrapper[4776]: I1011 11:03:03.939005 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.949533 master-0 kubenswrapper[4790]: I1011 11:03:03.949457 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:04.475904 master-1 kubenswrapper[4771]: I1011 11:03:04.475818 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"] Oct 11 11:03:04.477741 master-1 kubenswrapper[4771]: I1011 11:03:04.477706 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.486450 master-1 kubenswrapper[4771]: I1011 11:03:04.483001 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Oct 11 11:03:04.505075 master-1 kubenswrapper[4771]: I1011 11:03:04.504990 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"] Oct 11 11:03:04.538345 master-1 kubenswrapper[4771]: I1011 11:03:04.537269 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-ljmzz"] Oct 11 11:03:04.599381 master-1 kubenswrapper[4771]: I1011 11:03:04.599265 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.599793 master-1 kubenswrapper[4771]: I1011 11:03:04.599498 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.708037 master-1 kubenswrapper[4771]: I1011 11:03:04.706504 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.708037 master-1 kubenswrapper[4771]: I1011 11:03:04.706741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.708037 master-1 kubenswrapper[4771]: I1011 11:03:04.707600 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.718005 master-1 kubenswrapper[4771]: I1011 11:03:04.717890 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config\") pod \"octavia-image-upload-678599687f-vbxnh\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") " pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.804649 master-1 kubenswrapper[4771]: I1011 11:03:04.804606 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-vbxnh" Oct 11 11:03:04.940844 master-1 kubenswrapper[4771]: I1011 11:03:04.940780 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerStarted","Data":"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692"} Oct 11 11:03:04.948768 master-1 kubenswrapper[4771]: I1011 11:03:04.942482 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-ljmzz" event={"ID":"e71d1867-29ec-46d8-9057-7b862e658e4c","Type":"ContainerStarted","Data":"20ff9e10bb2272bee7130877e2cd8a446c12de1b611c03f2f5b4990b7f22c4e1"} Oct 11 11:03:05.207291 master-1 kubenswrapper[4771]: I1011 11:03:05.205411 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-dsphh"] Oct 11 11:03:05.207291 master-1 kubenswrapper[4771]: I1011 11:03:05.207203 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.211688 master-1 kubenswrapper[4771]: I1011 11:03:05.211628 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Oct 11 11:03:05.221396 master-1 kubenswrapper[4771]: I1011 11:03:05.221315 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-dsphh"] Oct 11 11:03:05.289116 master-1 kubenswrapper[4771]: I1011 11:03:05.289031 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"] Oct 11 11:03:05.294575 master-1 kubenswrapper[4771]: W1011 11:03:05.294499 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a9a1fe_e08c_4112_a7b8_6616d280405e.slice/crio-822d91bade265914abdf5fc87f453e2f88a3133976d26e99e2272117f587739c WatchSource:0}: Error finding container 822d91bade265914abdf5fc87f453e2f88a3133976d26e99e2272117f587739c: Status 404 returned error can't find the container with id 822d91bade265914abdf5fc87f453e2f88a3133976d26e99e2272117f587739c Oct 11 11:03:05.322876 master-1 kubenswrapper[4771]: I1011 11:03:05.322813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.322876 master-1 kubenswrapper[4771]: I1011 11:03:05.322885 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.323194 master-1 kubenswrapper[4771]: 
I1011 11:03:05.323013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.323194 master-1 kubenswrapper[4771]: I1011 11:03:05.323034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.343921 master-0 kubenswrapper[4790]: I1011 11:03:05.343833 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-j7sxz"] Oct 11 11:03:05.355611 master-0 kubenswrapper[4790]: I1011 11:03:05.355557 4790 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:03:05.427384 master-1 kubenswrapper[4771]: I1011 11:03:05.426376 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.427384 master-1 kubenswrapper[4771]: I1011 11:03:05.426530 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.427384 master-1 kubenswrapper[4771]: I1011 11:03:05.427167 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.427384 master-1 kubenswrapper[4771]: I1011 11:03:05.427278 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.428560 master-1 kubenswrapper[4771]: I1011 11:03:05.428323 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.451483 master-1 kubenswrapper[4771]: I1011 11:03:05.431319 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.451483 master-1 kubenswrapper[4771]: I1011 11:03:05.435145 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.451483 master-1 kubenswrapper[4771]: I1011 11:03:05.437009 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data\") pod \"octavia-db-sync-dsphh\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.534428 master-1 kubenswrapper[4771]: I1011 11:03:05.534340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:05.665789 master-2 kubenswrapper[4776]: I1011 11:03:05.665735 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-22xqd"] Oct 11 11:03:05.682956 master-2 kubenswrapper[4776]: W1011 11:03:05.682874 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5127d226_d4e2_41f4_8a2b_eaf2f7707f12.slice/crio-086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b WatchSource:0}: Error finding container 086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b: Status 404 returned error can't find the container with id 086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b Oct 11 11:03:05.692738 master-2 kubenswrapper[4776]: I1011 11:03:05.689203 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:03:05.800198 master-0 kubenswrapper[4790]: I1011 11:03:05.800140 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j7sxz" event={"ID":"3240af3d-94ab-4045-9618-f4a58c53b5a0","Type":"ContainerStarted","Data":"8a82447c206637bb30e47630382a5d9161cf334f1ba255ad4e7a627d844d40f1"} Oct 11 11:03:05.972332 master-1 kubenswrapper[4771]: I1011 11:03:05.972260 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerStarted","Data":"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3"} Oct 11 11:03:05.973780 master-1 kubenswrapper[4771]: I1011 11:03:05.973731 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerStarted","Data":"822d91bade265914abdf5fc87f453e2f88a3133976d26e99e2272117f587739c"} Oct 11 11:03:06.567779 master-2 kubenswrapper[4776]: I1011 11:03:06.567609 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b"} Oct 11 11:03:06.990280 master-1 kubenswrapper[4771]: I1011 11:03:06.990205 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerStarted","Data":"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f"} Oct 11 11:03:06.991859 master-1 kubenswrapper[4771]: I1011 11:03:06.991794 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 11:03:07.060854 master-1 kubenswrapper[4771]: I1011 11:03:07.060782 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-dsphh"] Oct 11 11:03:07.085219 master-1 kubenswrapper[4771]: I1011 11:03:07.085138 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.733738982 podStartE2EDuration="6.085112906s" podCreationTimestamp="2025-10-11 11:03:01 +0000 UTC" firstStartedPulling="2025-10-11 11:03:02.85537011 +0000 UTC m=+2214.829596551" lastFinishedPulling="2025-10-11 11:03:06.206744034 +0000 UTC m=+2218.180970475" observedRunningTime="2025-10-11 11:03:07.057683569 +0000 UTC m=+2219.031910040" watchObservedRunningTime="2025-10-11 11:03:07.085112906 +0000 UTC m=+2219.059339347" Oct 11 11:03:07.666454 master-1 kubenswrapper[4771]: W1011 11:03:07.660622 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2681ddac_5e31_449b_bf71_fb54e8ba389c.slice/crio-62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b WatchSource:0}: Error finding container 62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b: Status 404 returned error can't find the container with id 62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b Oct 11 11:03:08.008258 master-1 kubenswrapper[4771]: I1011 11:03:08.008110 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-dsphh" event={"ID":"2681ddac-5e31-449b-bf71-fb54e8ba389c","Type":"ContainerStarted","Data":"62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b"} Oct 11 11:03:09.019432 master-1 kubenswrapper[4771]: I1011 11:03:09.019231 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-ljmzz" event={"ID":"e71d1867-29ec-46d8-9057-7b862e658e4c","Type":"ContainerStarted","Data":"10eedbd9c1cf8319085bd869b6939fe0341c197bd1a92f749d507e8b4b690ca9"} Oct 11 11:03:09.023722 master-1 kubenswrapper[4771]: I1011 11:03:09.023665 4771 generic.go:334] "Generic (PLEG): container finished" podID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerID="649f9407421e9587f48e63d8c49281e9025d233acd0bf83983ed52b03d671758" exitCode=0 Oct 11 11:03:09.023875 master-1 kubenswrapper[4771]: I1011 11:03:09.023714 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-dsphh" event={"ID":"2681ddac-5e31-449b-bf71-fb54e8ba389c","Type":"ContainerDied","Data":"649f9407421e9587f48e63d8c49281e9025d233acd0bf83983ed52b03d671758"} Oct 11 11:03:10.033576 master-1 kubenswrapper[4771]: I1011 11:03:10.033333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-dsphh" event={"ID":"2681ddac-5e31-449b-bf71-fb54e8ba389c","Type":"ContainerStarted","Data":"00d99e2ba51100c415ae5d1a3b19ce4ab68cd0b4655796bec9fd8f7ec75f10f8"} Oct 11 11:03:10.223286 master-1 
kubenswrapper[4771]: I1011 11:03:10.223191 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-dsphh" podStartSLOduration=5.22316825 podStartE2EDuration="5.22316825s" podCreationTimestamp="2025-10-11 11:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:03:10.120954408 +0000 UTC m=+2222.095180869" watchObservedRunningTime="2025-10-11 11:03:10.22316825 +0000 UTC m=+2222.197394711" Oct 11 11:03:11.045965 master-1 kubenswrapper[4771]: I1011 11:03:11.045866 4771 generic.go:334] "Generic (PLEG): container finished" podID="e71d1867-29ec-46d8-9057-7b862e658e4c" containerID="10eedbd9c1cf8319085bd869b6939fe0341c197bd1a92f749d507e8b4b690ca9" exitCode=0 Oct 11 11:03:11.047768 master-1 kubenswrapper[4771]: I1011 11:03:11.047711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-ljmzz" event={"ID":"e71d1867-29ec-46d8-9057-7b862e658e4c","Type":"ContainerDied","Data":"10eedbd9c1cf8319085bd869b6939fe0341c197bd1a92f749d507e8b4b690ca9"} Oct 11 11:03:12.621289 master-2 kubenswrapper[4776]: I1011 11:03:12.621134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23"} Oct 11 11:03:13.868214 master-0 kubenswrapper[4790]: I1011 11:03:13.868125 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j7sxz" event={"ID":"3240af3d-94ab-4045-9618-f4a58c53b5a0","Type":"ContainerStarted","Data":"33658efd0dee5c5cf0929135102f1452d4a0592380545f82fb598f9624bf85a0"} Oct 11 11:03:14.095433 master-1 kubenswrapper[4771]: I1011 11:03:14.095278 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-ljmzz" 
event={"ID":"e71d1867-29ec-46d8-9057-7b862e658e4c","Type":"ContainerStarted","Data":"ab76b9ef3a27720c3ed7bbf5d3d905989fa800df9a8b2ba7b35a422c01cf124c"} Oct 11 11:03:14.095982 master-1 kubenswrapper[4771]: I1011 11:03:14.095668 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:14.123040 master-1 kubenswrapper[4771]: I1011 11:03:14.122939 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-ljmzz" podStartSLOduration=2.659351729 podStartE2EDuration="11.122916259s" podCreationTimestamp="2025-10-11 11:03:03 +0000 UTC" firstStartedPulling="2025-10-11 11:03:04.541572819 +0000 UTC m=+2216.515799260" lastFinishedPulling="2025-10-11 11:03:13.005137349 +0000 UTC m=+2224.979363790" observedRunningTime="2025-10-11 11:03:14.117686889 +0000 UTC m=+2226.091913330" watchObservedRunningTime="2025-10-11 11:03:14.122916259 +0000 UTC m=+2226.097142690" Oct 11 11:03:14.638764 master-2 kubenswrapper[4776]: I1011 11:03:14.638525 4776 generic.go:334] "Generic (PLEG): container finished" podID="5127d226-d4e2-41f4-8a2b-eaf2f7707f12" containerID="659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23" exitCode=0 Oct 11 11:03:14.638764 master-2 kubenswrapper[4776]: I1011 11:03:14.638575 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerDied","Data":"659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23"} Oct 11 11:03:14.881384 master-0 kubenswrapper[4790]: I1011 11:03:14.879410 4790 generic.go:334] "Generic (PLEG): container finished" podID="3240af3d-94ab-4045-9618-f4a58c53b5a0" containerID="33658efd0dee5c5cf0929135102f1452d4a0592380545f82fb598f9624bf85a0" exitCode=0 Oct 11 11:03:14.881384 master-0 kubenswrapper[4790]: I1011 11:03:14.879490 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j7sxz" 
event={"ID":"3240af3d-94ab-4045-9618-f4a58c53b5a0","Type":"ContainerDied","Data":"33658efd0dee5c5cf0929135102f1452d4a0592380545f82fb598f9624bf85a0"} Oct 11 11:03:16.661573 master-2 kubenswrapper[4776]: I1011 11:03:16.661507 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"61363a536dd6f5a57138ce9ec0fda0b52a0e897985be03d22f6159df79753737"} Oct 11 11:03:16.662514 master-2 kubenswrapper[4776]: I1011 11:03:16.661731 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:16.694649 master-2 kubenswrapper[4776]: I1011 11:03:16.694544 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-22xqd" podStartSLOduration=3.816206545 podStartE2EDuration="13.694525937s" podCreationTimestamp="2025-10-11 11:03:03 +0000 UTC" firstStartedPulling="2025-10-11 11:03:05.689124768 +0000 UTC m=+2220.473551477" lastFinishedPulling="2025-10-11 11:03:15.56744416 +0000 UTC m=+2230.351870869" observedRunningTime="2025-10-11 11:03:16.691358602 +0000 UTC m=+2231.475785311" watchObservedRunningTime="2025-10-11 11:03:16.694525937 +0000 UTC m=+2231.478952646" Oct 11 11:03:16.899859 master-0 kubenswrapper[4790]: I1011 11:03:16.899777 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j7sxz" event={"ID":"3240af3d-94ab-4045-9618-f4a58c53b5a0","Type":"ContainerStarted","Data":"61bff005e5fd07a8cb4da8a1244c56bec23767c401f084a3a78027ce157e8436"} Oct 11 11:03:16.900753 master-0 kubenswrapper[4790]: I1011 11:03:16.900055 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:16.944737 master-0 kubenswrapper[4790]: I1011 11:03:16.939415 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-j7sxz" 
podStartSLOduration=3.560733548 podStartE2EDuration="13.939358277s" podCreationTimestamp="2025-10-11 11:03:03 +0000 UTC" firstStartedPulling="2025-10-11 11:03:05.355470076 +0000 UTC m=+1461.909930368" lastFinishedPulling="2025-10-11 11:03:15.734094805 +0000 UTC m=+1472.288555097" observedRunningTime="2025-10-11 11:03:16.931240495 +0000 UTC m=+1473.485700797" watchObservedRunningTime="2025-10-11 11:03:16.939358277 +0000 UTC m=+1473.493818579" Oct 11 11:03:17.127735 master-1 kubenswrapper[4771]: I1011 11:03:17.127640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerStarted","Data":"04f298e98d945615f593936f0afe3e98a8fda50794b9ca2f8c1e2dcce3be303c"} Oct 11 11:03:18.140216 master-1 kubenswrapper[4771]: I1011 11:03:18.140138 4771 generic.go:334] "Generic (PLEG): container finished" podID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerID="04f298e98d945615f593936f0afe3e98a8fda50794b9ca2f8c1e2dcce3be303c" exitCode=0 Oct 11 11:03:18.140216 master-1 kubenswrapper[4771]: I1011 11:03:18.140210 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerDied","Data":"04f298e98d945615f593936f0afe3e98a8fda50794b9ca2f8c1e2dcce3be303c"} Oct 11 11:03:18.977814 master-1 kubenswrapper[4771]: I1011 11:03:18.977733 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-ljmzz" Oct 11 11:03:19.152918 master-1 kubenswrapper[4771]: I1011 11:03:19.152773 4771 generic.go:334] "Generic (PLEG): container finished" podID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerID="00d99e2ba51100c415ae5d1a3b19ce4ab68cd0b4655796bec9fd8f7ec75f10f8" exitCode=0 Oct 11 11:03:19.153439 master-1 kubenswrapper[4771]: I1011 11:03:19.152902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-db-sync-dsphh" event={"ID":"2681ddac-5e31-449b-bf71-fb54e8ba389c","Type":"ContainerDied","Data":"00d99e2ba51100c415ae5d1a3b19ce4ab68cd0b4655796bec9fd8f7ec75f10f8"} Oct 11 11:03:19.155054 master-1 kubenswrapper[4771]: I1011 11:03:19.154989 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerStarted","Data":"bb7f7c6b7f921511c546d7f35dc55b164c7677f19e59519f7cff36e97aa18bfd"} Oct 11 11:03:19.459883 master-1 kubenswrapper[4771]: I1011 11:03:19.459568 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-vbxnh" podStartSLOduration=4.038228471 podStartE2EDuration="15.459537673s" podCreationTimestamp="2025-10-11 11:03:04 +0000 UTC" firstStartedPulling="2025-10-11 11:03:05.293983706 +0000 UTC m=+2217.268210147" lastFinishedPulling="2025-10-11 11:03:16.715292868 +0000 UTC m=+2228.689519349" observedRunningTime="2025-10-11 11:03:19.45104481 +0000 UTC m=+2231.425271301" watchObservedRunningTime="2025-10-11 11:03:19.459537673 +0000 UTC m=+2231.433764154" Oct 11 11:03:20.702833 master-1 kubenswrapper[4771]: I1011 11:03:20.702774 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:03:20.741057 master-1 kubenswrapper[4771]: I1011 11:03:20.739845 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-84f885c68-ttgvk" Oct 11 11:03:20.746227 master-1 kubenswrapper[4771]: I1011 11:03:20.746141 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:20.880615 master-1 kubenswrapper[4771]: I1011 11:03:20.880478 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged\") pod \"2681ddac-5e31-449b-bf71-fb54e8ba389c\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " Oct 11 11:03:20.880615 master-1 kubenswrapper[4771]: I1011 11:03:20.880585 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts\") pod \"2681ddac-5e31-449b-bf71-fb54e8ba389c\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " Oct 11 11:03:20.880947 master-1 kubenswrapper[4771]: I1011 11:03:20.880695 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle\") pod \"2681ddac-5e31-449b-bf71-fb54e8ba389c\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " Oct 11 11:03:20.880947 master-1 kubenswrapper[4771]: I1011 11:03:20.880785 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data\") pod \"2681ddac-5e31-449b-bf71-fb54e8ba389c\" (UID: \"2681ddac-5e31-449b-bf71-fb54e8ba389c\") " Oct 11 11:03:20.886405 master-1 kubenswrapper[4771]: I1011 11:03:20.886294 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts" (OuterVolumeSpecName: "scripts") pod "2681ddac-5e31-449b-bf71-fb54e8ba389c" (UID: "2681ddac-5e31-449b-bf71-fb54e8ba389c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:20.902183 master-1 kubenswrapper[4771]: I1011 11:03:20.902100 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data" (OuterVolumeSpecName: "config-data") pod "2681ddac-5e31-449b-bf71-fb54e8ba389c" (UID: "2681ddac-5e31-449b-bf71-fb54e8ba389c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:20.902520 master-1 kubenswrapper[4771]: I1011 11:03:20.902446 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "2681ddac-5e31-449b-bf71-fb54e8ba389c" (UID: "2681ddac-5e31-449b-bf71-fb54e8ba389c"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:03:20.929628 master-1 kubenswrapper[4771]: I1011 11:03:20.929505 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2681ddac-5e31-449b-bf71-fb54e8ba389c" (UID: "2681ddac-5e31-449b-bf71-fb54e8ba389c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:20.983507 master-1 kubenswrapper[4771]: I1011 11:03:20.983272 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:20.983507 master-1 kubenswrapper[4771]: I1011 11:03:20.983318 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:20.983507 master-1 kubenswrapper[4771]: I1011 11:03:20.983331 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:20.983507 master-1 kubenswrapper[4771]: I1011 11:03:20.983340 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2681ddac-5e31-449b-bf71-fb54e8ba389c-config-data-merged\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:21.183288 master-1 kubenswrapper[4771]: I1011 11:03:21.183153 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-dsphh" Oct 11 11:03:21.183536 master-1 kubenswrapper[4771]: I1011 11:03:21.183286 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-dsphh" event={"ID":"2681ddac-5e31-449b-bf71-fb54e8ba389c","Type":"ContainerDied","Data":"62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b"} Oct 11 11:03:21.183536 master-1 kubenswrapper[4771]: I1011 11:03:21.183333 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62529d885f50e38d46775355d42d0bf6618c28a31161c124af0a288723c5173b" Oct 11 11:03:22.010579 master-1 kubenswrapper[4771]: I1011 11:03:22.010421 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-7f8fd8dbdd-z9j79"] Oct 11 11:03:22.011268 master-1 kubenswrapper[4771]: E1011 11:03:22.010835 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerName="octavia-db-sync" Oct 11 11:03:22.011268 master-1 kubenswrapper[4771]: I1011 11:03:22.010870 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerName="octavia-db-sync" Oct 11 11:03:22.011268 master-1 kubenswrapper[4771]: E1011 11:03:22.010893 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerName="init" Oct 11 11:03:22.011268 master-1 kubenswrapper[4771]: I1011 11:03:22.010903 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerName="init" Oct 11 11:03:22.011268 master-1 kubenswrapper[4771]: I1011 11:03:22.011103 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" containerName="octavia-db-sync" Oct 11 11:03:22.012647 master-1 kubenswrapper[4771]: I1011 11:03:22.012603 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.015814 master-1 kubenswrapper[4771]: I1011 11:03:22.015749 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Oct 11 11:03:22.015983 master-1 kubenswrapper[4771]: I1011 11:03:22.015900 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Oct 11 11:03:22.042388 master-1 kubenswrapper[4771]: I1011 11:03:22.042264 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7f8fd8dbdd-z9j79"] Oct 11 11:03:22.108376 master-1 kubenswrapper[4771]: I1011 11:03:22.108274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-public-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108631 master-1 kubenswrapper[4771]: I1011 11:03:22.108358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-combined-ca-bundle\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108631 master-1 kubenswrapper[4771]: I1011 11:03:22.108436 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-octavia-run\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108631 master-1 kubenswrapper[4771]: I1011 11:03:22.108517 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-ovndb-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108631 master-1 kubenswrapper[4771]: I1011 11:03:22.108571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-scripts\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108888 master-1 kubenswrapper[4771]: I1011 11:03:22.108700 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-internal-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108888 master-1 kubenswrapper[4771]: I1011 11:03:22.108769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.108888 master-1 kubenswrapper[4771]: I1011 11:03:22.108849 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data-merged\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 
11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.211261 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-ovndb-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.211339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-scripts\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.211416 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-internal-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.211476 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.211613 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data-merged\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 
11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.212038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-public-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.212072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-combined-ca-bundle\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.212102 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-octavia-run\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.218879 master-1 kubenswrapper[4771]: I1011 11:03:22.213131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-octavia-run\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.220062 master-1 kubenswrapper[4771]: I1011 11:03:22.219523 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-ovndb-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.220930 
master-1 kubenswrapper[4771]: I1011 11:03:22.220883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data-merged\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.223584 master-1 kubenswrapper[4771]: I1011 11:03:22.223526 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-scripts\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.226798 master-1 kubenswrapper[4771]: I1011 11:03:22.226752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-internal-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.227696 master-1 kubenswrapper[4771]: I1011 11:03:22.227639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-public-tls-certs\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.229177 master-1 kubenswrapper[4771]: I1011 11:03:22.229109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-combined-ca-bundle\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.232381 master-1 kubenswrapper[4771]: 
I1011 11:03:22.232309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a14abd-dac6-484f-8be5-a64699a6318a-config-data\") pod \"octavia-api-7f8fd8dbdd-z9j79\" (UID: \"d7a14abd-dac6-484f-8be5-a64699a6318a\") " pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.331815 master-1 kubenswrapper[4771]: I1011 11:03:22.331762 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:22.960808 master-1 kubenswrapper[4771]: I1011 11:03:22.960748 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7f8fd8dbdd-z9j79"] Oct 11 11:03:23.239060 master-1 kubenswrapper[4771]: I1011 11:03:23.220512 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" event={"ID":"d7a14abd-dac6-484f-8be5-a64699a6318a","Type":"ContainerStarted","Data":"22fa22b2e1e0fff56977fc20f36b3003169ab500f911c05f2401cbd9c83f35f5"} Oct 11 11:03:23.239060 master-1 kubenswrapper[4771]: I1011 11:03:23.220592 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" event={"ID":"d7a14abd-dac6-484f-8be5-a64699a6318a","Type":"ContainerStarted","Data":"e641101a9ca2b11120834b9cee131298e55ecf3dc36dfa33ecae8239b2afd40d"} Oct 11 11:03:24.239576 master-1 kubenswrapper[4771]: I1011 11:03:24.237417 4771 generic.go:334] "Generic (PLEG): container finished" podID="d7a14abd-dac6-484f-8be5-a64699a6318a" containerID="22fa22b2e1e0fff56977fc20f36b3003169ab500f911c05f2401cbd9c83f35f5" exitCode=0 Oct 11 11:03:24.239576 master-1 kubenswrapper[4771]: I1011 11:03:24.237489 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" event={"ID":"d7a14abd-dac6-484f-8be5-a64699a6318a","Type":"ContainerDied","Data":"22fa22b2e1e0fff56977fc20f36b3003169ab500f911c05f2401cbd9c83f35f5"} Oct 11 11:03:24.287124 master-1 
kubenswrapper[4771]: I1011 11:03:24.287031 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:24.287604 master-1 kubenswrapper[4771]: I1011 11:03:24.287545 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-central-agent" containerID="cri-o://2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea" gracePeriod=30 Oct 11 11:03:24.287923 master-1 kubenswrapper[4771]: I1011 11:03:24.287863 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="proxy-httpd" containerID="cri-o://d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f" gracePeriod=30 Oct 11 11:03:24.288143 master-1 kubenswrapper[4771]: I1011 11:03:24.288094 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-notification-agent" containerID="cri-o://f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692" gracePeriod=30 Oct 11 11:03:24.288670 master-1 kubenswrapper[4771]: I1011 11:03:24.288219 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="sg-core" containerID="cri-o://ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3" gracePeriod=30 Oct 11 11:03:24.399840 master-1 kubenswrapper[4771]: I1011 11:03:24.399744 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.129.0.185:3000/\": read tcp 10.129.0.2:56100->10.129.0.185:3000: read: connection reset by peer" Oct 11 11:03:25.251194 master-1 
kubenswrapper[4771]: I1011 11:03:25.251106 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" event={"ID":"d7a14abd-dac6-484f-8be5-a64699a6318a","Type":"ContainerStarted","Data":"79831f65d452f5ad8ac5b696a3cd986522fd67e265fdd4f945f48ec4a2ef905f"} Oct 11 11:03:25.251194 master-1 kubenswrapper[4771]: I1011 11:03:25.251177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" event={"ID":"d7a14abd-dac6-484f-8be5-a64699a6318a","Type":"ContainerStarted","Data":"bf0aa14e0e1efa873d57b5787f324c374fc683b1feac774fc95901a1f4611c7f"} Oct 11 11:03:25.252178 master-1 kubenswrapper[4771]: I1011 11:03:25.251654 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:25.252178 master-1 kubenswrapper[4771]: I1011 11:03:25.251683 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" Oct 11 11:03:25.255381 master-1 kubenswrapper[4771]: I1011 11:03:25.255307 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerDied","Data":"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f"} Oct 11 11:03:25.255792 master-1 kubenswrapper[4771]: I1011 11:03:25.255721 4771 generic.go:334] "Generic (PLEG): container finished" podID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerID="d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f" exitCode=0 Oct 11 11:03:25.255792 master-1 kubenswrapper[4771]: I1011 11:03:25.255785 4771 generic.go:334] "Generic (PLEG): container finished" podID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerID="ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3" exitCode=2 Oct 11 11:03:25.256005 master-1 kubenswrapper[4771]: I1011 11:03:25.255809 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerID="2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea" exitCode=0 Oct 11 11:03:25.256005 master-1 kubenswrapper[4771]: I1011 11:03:25.255845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerDied","Data":"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3"} Oct 11 11:03:25.256005 master-1 kubenswrapper[4771]: I1011 11:03:25.255881 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerDied","Data":"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea"} Oct 11 11:03:25.299486 master-1 kubenswrapper[4771]: I1011 11:03:25.298865 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-7f8fd8dbdd-z9j79" podStartSLOduration=4.29878046 podStartE2EDuration="4.29878046s" podCreationTimestamp="2025-10-11 11:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:03:25.293677803 +0000 UTC m=+2237.267904274" watchObservedRunningTime="2025-10-11 11:03:25.29878046 +0000 UTC m=+2237.273006901" Oct 11 11:03:26.176242 master-1 kubenswrapper[4771]: I1011 11:03:26.176132 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 11:03:26.280040 master-1 kubenswrapper[4771]: I1011 11:03:26.279958 4771 generic.go:334] "Generic (PLEG): container finished" podID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerID="f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692" exitCode=0 Oct 11 11:03:26.280040 master-1 kubenswrapper[4771]: I1011 11:03:26.280035 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 11:03:26.280992 master-1 kubenswrapper[4771]: I1011 11:03:26.280046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerDied","Data":"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692"} Oct 11 11:03:26.280992 master-1 kubenswrapper[4771]: I1011 11:03:26.280311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d199f8f0-ae54-4c5c-b8af-f3058547edf1","Type":"ContainerDied","Data":"1f5edb60e620fedfd68b508b4b511f9365534396f20ac698191ef7d2d59c6bfb"} Oct 11 11:03:26.280992 master-1 kubenswrapper[4771]: I1011 11:03:26.280338 4771 scope.go:117] "RemoveContainer" containerID="d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f" Oct 11 11:03:26.304770 master-1 kubenswrapper[4771]: I1011 11:03:26.304593 4771 scope.go:117] "RemoveContainer" containerID="ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3" Oct 11 11:03:26.321543 master-1 kubenswrapper[4771]: I1011 11:03:26.321487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.321543 master-1 kubenswrapper[4771]: I1011 11:03:26.321543 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321672 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc67p\" (UniqueName: 
\"kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321760 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321870 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321911 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321957 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322002 master-1 kubenswrapper[4771]: I1011 11:03:26.321975 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd\") pod \"d199f8f0-ae54-4c5c-b8af-f3058547edf1\" (UID: 
\"d199f8f0-ae54-4c5c-b8af-f3058547edf1\") " Oct 11 11:03:26.322288 master-1 kubenswrapper[4771]: I1011 11:03:26.322189 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:03:26.324202 master-1 kubenswrapper[4771]: I1011 11:03:26.323807 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:03:26.324202 master-1 kubenswrapper[4771]: I1011 11:03:26.324123 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-log-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.324202 master-1 kubenswrapper[4771]: I1011 11:03:26.324167 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d199f8f0-ae54-4c5c-b8af-f3058547edf1-run-httpd\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.327763 master-1 kubenswrapper[4771]: I1011 11:03:26.327730 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts" (OuterVolumeSpecName: "scripts") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:26.329484 master-1 kubenswrapper[4771]: I1011 11:03:26.329437 4771 scope.go:117] "RemoveContainer" containerID="f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692" Oct 11 11:03:26.337333 master-1 kubenswrapper[4771]: I1011 11:03:26.337244 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p" (OuterVolumeSpecName: "kube-api-access-jc67p") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "kube-api-access-jc67p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:03:26.350705 master-1 kubenswrapper[4771]: I1011 11:03:26.350619 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:26.396676 master-1 kubenswrapper[4771]: I1011 11:03:26.396456 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:26.413245 master-1 kubenswrapper[4771]: I1011 11:03:26.413161 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data" (OuterVolumeSpecName: "config-data") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:26.413734 master-1 kubenswrapper[4771]: I1011 11:03:26.413665 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d199f8f0-ae54-4c5c-b8af-f3058547edf1" (UID: "d199f8f0-ae54-4c5c-b8af-f3058547edf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:03:26.426032 master-1 kubenswrapper[4771]: I1011 11:03:26.425957 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-ceilometer-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.426032 master-1 kubenswrapper[4771]: I1011 11:03:26.426013 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-sg-core-conf-yaml\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.426032 master-1 kubenswrapper[4771]: I1011 11:03:26.426029 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.426032 master-1 kubenswrapper[4771]: I1011 11:03:26.426046 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-scripts\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.426032 master-1 kubenswrapper[4771]: I1011 11:03:26.426062 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d199f8f0-ae54-4c5c-b8af-f3058547edf1-config-data\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.426671 master-1 
kubenswrapper[4771]: I1011 11:03:26.426076 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc67p\" (UniqueName: \"kubernetes.io/projected/d199f8f0-ae54-4c5c-b8af-f3058547edf1-kube-api-access-jc67p\") on node \"master-1\" DevicePath \"\"" Oct 11 11:03:26.472098 master-1 kubenswrapper[4771]: I1011 11:03:26.472053 4771 scope.go:117] "RemoveContainer" containerID="2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea" Oct 11 11:03:26.495745 master-1 kubenswrapper[4771]: I1011 11:03:26.495693 4771 scope.go:117] "RemoveContainer" containerID="d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f" Oct 11 11:03:26.496543 master-1 kubenswrapper[4771]: E1011 11:03:26.496505 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f\": container with ID starting with d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f not found: ID does not exist" containerID="d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f" Oct 11 11:03:26.496601 master-1 kubenswrapper[4771]: I1011 11:03:26.496544 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f"} err="failed to get container status \"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f\": rpc error: code = NotFound desc = could not find container \"d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f\": container with ID starting with d509506178d765ed7f4a16904a37ddc1b2c54d25ae15ba1153670674913b8d5f not found: ID does not exist" Oct 11 11:03:26.496601 master-1 kubenswrapper[4771]: I1011 11:03:26.496567 4771 scope.go:117] "RemoveContainer" containerID="ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3" Oct 11 11:03:26.497079 master-1 kubenswrapper[4771]: E1011 11:03:26.497047 
4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3\": container with ID starting with ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3 not found: ID does not exist" containerID="ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3" Oct 11 11:03:26.497133 master-1 kubenswrapper[4771]: I1011 11:03:26.497080 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3"} err="failed to get container status \"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3\": rpc error: code = NotFound desc = could not find container \"ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3\": container with ID starting with ca41f80532d4f090e18655608cd8c3f3de6ab84ef57c4a67c84ac53396fb2ce3 not found: ID does not exist" Oct 11 11:03:26.497133 master-1 kubenswrapper[4771]: I1011 11:03:26.497095 4771 scope.go:117] "RemoveContainer" containerID="f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692" Oct 11 11:03:26.497591 master-1 kubenswrapper[4771]: E1011 11:03:26.497553 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692\": container with ID starting with f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692 not found: ID does not exist" containerID="f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692" Oct 11 11:03:26.497591 master-1 kubenswrapper[4771]: I1011 11:03:26.497585 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692"} err="failed to get container status 
\"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692\": rpc error: code = NotFound desc = could not find container \"f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692\": container with ID starting with f369b17be5d671fc3e32876475f80c5b0218179c8afd4aa1df1108a929725692 not found: ID does not exist" Oct 11 11:03:26.497684 master-1 kubenswrapper[4771]: I1011 11:03:26.497600 4771 scope.go:117] "RemoveContainer" containerID="2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea" Oct 11 11:03:26.499913 master-1 kubenswrapper[4771]: E1011 11:03:26.499860 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea\": container with ID starting with 2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea not found: ID does not exist" containerID="2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea" Oct 11 11:03:26.499989 master-1 kubenswrapper[4771]: I1011 11:03:26.499921 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea"} err="failed to get container status \"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea\": rpc error: code = NotFound desc = could not find container \"2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea\": container with ID starting with 2e42285e6351eb9ee62ba992f1e4f7be63e97a858d54b24d10c71afdf7aff3ea not found: ID does not exist" Oct 11 11:03:26.612312 master-1 kubenswrapper[4771]: I1011 11:03:26.612232 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:26.621999 master-1 kubenswrapper[4771]: I1011 11:03:26.621958 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:26.651595 master-1 kubenswrapper[4771]: I1011 11:03:26.651235 4771 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:26.652235 master-1 kubenswrapper[4771]: E1011 11:03:26.652213 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="sg-core" Oct 11 11:03:26.652352 master-1 kubenswrapper[4771]: I1011 11:03:26.652339 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="sg-core" Oct 11 11:03:26.652461 master-1 kubenswrapper[4771]: E1011 11:03:26.652449 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="proxy-httpd" Oct 11 11:03:26.652517 master-1 kubenswrapper[4771]: I1011 11:03:26.652508 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="proxy-httpd" Oct 11 11:03:26.652593 master-1 kubenswrapper[4771]: E1011 11:03:26.652581 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-central-agent" Oct 11 11:03:26.652682 master-1 kubenswrapper[4771]: I1011 11:03:26.652668 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-central-agent" Oct 11 11:03:26.652796 master-1 kubenswrapper[4771]: E1011 11:03:26.652780 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-notification-agent" Oct 11 11:03:26.652932 master-1 kubenswrapper[4771]: I1011 11:03:26.652913 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-notification-agent" Oct 11 11:03:26.653369 master-1 kubenswrapper[4771]: I1011 11:03:26.653150 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="sg-core" Oct 11 
11:03:26.653523 master-1 kubenswrapper[4771]: I1011 11:03:26.653509 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="proxy-httpd" Oct 11 11:03:26.653854 master-1 kubenswrapper[4771]: I1011 11:03:26.653833 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-central-agent" Oct 11 11:03:26.653932 master-1 kubenswrapper[4771]: I1011 11:03:26.653922 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" containerName="ceilometer-notification-agent" Oct 11 11:03:26.655616 master-1 kubenswrapper[4771]: I1011 11:03:26.655598 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 11:03:26.661445 master-1 kubenswrapper[4771]: I1011 11:03:26.661342 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 11:03:26.661878 master-1 kubenswrapper[4771]: I1011 11:03:26.661849 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 11:03:26.662044 master-1 kubenswrapper[4771]: I1011 11:03:26.662020 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 11:03:26.671432 master-1 kubenswrapper[4771]: I1011 11:03:26.671373 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:26.755016 master-1 kubenswrapper[4771]: I1011 11:03:26.754969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.755488 master-1 kubenswrapper[4771]: I1011 
11:03:26.755463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5gtk\" (UniqueName: \"kubernetes.io/projected/d247640d-5d67-4ba9-a371-0aa12cc122c6-kube-api-access-p5gtk\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.755602 master-1 kubenswrapper[4771]: I1011 11:03:26.755586 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-run-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.755702 master-1 kubenswrapper[4771]: I1011 11:03:26.755690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-scripts\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.755789 master-1 kubenswrapper[4771]: I1011 11:03:26.755777 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.755897 master-1 kubenswrapper[4771]: I1011 11:03:26.755877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-log-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.756082 master-1 kubenswrapper[4771]: I1011 11:03:26.756063 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.756416 master-1 kubenswrapper[4771]: I1011 11:03:26.756260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-config-data\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.858110 master-1 kubenswrapper[4771]: I1011 11:03:26.858073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-config-data\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.858892 master-1 kubenswrapper[4771]: I1011 11:03:26.858872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859021 master-1 kubenswrapper[4771]: I1011 11:03:26.859005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5gtk\" (UniqueName: \"kubernetes.io/projected/d247640d-5d67-4ba9-a371-0aa12cc122c6-kube-api-access-p5gtk\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859157 master-1 kubenswrapper[4771]: I1011 11:03:26.859143 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-run-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859259 master-1 kubenswrapper[4771]: I1011 11:03:26.859246 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-scripts\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859405 master-1 kubenswrapper[4771]: I1011 11:03:26.859391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859512 master-1 kubenswrapper[4771]: I1011 11:03:26.859497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-log-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.859659 master-1 kubenswrapper[4771]: I1011 11:03:26.859642 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.860238 master-1 kubenswrapper[4771]: I1011 11:03:26.860183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-run-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 
11:03:26.860688 master-1 kubenswrapper[4771]: I1011 11:03:26.860634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d247640d-5d67-4ba9-a371-0aa12cc122c6-log-httpd\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.863149 master-1 kubenswrapper[4771]: I1011 11:03:26.863123 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.863721 master-1 kubenswrapper[4771]: I1011 11:03:26.863504 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.864568 master-1 kubenswrapper[4771]: I1011 11:03:26.864543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-scripts\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.864689 master-1 kubenswrapper[4771]: I1011 11:03:26.864601 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.866186 master-1 kubenswrapper[4771]: I1011 11:03:26.866132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d247640d-5d67-4ba9-a371-0aa12cc122c6-config-data\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.888925 master-1 kubenswrapper[4771]: I1011 11:03:26.888891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5gtk\" (UniqueName: \"kubernetes.io/projected/d247640d-5d67-4ba9-a371-0aa12cc122c6-kube-api-access-p5gtk\") pod \"ceilometer-0\" (UID: \"d247640d-5d67-4ba9-a371-0aa12cc122c6\") " pod="openstack/ceilometer-0" Oct 11 11:03:26.979575 master-1 kubenswrapper[4771]: I1011 11:03:26.979470 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 11:03:27.467856 master-1 kubenswrapper[4771]: I1011 11:03:27.467748 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 11:03:28.319672 master-1 kubenswrapper[4771]: I1011 11:03:28.319615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d247640d-5d67-4ba9-a371-0aa12cc122c6","Type":"ContainerStarted","Data":"1a1b47943d68bc0445b87a7d7de76ea6131c5497e89e624301640c77c81a4800"} Oct 11 11:03:28.451718 master-1 kubenswrapper[4771]: I1011 11:03:28.451626 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d199f8f0-ae54-4c5c-b8af-f3058547edf1" path="/var/lib/kubelet/pods/d199f8f0-ae54-4c5c-b8af-f3058547edf1/volumes" Oct 11 11:03:29.328506 master-1 kubenswrapper[4771]: I1011 11:03:29.328435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d247640d-5d67-4ba9-a371-0aa12cc122c6","Type":"ContainerStarted","Data":"423d9d672627475e5bbe8504c353fff88e7d057fbbe5abd832cfbec0141ccb20"} Oct 11 11:03:30.342440 master-1 kubenswrapper[4771]: I1011 11:03:30.341904 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d247640d-5d67-4ba9-a371-0aa12cc122c6","Type":"ContainerStarted","Data":"c280fe9bdf8d1af8f1dcc4147c2d8952707f98e0ff2dfb05da03cdcf2eb663a1"} Oct 11 11:03:32.378616 master-1 kubenswrapper[4771]: I1011 11:03:32.378524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d247640d-5d67-4ba9-a371-0aa12cc122c6","Type":"ContainerStarted","Data":"76136259c130fdcf90abb03fcc84dd6667ae2aab8b9be639ce6e95fb5ec43ca0"} Oct 11 11:03:33.391156 master-1 kubenswrapper[4771]: I1011 11:03:33.391089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d247640d-5d67-4ba9-a371-0aa12cc122c6","Type":"ContainerStarted","Data":"b39399b5d8b4910fc52ba7ebd88e058e6f28aa63e2d7a7745dfc5b1297f52216"} Oct 11 11:03:33.391929 master-1 kubenswrapper[4771]: I1011 11:03:33.391328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 11:03:33.427375 master-1 kubenswrapper[4771]: I1011 11:03:33.427258 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.052192018 podStartE2EDuration="7.427237196s" podCreationTimestamp="2025-10-11 11:03:26 +0000 UTC" firstStartedPulling="2025-10-11 11:03:27.483644926 +0000 UTC m=+2239.457871367" lastFinishedPulling="2025-10-11 11:03:32.858690104 +0000 UTC m=+2244.832916545" observedRunningTime="2025-10-11 11:03:33.423170069 +0000 UTC m=+2245.397396530" watchObservedRunningTime="2025-10-11 11:03:33.427237196 +0000 UTC m=+2245.401463647" Oct 11 11:03:33.967796 master-2 kubenswrapper[4776]: I1011 11:03:33.967638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:33.995434 master-0 kubenswrapper[4790]: I1011 11:03:33.995285 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-j7sxz" Oct 11 11:03:41.255583 master-1 kubenswrapper[4771]: 
I1011 11:03:41.255499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7f8fd8dbdd-z9j79"
Oct 11 11:03:41.325583 master-1 kubenswrapper[4771]: I1011 11:03:41.325527 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7f8fd8dbdd-z9j79"
Oct 11 11:03:41.605298 master-1 kubenswrapper[4771]: I1011 11:03:41.605231 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"]
Oct 11 11:03:41.606276 master-1 kubenswrapper[4771]: I1011 11:03:41.605582 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-84f885c68-ttgvk" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api-provider-agent" containerID="cri-o://e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598" gracePeriod=30
Oct 11 11:03:41.606276 master-1 kubenswrapper[4771]: I1011 11:03:41.605778 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-84f885c68-ttgvk" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api" containerID="cri-o://4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530" gracePeriod=30
Oct 11 11:03:42.057913 master-1 kubenswrapper[4771]: I1011 11:03:42.057835 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"]
Oct 11 11:03:42.058259 master-1 kubenswrapper[4771]: I1011 11:03:42.058202 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-678599687f-vbxnh" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="octavia-amphora-httpd" containerID="cri-o://bb7f7c6b7f921511c546d7f35dc55b164c7677f19e59519f7cff36e97aa18bfd" gracePeriod=30
Oct 11 11:03:42.507557 master-1 kubenswrapper[4771]: I1011 11:03:42.507468 4771 generic.go:334] "Generic (PLEG): container finished" podID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerID="e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598" exitCode=0
Oct 11 11:03:42.507557 master-1 kubenswrapper[4771]: I1011 11:03:42.507538 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerDied","Data":"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"}
Oct 11 11:03:42.514935 master-1 kubenswrapper[4771]: I1011 11:03:42.514871 4771 generic.go:334] "Generic (PLEG): container finished" podID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerID="bb7f7c6b7f921511c546d7f35dc55b164c7677f19e59519f7cff36e97aa18bfd" exitCode=0
Oct 11 11:03:42.515238 master-1 kubenswrapper[4771]: I1011 11:03:42.514944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerDied","Data":"bb7f7c6b7f921511c546d7f35dc55b164c7677f19e59519f7cff36e97aa18bfd"}
Oct 11 11:03:42.798951 master-1 kubenswrapper[4771]: I1011 11:03:42.798875 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-vbxnh"
Oct 11 11:03:42.981074 master-1 kubenswrapper[4771]: I1011 11:03:42.980913 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image\") pod \"30a9a1fe-e08c-4112-a7b8-6616d280405e\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") "
Oct 11 11:03:42.981074 master-1 kubenswrapper[4771]: I1011 11:03:42.981053 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config\") pod \"30a9a1fe-e08c-4112-a7b8-6616d280405e\" (UID: \"30a9a1fe-e08c-4112-a7b8-6616d280405e\") "
Oct 11 11:03:43.026216 master-1 kubenswrapper[4771]: I1011 11:03:43.026135 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "30a9a1fe-e08c-4112-a7b8-6616d280405e" (UID: "30a9a1fe-e08c-4112-a7b8-6616d280405e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:43.055371 master-2 kubenswrapper[4776]: I1011 11:03:43.054752 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v9tlh"]
Oct 11 11:03:43.059028 master-1 kubenswrapper[4771]: I1011 11:03:43.058948 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dnsz5"]
Oct 11 11:03:43.063277 master-2 kubenswrapper[4776]: I1011 11:03:43.063216 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v9tlh"]
Oct 11 11:03:43.069797 master-1 kubenswrapper[4771]: I1011 11:03:43.069727 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dnsz5"]
Oct 11 11:03:43.083678 master-1 kubenswrapper[4771]: I1011 11:03:43.083594 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/30a9a1fe-e08c-4112-a7b8-6616d280405e-httpd-config\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:43.087754 master-1 kubenswrapper[4771]: I1011 11:03:43.087685 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "30a9a1fe-e08c-4112-a7b8-6616d280405e" (UID: "30a9a1fe-e08c-4112-a7b8-6616d280405e"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:03:43.185587 master-1 kubenswrapper[4771]: I1011 11:03:43.185520 4771 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/30a9a1fe-e08c-4112-a7b8-6616d280405e-amphora-image\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:43.534117 master-1 kubenswrapper[4771]: I1011 11:03:43.533994 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-vbxnh" event={"ID":"30a9a1fe-e08c-4112-a7b8-6616d280405e","Type":"ContainerDied","Data":"822d91bade265914abdf5fc87f453e2f88a3133976d26e99e2272117f587739c"}
Oct 11 11:03:43.534117 master-1 kubenswrapper[4771]: I1011 11:03:43.534088 4771 scope.go:117] "RemoveContainer" containerID="bb7f7c6b7f921511c546d7f35dc55b164c7677f19e59519f7cff36e97aa18bfd"
Oct 11 11:03:43.535385 master-1 kubenswrapper[4771]: I1011 11:03:43.534290 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-vbxnh"
Oct 11 11:03:43.581041 master-1 kubenswrapper[4771]: I1011 11:03:43.580994 4771 scope.go:117] "RemoveContainer" containerID="04f298e98d945615f593936f0afe3e98a8fda50794b9ca2f8c1e2dcce3be303c"
Oct 11 11:03:43.604577 master-1 kubenswrapper[4771]: I1011 11:03:43.604489 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"]
Oct 11 11:03:43.627024 master-1 kubenswrapper[4771]: I1011 11:03:43.626930 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-678599687f-vbxnh"]
Oct 11 11:03:44.041915 master-2 kubenswrapper[4776]: I1011 11:03:44.041854 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d7297"]
Oct 11 11:03:44.049743 master-2 kubenswrapper[4776]: I1011 11:03:44.049697 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d7297"]
Oct 11 11:03:44.071653 master-2 kubenswrapper[4776]: I1011 11:03:44.070836 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" path="/var/lib/kubelet/pods/1a7cb456-8a0b-4e56-9dc5-93b488813f77/volumes"
Oct 11 11:03:44.071653 master-2 kubenswrapper[4776]: I1011 11:03:44.071531 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" path="/var/lib/kubelet/pods/829885e7-9e39-447e-a4f0-2ac128443d04/volumes"
Oct 11 11:03:44.454919 master-1 kubenswrapper[4771]: I1011 11:03:44.454844 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" path="/var/lib/kubelet/pods/30a9a1fe-e08c-4112-a7b8-6616d280405e/volumes"
Oct 11 11:03:44.455678 master-1 kubenswrapper[4771]: I1011 11:03:44.455652 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ac9130-d850-4420-a75e-53ec744b16eb" path="/var/lib/kubelet/pods/57ac9130-d850-4420-a75e-53ec744b16eb/volumes"
Oct 11 11:03:45.494061 master-1 kubenswrapper[4771]: I1011 11:03:45.494009 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-84f885c68-ttgvk"
Oct 11 11:03:45.556707 master-1 kubenswrapper[4771]: I1011 11:03:45.556632 4771 generic.go:334] "Generic (PLEG): container finished" podID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerID="4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530" exitCode=0
Oct 11 11:03:45.556707 master-1 kubenswrapper[4771]: I1011 11:03:45.556686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerDied","Data":"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"}
Oct 11 11:03:45.556707 master-1 kubenswrapper[4771]: I1011 11:03:45.556725 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-84f885c68-ttgvk" event={"ID":"a67756a2-aa42-4f6f-b27a-57e16f566883","Type":"ContainerDied","Data":"a5a886249a2d7b53c7a4945ae527e497dd08083a42ca3105272e41c3a3ba5d39"}
Oct 11 11:03:45.557147 master-1 kubenswrapper[4771]: I1011 11:03:45.556727 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-84f885c68-ttgvk"
Oct 11 11:03:45.557147 master-1 kubenswrapper[4771]: I1011 11:03:45.556746 4771 scope.go:117] "RemoveContainer" containerID="e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"
Oct 11 11:03:45.580075 master-1 kubenswrapper[4771]: I1011 11:03:45.580003 4771 scope.go:117] "RemoveContainer" containerID="4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"
Oct 11 11:03:45.603607 master-1 kubenswrapper[4771]: I1011 11:03:45.603552 4771 scope.go:117] "RemoveContainer" containerID="36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c"
Oct 11 11:03:45.628117 master-1 kubenswrapper[4771]: I1011 11:03:45.628083 4771 scope.go:117] "RemoveContainer" containerID="e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"
Oct 11 11:03:45.628570 master-1 kubenswrapper[4771]: E1011 11:03:45.628542 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598\": container with ID starting with e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598 not found: ID does not exist" containerID="e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"
Oct 11 11:03:45.628748 master-1 kubenswrapper[4771]: I1011 11:03:45.628582 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598"} err="failed to get container status \"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598\": rpc error: code = NotFound desc = could not find container \"e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598\": container with ID starting with e41aadf1122cacc740648e97ac9f4e96f2409d4d65d6150033e5ce9dfc2c1598 not found: ID does not exist"
Oct 11 11:03:45.628748 master-1 kubenswrapper[4771]: I1011 11:03:45.628610 4771 scope.go:117] "RemoveContainer" containerID="4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"
Oct 11 11:03:45.629457 master-1 kubenswrapper[4771]: E1011 11:03:45.629418 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530\": container with ID starting with 4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530 not found: ID does not exist" containerID="4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"
Oct 11 11:03:45.629607 master-1 kubenswrapper[4771]: I1011 11:03:45.629581 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530"} err="failed to get container status \"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530\": rpc error: code = NotFound desc = could not find container \"4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530\": container with ID starting with 4b1b4afdacfba7ecd1fb1011c6d887849fae4d85214d8f2c59893edc87098530 not found: ID does not exist"
Oct 11 11:03:45.629700 master-1 kubenswrapper[4771]: I1011 11:03:45.629685 4771 scope.go:117] "RemoveContainer" containerID="36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c"
Oct 11 11:03:45.630104 master-1 kubenswrapper[4771]: E1011 11:03:45.630077 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c\": container with ID starting with 36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c not found: ID does not exist" containerID="36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c"
Oct 11 11:03:45.630177 master-1 kubenswrapper[4771]: I1011 11:03:45.630105 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c"} err="failed to get container status \"36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c\": rpc error: code = NotFound desc = could not find container \"36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c\": container with ID starting with 36a2a93821ed51d9e67c4b879d36463d95dbf4ca7d7ecbdb8119182e791d541c not found: ID does not exist"
Oct 11 11:03:45.657135 master-1 kubenswrapper[4771]: I1011 11:03:45.657004 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657135 master-1 kubenswrapper[4771]: I1011 11:03:45.657101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657135 master-1 kubenswrapper[4771]: I1011 11:03:45.657119 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657442 master-1 kubenswrapper[4771]: I1011 11:03:45.657266 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657442 master-1 kubenswrapper[4771]: I1011 11:03:45.657307 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657442 master-1 kubenswrapper[4771]: I1011 11:03:45.657402 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged\") pod \"a67756a2-aa42-4f6f-b27a-57e16f566883\" (UID: \"a67756a2-aa42-4f6f-b27a-57e16f566883\") "
Oct 11 11:03:45.657635 master-1 kubenswrapper[4771]: I1011 11:03:45.657499 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:03:45.657917 master-1 kubenswrapper[4771]: I1011 11:03:45.657895 4771 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-octavia-run\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.663658 master-1 kubenswrapper[4771]: I1011 11:03:45.663618 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data" (OuterVolumeSpecName: "config-data") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:45.663808 master-1 kubenswrapper[4771]: I1011 11:03:45.663732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts" (OuterVolumeSpecName: "scripts") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:45.698729 master-1 kubenswrapper[4771]: I1011 11:03:45.698557 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:03:45.710561 master-1 kubenswrapper[4771]: I1011 11:03:45.710491 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:45.762561 master-1 kubenswrapper[4771]: I1011 11:03:45.762333 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.762561 master-1 kubenswrapper[4771]: I1011 11:03:45.762540 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-scripts\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.762561 master-1 kubenswrapper[4771]: I1011 11:03:45.762567 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.762897 master-1 kubenswrapper[4771]: I1011 11:03:45.762592 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a67756a2-aa42-4f6f-b27a-57e16f566883-config-data-merged\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.779456 master-1 kubenswrapper[4771]: I1011 11:03:45.779395 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a67756a2-aa42-4f6f-b27a-57e16f566883" (UID: "a67756a2-aa42-4f6f-b27a-57e16f566883"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:03:45.865446 master-1 kubenswrapper[4771]: I1011 11:03:45.865328 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a67756a2-aa42-4f6f-b27a-57e16f566883-ovndb-tls-certs\") on node \"master-1\" DevicePath \"\""
Oct 11 11:03:45.964070 master-1 kubenswrapper[4771]: I1011 11:03:45.963962 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"]
Oct 11 11:03:45.971942 master-1 kubenswrapper[4771]: I1011 11:03:45.971884 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-84f885c68-ttgvk"]
Oct 11 11:03:46.451999 master-1 kubenswrapper[4771]: I1011 11:03:46.451877 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" path="/var/lib/kubelet/pods/a67756a2-aa42-4f6f-b27a-57e16f566883/volumes"
Oct 11 11:03:47.483731 master-1 kubenswrapper[4771]: I1011 11:03:47.483658 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-6wlhj"]
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: E1011 11:03:47.484069 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="octavia-amphora-httpd"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484088 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="octavia-amphora-httpd"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: E1011 11:03:47.484125 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api-provider-agent"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484133 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api-provider-agent"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: E1011 11:03:47.484147 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="init"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484155 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="init"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: E1011 11:03:47.484183 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="init"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484191 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="init"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: E1011 11:03:47.484202 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484211 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484433 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a9a1fe-e08c-4112-a7b8-6616d280405e" containerName="octavia-amphora-httpd"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484456 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api"
Oct 11 11:03:47.484460 master-1 kubenswrapper[4771]: I1011 11:03:47.484467 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67756a2-aa42-4f6f-b27a-57e16f566883" containerName="octavia-api-provider-agent"
Oct 11 11:03:47.485820 master-1 kubenswrapper[4771]: I1011 11:03:47.485772 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.489769 master-1 kubenswrapper[4771]: I1011 11:03:47.489707 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Oct 11 11:03:47.502001 master-1 kubenswrapper[4771]: I1011 11:03:47.501957 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-6wlhj"]
Oct 11 11:03:47.604673 master-1 kubenswrapper[4771]: I1011 11:03:47.604596 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/420f57b5-0516-421c-9eb8-9292d25b970d-amphora-image\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.604962 master-1 kubenswrapper[4771]: I1011 11:03:47.604878 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/420f57b5-0516-421c-9eb8-9292d25b970d-httpd-config\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.707172 master-1 kubenswrapper[4771]: I1011 11:03:47.707099 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/420f57b5-0516-421c-9eb8-9292d25b970d-httpd-config\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.707440 master-1 kubenswrapper[4771]: I1011 11:03:47.707235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/420f57b5-0516-421c-9eb8-9292d25b970d-amphora-image\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.707943 master-1 kubenswrapper[4771]: I1011 11:03:47.707908 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/420f57b5-0516-421c-9eb8-9292d25b970d-amphora-image\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.716048 master-1 kubenswrapper[4771]: I1011 11:03:47.715983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/420f57b5-0516-421c-9eb8-9292d25b970d-httpd-config\") pod \"octavia-image-upload-678599687f-6wlhj\" (UID: \"420f57b5-0516-421c-9eb8-9292d25b970d\") " pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:47.854249 master-1 kubenswrapper[4771]: I1011 11:03:47.854187 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-6wlhj"
Oct 11 11:03:48.280689 master-1 kubenswrapper[4771]: I1011 11:03:48.279183 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-6wlhj"]
Oct 11 11:03:48.284986 master-1 kubenswrapper[4771]: W1011 11:03:48.284932 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420f57b5_0516_421c_9eb8_9292d25b970d.slice/crio-0d1e950d86bc9f254cc676bdf73db3c7876708e150adca1f5aa8577ebd603f8f WatchSource:0}: Error finding container 0d1e950d86bc9f254cc676bdf73db3c7876708e150adca1f5aa8577ebd603f8f: Status 404 returned error can't find the container with id 0d1e950d86bc9f254cc676bdf73db3c7876708e150adca1f5aa8577ebd603f8f
Oct 11 11:03:48.590406 master-1 kubenswrapper[4771]: I1011 11:03:48.590311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-6wlhj" event={"ID":"420f57b5-0516-421c-9eb8-9292d25b970d","Type":"ContainerStarted","Data":"0d1e950d86bc9f254cc676bdf73db3c7876708e150adca1f5aa8577ebd603f8f"}
Oct 11 11:03:49.605908 master-1 kubenswrapper[4771]: I1011 11:03:49.605796 4771 generic.go:334] "Generic (PLEG): container finished" podID="420f57b5-0516-421c-9eb8-9292d25b970d" containerID="dd07cd4c096ade4f01dd9d10c239a3de8236d4cbc66416bdef796255440f219a" exitCode=0
Oct 11 11:03:49.605908 master-1 kubenswrapper[4771]: I1011 11:03:49.605902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-6wlhj" event={"ID":"420f57b5-0516-421c-9eb8-9292d25b970d","Type":"ContainerDied","Data":"dd07cd4c096ade4f01dd9d10c239a3de8236d4cbc66416bdef796255440f219a"}
Oct 11 11:03:50.623236 master-1 kubenswrapper[4771]: I1011 11:03:50.623132 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-6wlhj" event={"ID":"420f57b5-0516-421c-9eb8-9292d25b970d","Type":"ContainerStarted","Data":"89c0833ca2cccadd62ed3e1986975324540ae427e61e10a9a0811c668a54aeb4"}
Oct 11 11:03:50.674855 master-1 kubenswrapper[4771]: I1011 11:03:50.674756 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-6wlhj" podStartSLOduration=3.221825886 podStartE2EDuration="3.6747289s" podCreationTimestamp="2025-10-11 11:03:47 +0000 UTC" firstStartedPulling="2025-10-11 11:03:48.287933879 +0000 UTC m=+2260.262160350" lastFinishedPulling="2025-10-11 11:03:48.740836923 +0000 UTC m=+2260.715063364" observedRunningTime="2025-10-11 11:03:50.65766053 +0000 UTC m=+2262.631887031" watchObservedRunningTime="2025-10-11 11:03:50.6747289 +0000 UTC m=+2262.648955341"
Oct 11 11:03:52.055075 master-0 kubenswrapper[4790]: I1011 11:03:52.054941 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-113b-account-create-twjxb"]
Oct 11 11:03:52.068492 master-0 kubenswrapper[4790]: I1011 11:03:52.068369 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-113b-account-create-twjxb"]
Oct 11 11:03:52.304290 master-0 kubenswrapper[4790]: I1011 11:03:52.304161 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70cbbe93-7c50-40cb-91f4-f75c8875580d" path="/var/lib/kubelet/pods/70cbbe93-7c50-40cb-91f4-f75c8875580d/volumes"
Oct 11 11:03:56.993326 master-1 kubenswrapper[4771]: I1011 11:03:56.993241 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 11 11:03:59.047943 master-0 kubenswrapper[4790]: I1011 11:03:59.047864 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-af51-account-create-tz8f4"]
Oct 11 11:03:59.055522 master-0 kubenswrapper[4790]: I1011 11:03:59.055463 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0a7a-account-create-9c44k"]
Oct 11 11:03:59.065297 master-0 kubenswrapper[4790]: I1011 11:03:59.065214 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-af51-account-create-tz8f4"]
Oct 11 11:03:59.104467 master-0 kubenswrapper[4790]: I1011 11:03:59.104376 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0a7a-account-create-9c44k"]
Oct 11 11:04:00.303666 master-0 kubenswrapper[4790]: I1011 11:04:00.303614 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4838cae2-31c3-4b4d-a914-e95b0b6308be" path="/var/lib/kubelet/pods/4838cae2-31c3-4b4d-a914-e95b0b6308be/volumes"
Oct 11 11:04:00.304228 master-0 kubenswrapper[4790]: I1011 11:04:00.304174 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21954bc-9fb3-4d4e-8085-b2fcf628e0a5" path="/var/lib/kubelet/pods/c21954bc-9fb3-4d4e-8085-b2fcf628e0a5/volumes"
Oct 11 11:04:08.758918 master-0 kubenswrapper[4790]: I1011 11:04:08.758862 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-wmmzh"]
Oct 11 11:04:08.761879 master-0 kubenswrapper[4790]: I1011 11:04:08.761851 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-wmmzh"
Oct 11 11:04:08.768136 master-0 kubenswrapper[4790]: I1011 11:04:08.768059 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Oct 11 11:04:08.768526 master-0 kubenswrapper[4790]: I1011 11:04:08.768488 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Oct 11 11:04:08.768766 master-0 kubenswrapper[4790]: I1011 11:04:08.768695 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Oct 11 11:04:08.778288 master-1 kubenswrapper[4771]: I1011 11:04:08.778204 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-j6frc"]
Oct 11 11:04:08.780638 master-1 kubenswrapper[4771]: I1011 11:04:08.780591 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.785912 master-1 kubenswrapper[4771]: I1011 11:04:08.785867 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Oct 11 11:04:08.786106 master-1 kubenswrapper[4771]: I1011 11:04:08.786082 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Oct 11 11:04:08.786280 master-1 kubenswrapper[4771]: I1011 11:04:08.786237 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Oct 11 11:04:08.788586 master-2 kubenswrapper[4776]: I1011 11:04:08.785145 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-4h92c"]
Oct 11 11:04:08.789217 master-2 kubenswrapper[4776]: I1011 11:04:08.788971 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.793284 master-2 kubenswrapper[4776]: I1011 11:04:08.793246 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Oct 11 11:04:08.794460 master-2 kubenswrapper[4776]: I1011 11:04:08.794417 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Oct 11 11:04:08.796481 master-2 kubenswrapper[4776]: I1011 11:04:08.796397 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Oct 11 11:04:08.808042 master-1 kubenswrapper[4771]: I1011 11:04:08.807972 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-j6frc"]
Oct 11 11:04:08.812549 master-2 kubenswrapper[4776]: I1011 11:04:08.812488 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4h92c"]
Oct 11 11:04:08.824085 master-0 kubenswrapper[4790]: I1011 11:04:08.823943 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-wmmzh"]
Oct 11 11:04:08.843793 master-1 kubenswrapper[4771]: I1011 11:04:08.843705 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-amphora-certs\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.843793 master-1 kubenswrapper[4771]: I1011 11:04:08.843775 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-combined-ca-bundle\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.844221 master-1 kubenswrapper[4771]: I1011 11:04:08.843829 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c02195a5-59ae-446e-9984-abafa5c03ce5-hm-ports\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.844221 master-1 kubenswrapper[4771]: I1011 11:04:08.843856 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data-merged\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.844221 master-1 kubenswrapper[4771]: I1011 11:04:08.844000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.844221 master-1 kubenswrapper[4771]: I1011 11:04:08.844080 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-scripts\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:08.852915 master-2 kubenswrapper[4776]: I1011 11:04:08.852849 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.852953 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853000 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853062 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853087 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853153 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.903000 master-0 kubenswrapper[4790]: I1011 11:04:08.902914 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-scripts\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.903335 master-0 kubenswrapper[4790]: I1011 11:04:08.903030 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-amphora-certs\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.903335 master-0 kubenswrapper[4790]: I1011 11:04:08.903113 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-combined-ca-bundle\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.903335 master-0 kubenswrapper[4790]: I1011 11:04:08.903288 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.903542 master-0 kubenswrapper[4790]: I1011 11:04:08.903516 4790 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-hm-ports\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.903642 master-0 kubenswrapper[4790]: I1011 11:04:08.903614 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data-merged\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:08.945063 master-1 kubenswrapper[4771]: I1011 11:04:08.945008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.945457 master-1 kubenswrapper[4771]: I1011 11:04:08.945438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-scripts\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.945574 master-1 kubenswrapper[4771]: I1011 11:04:08.945560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-amphora-certs\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.945666 master-1 kubenswrapper[4771]: I1011 11:04:08.945651 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-combined-ca-bundle\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.945783 master-1 kubenswrapper[4771]: I1011 11:04:08.945769 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c02195a5-59ae-446e-9984-abafa5c03ce5-hm-ports\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.945885 master-1 kubenswrapper[4771]: I1011 11:04:08.945873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data-merged\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.946486 master-1 kubenswrapper[4771]: I1011 11:04:08.946470 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data-merged\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.948983 master-1 kubenswrapper[4771]: I1011 11:04:08.948923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c02195a5-59ae-446e-9984-abafa5c03ce5-hm-ports\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.950672 master-1 kubenswrapper[4771]: I1011 11:04:08.950652 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-config-data\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.951621 master-1 kubenswrapper[4771]: I1011 11:04:08.951599 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-amphora-certs\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.952845 master-1 kubenswrapper[4771]: I1011 11:04:08.952784 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-scripts\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.954602 master-1 kubenswrapper[4771]: I1011 11:04:08.954551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02195a5-59ae-446e-9984-abafa5c03ce5-combined-ca-bundle\") pod \"octavia-healthmanager-j6frc\" (UID: \"c02195a5-59ae-446e-9984-abafa5c03ce5\") " pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:08.955149 master-2 kubenswrapper[4776]: I1011 11:04:08.955081 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955373 master-2 kubenswrapper[4776]: I1011 11:04:08.955199 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955489 master-2 kubenswrapper[4776]: I1011 11:04:08.955454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955542 master-2 kubenswrapper[4776]: I1011 11:04:08.955525 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955592 master-2 kubenswrapper[4776]: I1011 11:04:08.955572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955662 master-2 kubenswrapper[4776]: I1011 11:04:08.955642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.956331 master-2 kubenswrapper[4776]: I1011 11:04:08.956277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.958611 master-2 kubenswrapper[4776]: I1011 11:04:08.957503 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.960472 master-2 kubenswrapper[4776]: I1011 11:04:08.960348 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.960622 master-2 kubenswrapper[4776]: I1011 11:04:08.960584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.961727 master-2 kubenswrapper[4776]: I1011 11:04:08.961264 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.961727 master-2 kubenswrapper[4776]: I1011 11:04:08.961630 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:09.005405 master-0 kubenswrapper[4790]: I1011 11:04:09.005237 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-combined-ca-bundle\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.005405 master-0 kubenswrapper[4790]: I1011 11:04:09.005309 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.005405 master-0 kubenswrapper[4790]: I1011 11:04:09.005348 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-hm-ports\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.005405 master-0 kubenswrapper[4790]: I1011 11:04:09.005371 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data-merged\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.005845 master-0 kubenswrapper[4790]: I1011 11:04:09.005441 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-scripts\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.005845 master-0 kubenswrapper[4790]: I1011 11:04:09.005474 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-amphora-certs\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.006483 master-0 kubenswrapper[4790]: I1011 11:04:09.006429 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data-merged\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.007844 master-0 kubenswrapper[4790]: I1011 11:04:09.007786 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-hm-ports\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.010009 master-0 kubenswrapper[4790]: I1011 11:04:09.009975 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-combined-ca-bundle\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.010334 master-0 kubenswrapper[4790]: I1011 11:04:09.010297 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-amphora-certs\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.013074 master-0 kubenswrapper[4790]: I1011 11:04:09.013027 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-config-data\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.015985 master-0 kubenswrapper[4790]: I1011 11:04:09.015921 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b23d0bf-c533-41d4-aa06-4cbb6bcda90d-scripts\") pod \"octavia-healthmanager-wmmzh\" (UID: \"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d\") " pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.104252 master-1 kubenswrapper[4771]: I1011 11:04:09.104176 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-j6frc" Oct 11 11:04:09.104713 master-2 kubenswrapper[4776]: I1011 11:04:09.104557 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:09.107182 master-0 kubenswrapper[4790]: I1011 11:04:09.107050 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:09.693123 master-2 kubenswrapper[4776]: I1011 11:04:09.693052 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4h92c"] Oct 11 11:04:09.897259 master-1 kubenswrapper[4771]: I1011 11:04:09.897197 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-j6frc"] Oct 11 11:04:10.173815 master-2 kubenswrapper[4776]: I1011 11:04:10.173737 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"e0da58750383ace39a89ddfe7e45a633fc74849fe1c0d62dadbe6c821a5735d7"} Oct 11 11:04:10.586440 master-0 kubenswrapper[4790]: I1011 11:04:10.586377 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-wmmzh"] Oct 11 11:04:10.593048 master-0 kubenswrapper[4790]: W1011 11:04:10.592962 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b23d0bf_c533_41d4_aa06_4cbb6bcda90d.slice/crio-2932d56e27a8a6f750a2db321cab98a75e2c3e86af7cc35652f038c46413cb3f WatchSource:0}: Error finding container 2932d56e27a8a6f750a2db321cab98a75e2c3e86af7cc35652f038c46413cb3f: Status 404 returned error can't find the container with id 2932d56e27a8a6f750a2db321cab98a75e2c3e86af7cc35652f038c46413cb3f Oct 11 11:04:10.867185 master-1 kubenswrapper[4771]: I1011 11:04:10.867119 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6frc" event={"ID":"c02195a5-59ae-446e-9984-abafa5c03ce5","Type":"ContainerStarted","Data":"f78da3624b576ac119751eab0fd297c9217f5f3b2082190b2dcca93bfd67d131"} Oct 11 11:04:10.867185 master-1 kubenswrapper[4771]: I1011 11:04:10.867186 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6frc" 
event={"ID":"c02195a5-59ae-446e-9984-abafa5c03ce5","Type":"ContainerStarted","Data":"0da54d2dd711becbc625a7d84e560e520bc18693d22425761a185f6ecc5e0bc7"} Oct 11 11:04:11.022109 master-2 kubenswrapper[4776]: I1011 11:04:11.022036 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.023887 master-2 kubenswrapper[4776]: I1011 11:04:11.023853 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.026788 master-2 kubenswrapper[4776]: I1011 11:04:11.026733 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 11 11:04:11.026919 master-2 kubenswrapper[4776]: I1011 11:04:11.026856 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 11 11:04:11.041740 master-0 kubenswrapper[4790]: I1011 11:04:11.040054 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-8wwvc"] Oct 11 11:04:11.042674 master-0 kubenswrapper[4790]: I1011 11:04:11.042592 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.047198 master-0 kubenswrapper[4790]: I1011 11:04:11.047147 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 11 11:04:11.047505 master-0 kubenswrapper[4790]: I1011 11:04:11.047454 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 11 11:04:11.047804 master-1 kubenswrapper[4771]: I1011 11:04:11.047735 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-hjq8j"] Oct 11 11:04:11.050330 master-1 kubenswrapper[4771]: I1011 11:04:11.050199 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.051756 master-2 kubenswrapper[4776]: I1011 11:04:11.051640 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.059669 master-1 kubenswrapper[4771]: I1011 11:04:11.059609 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 11 11:04:11.060053 master-1 kubenswrapper[4771]: I1011 11:04:11.059614 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 11 11:04:11.066598 master-0 kubenswrapper[4790]: I1011 11:04:11.066517 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-8wwvc"] Oct 11 11:04:11.067440 master-1 kubenswrapper[4771]: I1011 11:04:11.067253 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-hjq8j"] Oct 11 11:04:11.098878 master-2 kubenswrapper[4776]: I1011 11:04:11.098826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099110 master-2 kubenswrapper[4776]: I1011 11:04:11.098959 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099110 master-2 kubenswrapper[4776]: I1011 11:04:11.099029 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099118 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099155 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099195 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099464 master-1 kubenswrapper[4771]: I1011 11:04:11.099367 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-amphora-certs\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.099695 master-1 kubenswrapper[4771]: I1011 11:04:11.099499 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data-merged\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.099695 master-1 kubenswrapper[4771]: I1011 11:04:11.099550 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-scripts\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.099988 master-1 kubenswrapper[4771]: I1011 11:04:11.099739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.100053 master-1 kubenswrapper[4771]: I1011 11:04:11.100007 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4147847f-c490-4bc6-a2da-298d3a3e188b-hm-ports\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.100269 master-1 kubenswrapper[4771]: I1011 11:04:11.100244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-combined-ca-bundle\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.161945 master-0 
kubenswrapper[4790]: I1011 11:04:11.161884 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data-merged\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.162249 master-0 kubenswrapper[4790]: I1011 11:04:11.161970 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-combined-ca-bundle\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.162249 master-0 kubenswrapper[4790]: I1011 11:04:11.162044 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cec8d92b-373a-45a6-926a-6e2ae2a2645d-hm-ports\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.162249 master-0 kubenswrapper[4790]: I1011 11:04:11.162077 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-scripts\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.162249 master-0 kubenswrapper[4790]: I1011 11:04:11.162201 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-amphora-certs\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " 
pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.162249 master-0 kubenswrapper[4790]: I1011 11:04:11.162221 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.188860 master-2 kubenswrapper[4776]: I1011 11:04:11.188815 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701"} Oct 11 11:04:11.201690 master-2 kubenswrapper[4776]: I1011 11:04:11.201592 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.201926 master-2 kubenswrapper[4776]: I1011 11:04:11.201754 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.201926 master-2 kubenswrapper[4776]: I1011 11:04:11.201869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202023 master-2 kubenswrapper[4776]: I1011 
11:04:11.201958 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202060 master-2 kubenswrapper[4776]: I1011 11:04:11.202042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202145 master-2 kubenswrapper[4776]: I1011 11:04:11.202072 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202880 master-2 kubenswrapper[4776]: I1011 11:04:11.202831 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 11:04:11.202872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-combined-ca-bundle\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 
11:04:11.202984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-amphora-certs\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 11:04:11.203034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data-merged\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 11:04:11.203068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-scripts\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 11:04:11.203094 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.203653 master-1 kubenswrapper[4771]: I1011 11:04:11.203145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4147847f-c490-4bc6-a2da-298d3a3e188b-hm-ports\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.204616 master-2 kubenswrapper[4776]: I1011 11:04:11.204547 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.204618 master-1 kubenswrapper[4771]: I1011 11:04:11.204143 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data-merged\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.205272 master-1 kubenswrapper[4771]: I1011 11:04:11.205124 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/4147847f-c490-4bc6-a2da-298d3a3e188b-hm-ports\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.207411 master-2 kubenswrapper[4776]: I1011 11:04:11.206885 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.207411 master-2 kubenswrapper[4776]: I1011 11:04:11.207368 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.207520 master-2 kubenswrapper[4776]: I1011 11:04:11.207345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.208036 master-1 kubenswrapper[4771]: I1011 11:04:11.207951 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-amphora-certs\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.210192 master-1 kubenswrapper[4771]: I1011 11:04:11.210116 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-config-data\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.211742 master-1 kubenswrapper[4771]: I1011 11:04:11.211203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-combined-ca-bundle\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.213463 master-1 kubenswrapper[4771]: I1011 11:04:11.213400 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4147847f-c490-4bc6-a2da-298d3a3e188b-scripts\") pod \"octavia-housekeeping-hjq8j\" (UID: \"4147847f-c490-4bc6-a2da-298d3a3e188b\") " pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.236833 master-2 kubenswrapper[4776]: I1011 11:04:11.236715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod 
\"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.265204 master-0 kubenswrapper[4790]: I1011 11:04:11.265037 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-amphora-certs\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265204 master-0 kubenswrapper[4790]: I1011 11:04:11.265090 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265204 master-0 kubenswrapper[4790]: I1011 11:04:11.265124 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data-merged\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265204 master-0 kubenswrapper[4790]: I1011 11:04:11.265154 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-combined-ca-bundle\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265204 master-0 kubenswrapper[4790]: I1011 11:04:11.265203 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cec8d92b-373a-45a6-926a-6e2ae2a2645d-hm-ports\") pod 
\"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265591 master-0 kubenswrapper[4790]: I1011 11:04:11.265245 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-scripts\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.265973 master-0 kubenswrapper[4790]: I1011 11:04:11.265935 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data-merged\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.270539 master-0 kubenswrapper[4790]: I1011 11:04:11.270494 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/cec8d92b-373a-45a6-926a-6e2ae2a2645d-hm-ports\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.278728 master-0 kubenswrapper[4790]: I1011 11:04:11.274800 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-scripts\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.278728 master-0 kubenswrapper[4790]: I1011 11:04:11.276056 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-amphora-certs\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " 
pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.278728 master-0 kubenswrapper[4790]: I1011 11:04:11.276402 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-combined-ca-bundle\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.278728 master-0 kubenswrapper[4790]: I1011 11:04:11.270512 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec8d92b-373a-45a6-926a-6e2ae2a2645d-config-data\") pod \"octavia-housekeeping-8wwvc\" (UID: \"cec8d92b-373a-45a6-926a-6e2ae2a2645d\") " pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.360787 master-2 kubenswrapper[4776]: I1011 11:04:11.340525 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.376539 master-1 kubenswrapper[4771]: I1011 11:04:11.375331 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-hjq8j" Oct 11 11:04:11.389701 master-0 kubenswrapper[4790]: I1011 11:04:11.387967 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:11.565806 master-0 kubenswrapper[4790]: I1011 11:04:11.565634 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-wmmzh" event={"ID":"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d","Type":"ContainerStarted","Data":"197ff42e3cee7c14a2bd909bd444bf54fe56101f70a5136c28611d688198512d"} Oct 11 11:04:11.565806 master-0 kubenswrapper[4790]: I1011 11:04:11.565726 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-wmmzh" event={"ID":"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d","Type":"ContainerStarted","Data":"2932d56e27a8a6f750a2db321cab98a75e2c3e86af7cc35652f038c46413cb3f"} Oct 11 11:04:11.928195 master-2 kubenswrapper[4776]: I1011 11:04:11.928149 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.928562 master-2 kubenswrapper[4776]: W1011 11:04:11.928511 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb714c56b_9901_470b_ba8d_790c638ddd43.slice/crio-2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7 WatchSource:0}: Error finding container 2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7: Status 404 returned error can't find the container with id 2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7 Oct 11 11:04:12.216689 master-2 kubenswrapper[4776]: I1011 11:04:12.216554 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7"} Oct 11 11:04:12.219586 master-2 kubenswrapper[4776]: I1011 11:04:12.219530 4776 generic.go:334] "Generic (PLEG): container finished" podID="a4c01261-6836-4eb3-9dca-826e486273ec" 
containerID="6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701" exitCode=0 Oct 11 11:04:12.219661 master-2 kubenswrapper[4776]: I1011 11:04:12.219582 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerDied","Data":"6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701"} Oct 11 11:04:12.830970 master-2 kubenswrapper[4776]: I1011 11:04:12.830894 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:12.832433 master-2 kubenswrapper[4776]: I1011 11:04:12.832404 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.835381 master-2 kubenswrapper[4776]: I1011 11:04:12.835322 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 11 11:04:12.836044 master-2 kubenswrapper[4776]: I1011 11:04:12.835983 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 11 11:04:12.840266 master-0 kubenswrapper[4790]: I1011 11:04:12.839993 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-bp8q6"] Oct 11 11:04:12.844508 master-0 kubenswrapper[4790]: I1011 11:04:12.844425 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:12.849924 master-1 kubenswrapper[4771]: I1011 11:04:12.849701 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-jwl5z"] Oct 11 11:04:12.850035 master-0 kubenswrapper[4790]: I1011 11:04:12.849975 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 11 11:04:12.851209 master-0 kubenswrapper[4790]: I1011 11:04:12.851187 4790 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 11 11:04:12.854874 master-1 kubenswrapper[4771]: I1011 11:04:12.854796 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.854890 master-2 kubenswrapper[4776]: I1011 11:04:12.854820 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:12.860931 master-1 kubenswrapper[4771]: I1011 11:04:12.860867 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 11 11:04:12.861139 master-1 kubenswrapper[4771]: I1011 11:04:12.861094 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 11 11:04:12.864276 master-1 kubenswrapper[4771]: I1011 11:04:12.864216 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-jwl5z"] Oct 11 11:04:12.885327 master-0 kubenswrapper[4790]: I1011 11:04:12.885274 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-bp8q6"] Oct 11 11:04:12.897200 master-1 kubenswrapper[4771]: I1011 11:04:12.897142 4771 generic.go:334] "Generic (PLEG): container finished" podID="c02195a5-59ae-446e-9984-abafa5c03ce5" containerID="f78da3624b576ac119751eab0fd297c9217f5f3b2082190b2dcca93bfd67d131" exitCode=0 Oct 11 11:04:12.897443 master-1 kubenswrapper[4771]: I1011 11:04:12.897209 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6frc" event={"ID":"c02195a5-59ae-446e-9984-abafa5c03ce5","Type":"ContainerDied","Data":"f78da3624b576ac119751eab0fd297c9217f5f3b2082190b2dcca93bfd67d131"} Oct 11 11:04:12.941130 master-2 kubenswrapper[4776]: I1011 11:04:12.941023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942066 master-2 kubenswrapper[4776]: I1011 11:04:12.942023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942149 master-2 kubenswrapper[4776]: I1011 11:04:12.942070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942197 master-2 kubenswrapper[4776]: I1011 11:04:12.942172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942275 master-2 kubenswrapper[4776]: I1011 11:04:12.942227 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942331 master-2 kubenswrapper[4776]: I1011 11:04:12.942302 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.953830 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2de0e65e-d3d5-4b50-939a-afc198469db7-hm-ports\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.953963 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-amphora-certs\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.953994 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-scripts\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.954134 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data-merged\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.954247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-combined-ca-bundle\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.954471 master-1 kubenswrapper[4771]: I1011 11:04:12.954345 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:12.957975 master-0 kubenswrapper[4790]: I1011 11:04:12.957907 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-8wwvc"] Oct 11 11:04:13.014933 master-0 kubenswrapper[4790]: I1011 11:04:13.014862 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-amphora-certs\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.015148 master-0 kubenswrapper[4790]: I1011 11:04:13.014993 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-scripts\") pod \"octavia-worker-bp8q6\" (UID: 
\"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.015148 master-0 kubenswrapper[4790]: I1011 11:04:13.015063 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-combined-ca-bundle\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.015148 master-0 kubenswrapper[4790]: I1011 11:04:13.015099 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data-merged\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.015148 master-0 kubenswrapper[4790]: I1011 11:04:13.015124 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/943bdd13-9252-4c87-b669-9cf3f566e2ec-hm-ports\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.015148 master-0 kubenswrapper[4790]: I1011 11:04:13.015149 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044002 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" 
(UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044066 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044157 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 master-2 kubenswrapper[4776]: I1011 11:04:13.044195 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 master-2 kubenswrapper[4776]: I1011 11:04:13.044233 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 
master-2 kubenswrapper[4776]: I1011 11:04:13.044523 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.045795 master-2 kubenswrapper[4776]: I1011 11:04:13.045741 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.047974 master-2 kubenswrapper[4776]: I1011 11:04:13.047916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.049574 master-2 kubenswrapper[4776]: I1011 11:04:13.049513 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.050014 master-2 kubenswrapper[4776]: I1011 11:04:13.049973 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.050771 master-2 kubenswrapper[4776]: I1011 11:04:13.050699 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.057049 master-1 kubenswrapper[4771]: I1011 11:04:13.056902 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.057489 master-1 kubenswrapper[4771]: I1011 11:04:13.057157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2de0e65e-d3d5-4b50-939a-afc198469db7-hm-ports\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.057489 master-1 kubenswrapper[4771]: I1011 11:04:13.057266 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-amphora-certs\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.057489 master-1 kubenswrapper[4771]: I1011 11:04:13.057317 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-scripts\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.057489 master-1 kubenswrapper[4771]: I1011 11:04:13.057350 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data-merged\") pod 
\"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.057489 master-1 kubenswrapper[4771]: I1011 11:04:13.057407 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-combined-ca-bundle\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.058682 master-1 kubenswrapper[4771]: I1011 11:04:13.058620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data-merged\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.060251 master-1 kubenswrapper[4771]: I1011 11:04:13.060193 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2de0e65e-d3d5-4b50-939a-afc198469db7-hm-ports\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.062190 master-1 kubenswrapper[4771]: I1011 11:04:13.062148 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-scripts\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.063128 master-1 kubenswrapper[4771]: I1011 11:04:13.063081 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-amphora-certs\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 
11:04:13.064336 master-1 kubenswrapper[4771]: I1011 11:04:13.064307 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-combined-ca-bundle\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.065232 master-1 kubenswrapper[4771]: I1011 11:04:13.065203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de0e65e-d3d5-4b50-939a-afc198469db7-config-data\") pod \"octavia-worker-jwl5z\" (UID: \"2de0e65e-d3d5-4b50-939a-afc198469db7\") " pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.117481 master-0 kubenswrapper[4790]: I1011 11:04:13.117313 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-scripts\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.117481 master-0 kubenswrapper[4790]: I1011 11:04:13.117445 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-combined-ca-bundle\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.117750 master-0 kubenswrapper[4790]: I1011 11:04:13.117502 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data-merged\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.117750 master-0 kubenswrapper[4790]: I1011 11:04:13.117550 4790 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/943bdd13-9252-4c87-b669-9cf3f566e2ec-hm-ports\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.117750 master-0 kubenswrapper[4790]: I1011 11:04:13.117588 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.117750 master-0 kubenswrapper[4790]: I1011 11:04:13.117627 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-amphora-certs\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.119256 master-0 kubenswrapper[4790]: I1011 11:04:13.119222 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data-merged\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.121447 master-0 kubenswrapper[4790]: I1011 11:04:13.121406 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/943bdd13-9252-4c87-b669-9cf3f566e2ec-hm-ports\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.121936 master-0 kubenswrapper[4790]: I1011 11:04:13.121883 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-combined-ca-bundle\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.122256 master-0 kubenswrapper[4790]: I1011 11:04:13.122240 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-config-data\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.123411 master-0 kubenswrapper[4790]: I1011 11:04:13.123340 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-amphora-certs\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.124014 master-0 kubenswrapper[4790]: I1011 11:04:13.123978 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/943bdd13-9252-4c87-b669-9cf3f566e2ec-scripts\") pod \"octavia-worker-bp8q6\" (UID: \"943bdd13-9252-4c87-b669-9cf3f566e2ec\") " pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.175959 master-2 kubenswrapper[4776]: I1011 11:04:13.175825 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.211113 master-1 kubenswrapper[4771]: I1011 11:04:13.210697 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-jwl5z" Oct 11 11:04:13.240227 master-2 kubenswrapper[4776]: I1011 11:04:13.240126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"d48802fad657594e7d8c88578a13ecd2e12055388ddf6c73c39497b6aaad00d7"} Oct 11 11:04:13.241178 master-2 kubenswrapper[4776]: I1011 11:04:13.240437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:13.279494 master-2 kubenswrapper[4776]: I1011 11:04:13.279429 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-4h92c" podStartSLOduration=5.279410701 podStartE2EDuration="5.279410701s" podCreationTimestamp="2025-10-11 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:13.274331374 +0000 UTC m=+2288.058758083" watchObservedRunningTime="2025-10-11 11:04:13.279410701 +0000 UTC m=+2288.063837400" Oct 11 11:04:13.305480 master-0 kubenswrapper[4790]: I1011 11:04:13.305394 4790 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-bp8q6" Oct 11 11:04:13.592991 master-0 kubenswrapper[4790]: I1011 11:04:13.592906 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-8wwvc" event={"ID":"cec8d92b-373a-45a6-926a-6e2ae2a2645d","Type":"ContainerStarted","Data":"f8b26c91595fdeca6125ac3fec47c1996dcc05da06ddd08630f24201c2b98afe"} Oct 11 11:04:13.600062 master-0 kubenswrapper[4790]: I1011 11:04:13.599548 4790 generic.go:334] "Generic (PLEG): container finished" podID="2b23d0bf-c533-41d4-aa06-4cbb6bcda90d" containerID="197ff42e3cee7c14a2bd909bd444bf54fe56101f70a5136c28611d688198512d" exitCode=0 Oct 11 11:04:13.600062 master-0 kubenswrapper[4790]: I1011 11:04:13.599626 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-wmmzh" event={"ID":"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d","Type":"ContainerDied","Data":"197ff42e3cee7c14a2bd909bd444bf54fe56101f70a5136c28611d688198512d"} Oct 11 11:04:13.734762 master-2 kubenswrapper[4776]: I1011 11:04:13.734319 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:13.748588 master-2 kubenswrapper[4776]: W1011 11:04:13.748537 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165afc87_c932_440a_931b_99e339c2b038.slice/crio-8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420 WatchSource:0}: Error finding container 8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420: Status 404 returned error can't find the container with id 8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420 Oct 11 11:04:13.817442 master-1 kubenswrapper[4771]: I1011 11:04:13.814912 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-jwl5z"] Oct 11 11:04:13.818640 master-1 kubenswrapper[4771]: W1011 11:04:13.818589 4771 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2de0e65e_d3d5_4b50_939a_afc198469db7.slice/crio-620af9e2516377bf5ba951b3c9e51790ef07e4d8539330389c86481c3c629618 WatchSource:0}: Error finding container 620af9e2516377bf5ba951b3c9e51790ef07e4d8539330389c86481c3c629618: Status 404 returned error can't find the container with id 620af9e2516377bf5ba951b3c9e51790ef07e4d8539330389c86481c3c629618 Oct 11 11:04:13.872681 master-0 kubenswrapper[4790]: I1011 11:04:13.872585 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-bp8q6"] Oct 11 11:04:13.874316 master-0 kubenswrapper[4790]: W1011 11:04:13.874213 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod943bdd13_9252_4c87_b669_9cf3f566e2ec.slice/crio-f515c27a0294a207e428aa07aa8c085866d42831b8cf9ed0b011d7785d14e28d WatchSource:0}: Error finding container f515c27a0294a207e428aa07aa8c085866d42831b8cf9ed0b011d7785d14e28d: Status 404 returned error can't find the container with id f515c27a0294a207e428aa07aa8c085866d42831b8cf9ed0b011d7785d14e28d Oct 11 11:04:13.908381 master-1 kubenswrapper[4771]: I1011 11:04:13.908270 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-jwl5z" event={"ID":"2de0e65e-d3d5-4b50-939a-afc198469db7","Type":"ContainerStarted","Data":"620af9e2516377bf5ba951b3c9e51790ef07e4d8539330389c86481c3c629618"} Oct 11 11:04:13.910495 master-1 kubenswrapper[4771]: I1011 11:04:13.910346 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6frc" event={"ID":"c02195a5-59ae-446e-9984-abafa5c03ce5","Type":"ContainerStarted","Data":"8187c3e5dc58271c299604a12745842c20da91d99b4e5d0c317b8819d82e8696"} Oct 11 11:04:13.910905 master-1 kubenswrapper[4771]: I1011 11:04:13.910836 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-j6frc" Oct 11 
11:04:13.952686 master-1 kubenswrapper[4771]: I1011 11:04:13.952595 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-j6frc" podStartSLOduration=5.95257079 podStartE2EDuration="5.95257079s" podCreationTimestamp="2025-10-11 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:13.950982945 +0000 UTC m=+2285.925209396" watchObservedRunningTime="2025-10-11 11:04:13.95257079 +0000 UTC m=+2285.926797231" Oct 11 11:04:14.072183 master-0 kubenswrapper[4790]: I1011 11:04:14.072075 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jmmst"] Oct 11 11:04:14.079376 master-0 kubenswrapper[4790]: I1011 11:04:14.079300 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-l7rcp"] Oct 11 11:04:14.087908 master-0 kubenswrapper[4790]: I1011 11:04:14.087760 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jmmst"] Oct 11 11:04:14.091793 master-0 kubenswrapper[4790]: I1011 11:04:14.091724 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-l7rcp"] Oct 11 11:04:14.265198 master-2 kubenswrapper[4776]: I1011 11:04:14.264352 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerStarted","Data":"8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420"} Oct 11 11:04:14.312292 master-0 kubenswrapper[4790]: I1011 11:04:14.312120 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40b588a-5009-41c8-b8b0-b417de6693ac" path="/var/lib/kubelet/pods/d40b588a-5009-41c8-b8b0-b417de6693ac/volumes" Oct 11 11:04:14.313755 master-0 kubenswrapper[4790]: I1011 11:04:14.313696 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dfe51cce-787f-4883-8b8f-f1ed50caa3d3" path="/var/lib/kubelet/pods/dfe51cce-787f-4883-8b8f-f1ed50caa3d3/volumes" Oct 11 11:04:14.515379 master-1 kubenswrapper[4771]: I1011 11:04:14.515018 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-hjq8j"] Oct 11 11:04:14.622499 master-0 kubenswrapper[4790]: I1011 11:04:14.622425 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bp8q6" event={"ID":"943bdd13-9252-4c87-b669-9cf3f566e2ec","Type":"ContainerStarted","Data":"f515c27a0294a207e428aa07aa8c085866d42831b8cf9ed0b011d7785d14e28d"} Oct 11 11:04:14.629631 master-0 kubenswrapper[4790]: I1011 11:04:14.629562 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-wmmzh" event={"ID":"2b23d0bf-c533-41d4-aa06-4cbb6bcda90d","Type":"ContainerStarted","Data":"c907e97fe44860901f875d6d455a963e45f0d2b6a7ae64e83df0f6564e422b34"} Oct 11 11:04:14.630930 master-0 kubenswrapper[4790]: I1011 11:04:14.630887 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-wmmzh" Oct 11 11:04:14.658421 master-0 kubenswrapper[4790]: I1011 11:04:14.658343 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-wmmzh" podStartSLOduration=6.658322955 podStartE2EDuration="6.658322955s" podCreationTimestamp="2025-10-11 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:14.654265474 +0000 UTC m=+1531.208725776" watchObservedRunningTime="2025-10-11 11:04:14.658322955 +0000 UTC m=+1531.212783247" Oct 11 11:04:14.924374 master-1 kubenswrapper[4771]: I1011 11:04:14.923986 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-hjq8j" 
event={"ID":"4147847f-c490-4bc6-a2da-298d3a3e188b","Type":"ContainerStarted","Data":"3a732fe908f149e77768cda1e245a5bbde89d124df44b7c8f34da2dbc422e246"} Oct 11 11:04:15.043863 master-0 kubenswrapper[4790]: I1011 11:04:15.043817 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-7vsxp"] Oct 11 11:04:15.056687 master-0 kubenswrapper[4790]: I1011 11:04:15.056589 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-7vsxp"] Oct 11 11:04:15.273059 master-2 kubenswrapper[4776]: I1011 11:04:15.272989 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13"} Oct 11 11:04:15.642779 master-0 kubenswrapper[4790]: I1011 11:04:15.642652 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-8wwvc" event={"ID":"cec8d92b-373a-45a6-926a-6e2ae2a2645d","Type":"ContainerStarted","Data":"41a01c86c951fd5fa24d58edf7651edb8ffe47ee4cc4c4a17c703d5b3d2e1523"} Oct 11 11:04:16.282952 master-2 kubenswrapper[4776]: I1011 11:04:16.282917 4776 generic.go:334] "Generic (PLEG): container finished" podID="b714c56b-9901-470b-ba8d-790c638ddd43" containerID="bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13" exitCode=0 Oct 11 11:04:16.283877 master-2 kubenswrapper[4776]: I1011 11:04:16.282962 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerDied","Data":"bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13"} Oct 11 11:04:16.306681 master-0 kubenswrapper[4790]: I1011 11:04:16.306603 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58f3b14-e8da-4046-afb1-c376a65ef16e" path="/var/lib/kubelet/pods/d58f3b14-e8da-4046-afb1-c376a65ef16e/volumes" Oct 11 
11:04:16.653650 master-0 kubenswrapper[4790]: I1011 11:04:16.653443 4790 generic.go:334] "Generic (PLEG): container finished" podID="cec8d92b-373a-45a6-926a-6e2ae2a2645d" containerID="41a01c86c951fd5fa24d58edf7651edb8ffe47ee4cc4c4a17c703d5b3d2e1523" exitCode=0 Oct 11 11:04:16.653650 master-0 kubenswrapper[4790]: I1011 11:04:16.653528 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-8wwvc" event={"ID":"cec8d92b-373a-45a6-926a-6e2ae2a2645d","Type":"ContainerDied","Data":"41a01c86c951fd5fa24d58edf7651edb8ffe47ee4cc4c4a17c703d5b3d2e1523"} Oct 11 11:04:16.656444 master-0 kubenswrapper[4790]: I1011 11:04:16.656367 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bp8q6" event={"ID":"943bdd13-9252-4c87-b669-9cf3f566e2ec","Type":"ContainerStarted","Data":"ea93697b9c384496775b0ae62c141aa148f53ec6c485357004d0d0c95b051d40"} Oct 11 11:04:16.951992 master-1 kubenswrapper[4771]: I1011 11:04:16.951911 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-hjq8j" event={"ID":"4147847f-c490-4bc6-a2da-298d3a3e188b","Type":"ContainerStarted","Data":"785f888f163f9a4ad23b11870a89d933866aaf82fd9811e43f9ca7e883e62d16"} Oct 11 11:04:17.045485 master-2 kubenswrapper[4776]: I1011 11:04:17.045376 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 11:04:17.056153 master-2 kubenswrapper[4776]: I1011 11:04:17.056102 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 11:04:17.318856 master-2 kubenswrapper[4776]: I1011 11:04:17.318287 4776 scope.go:117] "RemoveContainer" containerID="c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5" Oct 11 11:04:17.322749 master-2 kubenswrapper[4776]: I1011 11:04:17.322660 4776 generic.go:334] "Generic (PLEG): container finished" podID="165afc87-c932-440a-931b-99e339c2b038" 
containerID="bc0eb0d1dc5809dc09f96697cede8293b058768a79846386115117b790c89478" exitCode=0 Oct 11 11:04:17.322869 master-2 kubenswrapper[4776]: I1011 11:04:17.322781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerDied","Data":"bc0eb0d1dc5809dc09f96697cede8293b058768a79846386115117b790c89478"} Oct 11 11:04:17.326409 master-2 kubenswrapper[4776]: I1011 11:04:17.326272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"bb9a428e96576310c4fd673f948b0b1f89e5c1d96bde39469e92168f764a0b5a"} Oct 11 11:04:17.326476 master-2 kubenswrapper[4776]: I1011 11:04:17.326434 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:17.342703 master-2 kubenswrapper[4776]: I1011 11:04:17.342658 4776 scope.go:117] "RemoveContainer" containerID="fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" Oct 11 11:04:17.381073 master-2 kubenswrapper[4776]: I1011 11:04:17.381009 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-dsjdj" podStartSLOduration=4.775390734 podStartE2EDuration="7.380990891s" podCreationTimestamp="2025-10-11 11:04:10 +0000 UTC" firstStartedPulling="2025-10-11 11:04:11.931054145 +0000 UTC m=+2286.715480854" lastFinishedPulling="2025-10-11 11:04:14.536654302 +0000 UTC m=+2289.321081011" observedRunningTime="2025-10-11 11:04:17.379416969 +0000 UTC m=+2292.163843688" watchObservedRunningTime="2025-10-11 11:04:17.380990891 +0000 UTC m=+2292.165417600" Oct 11 11:04:17.451452 master-2 kubenswrapper[4776]: I1011 11:04:17.451410 4776 scope.go:117] "RemoveContainer" containerID="6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa" Oct 11 11:04:17.473351 master-2 kubenswrapper[4776]: I1011 
11:04:17.473295 4776 scope.go:117] "RemoveContainer" containerID="8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" Oct 11 11:04:17.669410 master-0 kubenswrapper[4790]: I1011 11:04:17.669329 4790 generic.go:334] "Generic (PLEG): container finished" podID="943bdd13-9252-4c87-b669-9cf3f566e2ec" containerID="ea93697b9c384496775b0ae62c141aa148f53ec6c485357004d0d0c95b051d40" exitCode=0 Oct 11 11:04:17.670055 master-0 kubenswrapper[4790]: I1011 11:04:17.669429 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bp8q6" event={"ID":"943bdd13-9252-4c87-b669-9cf3f566e2ec","Type":"ContainerDied","Data":"ea93697b9c384496775b0ae62c141aa148f53ec6c485357004d0d0c95b051d40"} Oct 11 11:04:17.681224 master-0 kubenswrapper[4790]: I1011 11:04:17.681155 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-8wwvc" event={"ID":"cec8d92b-373a-45a6-926a-6e2ae2a2645d","Type":"ContainerStarted","Data":"3ff4efd92c1cfb0a3fce962e4c178a773e76ade548c2a57f01380d3b0f25f161"} Oct 11 11:04:17.682162 master-0 kubenswrapper[4790]: I1011 11:04:17.682128 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-8wwvc" Oct 11 11:04:17.796419 master-0 kubenswrapper[4790]: I1011 11:04:17.796281 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-8wwvc" podStartSLOduration=5.22956999 podStartE2EDuration="6.796259991s" podCreationTimestamp="2025-10-11 11:04:11 +0000 UTC" firstStartedPulling="2025-10-11 11:04:12.974605995 +0000 UTC m=+1529.529066287" lastFinishedPulling="2025-10-11 11:04:14.541295996 +0000 UTC m=+1531.095756288" observedRunningTime="2025-10-11 11:04:17.794509013 +0000 UTC m=+1534.348969325" watchObservedRunningTime="2025-10-11 11:04:17.796259991 +0000 UTC m=+1534.350720283" Oct 11 11:04:17.962562 master-1 kubenswrapper[4771]: I1011 11:04:17.962500 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="2de0e65e-d3d5-4b50-939a-afc198469db7" containerID="775bdd8e43e99152441a14ec4179896b210af621ae7682d1c65a8fc5e6b85b65" exitCode=0 Oct 11 11:04:17.963395 master-1 kubenswrapper[4771]: I1011 11:04:17.962862 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-jwl5z" event={"ID":"2de0e65e-d3d5-4b50-939a-afc198469db7","Type":"ContainerDied","Data":"775bdd8e43e99152441a14ec4179896b210af621ae7682d1c65a8fc5e6b85b65"} Oct 11 11:04:17.966743 master-1 kubenswrapper[4771]: I1011 11:04:17.966715 4771 generic.go:334] "Generic (PLEG): container finished" podID="4147847f-c490-4bc6-a2da-298d3a3e188b" containerID="785f888f163f9a4ad23b11870a89d933866aaf82fd9811e43f9ca7e883e62d16" exitCode=0 Oct 11 11:04:17.966875 master-1 kubenswrapper[4771]: I1011 11:04:17.966756 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-hjq8j" event={"ID":"4147847f-c490-4bc6-a2da-298d3a3e188b","Type":"ContainerDied","Data":"785f888f163f9a4ad23b11870a89d933866aaf82fd9811e43f9ca7e883e62d16"} Oct 11 11:04:18.070665 master-2 kubenswrapper[4776]: I1011 11:04:18.070613 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" path="/var/lib/kubelet/pods/25099d7a-e434-48d2-a175-088e5ad2caf2/volumes" Oct 11 11:04:18.350901 master-2 kubenswrapper[4776]: I1011 11:04:18.343649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerStarted","Data":"d973db2c223668b8c8f8737e8e4a20b4e7eeeb9df8c1114502eb718a02027190"} Oct 11 11:04:18.350901 master-2 kubenswrapper[4776]: I1011 11:04:18.343745 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:18.391955 master-2 kubenswrapper[4776]: I1011 11:04:18.391843 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-2h6nw" 
podStartSLOduration=4.20683001 podStartE2EDuration="6.391824872s" podCreationTimestamp="2025-10-11 11:04:12 +0000 UTC" firstStartedPulling="2025-10-11 11:04:13.750695608 +0000 UTC m=+2288.535122317" lastFinishedPulling="2025-10-11 11:04:15.93569047 +0000 UTC m=+2290.720117179" observedRunningTime="2025-10-11 11:04:18.385333727 +0000 UTC m=+2293.169760436" watchObservedRunningTime="2025-10-11 11:04:18.391824872 +0000 UTC m=+2293.176251581"
Oct 11 11:04:18.702591 master-0 kubenswrapper[4790]: I1011 11:04:18.702434 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bp8q6" event={"ID":"943bdd13-9252-4c87-b669-9cf3f566e2ec","Type":"ContainerStarted","Data":"6a1fa28481cc708f6993ce48dafaee0a728e36376439b0a4cc765b441e1c044e"}
Oct 11 11:04:18.735081 master-0 kubenswrapper[4790]: I1011 11:04:18.734952 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-bp8q6" podStartSLOduration=4.585599372 podStartE2EDuration="6.734928707s" podCreationTimestamp="2025-10-11 11:04:12 +0000 UTC" firstStartedPulling="2025-10-11 11:04:13.878980384 +0000 UTC m=+1530.433440676" lastFinishedPulling="2025-10-11 11:04:16.028309719 +0000 UTC m=+1532.582770011" observedRunningTime="2025-10-11 11:04:18.731270397 +0000 UTC m=+1535.285730709" watchObservedRunningTime="2025-10-11 11:04:18.734928707 +0000 UTC m=+1535.289388999"
Oct 11 11:04:19.000080 master-1 kubenswrapper[4771]: I1011 11:04:19.000004 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-hjq8j" event={"ID":"4147847f-c490-4bc6-a2da-298d3a3e188b","Type":"ContainerStarted","Data":"afe19be260ccae21154710d06bcbde1d2b07e18920105bb8499fcf325b4e6df4"}
Oct 11 11:04:19.003475 master-1 kubenswrapper[4771]: I1011 11:04:19.003412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-jwl5z" event={"ID":"2de0e65e-d3d5-4b50-939a-afc198469db7","Type":"ContainerStarted","Data":"2ccf70b97f25e460b07e34a6a246840a964d663547005c15c82774fd6b717cf5"}
Oct 11 11:04:19.004278 master-1 kubenswrapper[4771]: I1011 11:04:19.004222 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-jwl5z"
Oct 11 11:04:19.045556 master-1 kubenswrapper[4771]: I1011 11:04:19.045417 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-hjq8j" podStartSLOduration=6.692584599 podStartE2EDuration="8.045392016s" podCreationTimestamp="2025-10-11 11:04:11 +0000 UTC" firstStartedPulling="2025-10-11 11:04:14.525178289 +0000 UTC m=+2286.499404720" lastFinishedPulling="2025-10-11 11:04:15.877985696 +0000 UTC m=+2287.852212137" observedRunningTime="2025-10-11 11:04:19.03574455 +0000 UTC m=+2291.009971011" watchObservedRunningTime="2025-10-11 11:04:19.045392016 +0000 UTC m=+2291.019618477"
Oct 11 11:04:19.067681 master-1 kubenswrapper[4771]: I1011 11:04:19.067052 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-jwl5z" podStartSLOduration=4.46827366 podStartE2EDuration="7.067024215s" podCreationTimestamp="2025-10-11 11:04:12 +0000 UTC" firstStartedPulling="2025-10-11 11:04:13.822881951 +0000 UTC m=+2285.797108392" lastFinishedPulling="2025-10-11 11:04:16.421632506 +0000 UTC m=+2288.395858947" observedRunningTime="2025-10-11 11:04:19.065470481 +0000 UTC m=+2291.039696952" watchObservedRunningTime="2025-10-11 11:04:19.067024215 +0000 UTC m=+2291.041250656"
Oct 11 11:04:19.713594 master-0 kubenswrapper[4790]: I1011 11:04:19.713498 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-bp8q6"
Oct 11 11:04:20.013387 master-1 kubenswrapper[4771]: I1011 11:04:20.013289 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-hjq8j"
Oct 11 11:04:21.041324 master-2 kubenswrapper[4776]: I1011 11:04:21.041271 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8dnfj"]
Oct 11 11:04:21.048032 master-2 kubenswrapper[4776]: I1011 11:04:21.047977 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8dnfj"]
Oct 11 11:04:22.071342 master-2 kubenswrapper[4776]: I1011 11:04:22.071298 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" path="/var/lib/kubelet/pods/e873bed5-1a50-4fb0-81b1-2225f4893b28/volumes"
Oct 11 11:04:23.777002 master-1 kubenswrapper[4771]: I1011 11:04:23.776913 4771 scope.go:117] "RemoveContainer" containerID="497e015434504b4db642357797a1c623d7b35238dcc0952d89c6a79885be7010"
Oct 11 11:04:24.142192 master-0 kubenswrapper[4790]: I1011 11:04:24.142088 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-wmmzh"
Oct 11 11:04:24.153207 master-2 kubenswrapper[4776]: I1011 11:04:24.153151 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-4h92c"
Oct 11 11:04:24.163753 master-1 kubenswrapper[4771]: I1011 11:04:24.163638 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-j6frc"
Oct 11 11:04:26.054309 master-0 kubenswrapper[4790]: I1011 11:04:26.054232 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-gvzlv"]
Oct 11 11:04:26.066948 master-0 kubenswrapper[4790]: I1011 11:04:26.066754 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-gvzlv"]
Oct 11 11:04:26.307269 master-0 kubenswrapper[4790]: I1011 11:04:26.307129 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2137512f-c759-4935-944d-48248c99c2ec" path="/var/lib/kubelet/pods/2137512f-c759-4935-944d-48248c99c2ec/volumes"
Oct 11 11:04:26.369393 master-2 kubenswrapper[4776]: I1011 11:04:26.369246 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-dsjdj"
Oct 11 11:04:26.407520 master-1 kubenswrapper[4771]: I1011 11:04:26.407453 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-hjq8j"
Oct 11 11:04:26.437891 master-0 kubenswrapper[4790]: I1011 11:04:26.437828 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-8wwvc"
Oct 11 11:04:28.207312 master-2 kubenswrapper[4776]: I1011 11:04:28.207260 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-2h6nw"
Oct 11 11:04:28.241119 master-1 kubenswrapper[4771]: I1011 11:04:28.241045 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-jwl5z"
Oct 11 11:04:28.335014 master-0 kubenswrapper[4790]: I1011 11:04:28.334943 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-bp8q6"
Oct 11 11:04:33.062529 master-0 kubenswrapper[4790]: I1011 11:04:33.062204 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b634-account-create-vb2w7"]
Oct 11 11:04:33.064386 master-2 kubenswrapper[4776]: I1011 11:04:33.064319 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5pz76"]
Oct 11 11:04:33.070926 master-0 kubenswrapper[4790]: I1011 11:04:33.070852 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-9ac8-account-create-r5rxs"]
Oct 11 11:04:33.073979 master-2 kubenswrapper[4776]: I1011 11:04:33.073920 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5pz76"]
Oct 11 11:04:33.077644 master-0 kubenswrapper[4790]: I1011 11:04:33.077551 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2033-account-create-jh9gc"]
Oct 11 11:04:33.094253 master-0 kubenswrapper[4790]: I1011 11:04:33.094115 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b634-account-create-vb2w7"]
Oct 11 11:04:33.094253 master-0 kubenswrapper[4790]: I1011 11:04:33.094218 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2033-account-create-jh9gc"]
Oct 11 11:04:33.101080 master-0 kubenswrapper[4790]: I1011 11:04:33.101020 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-9ac8-account-create-r5rxs"]
Oct 11 11:04:34.069666 master-2 kubenswrapper[4776]: I1011 11:04:34.069608 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" path="/var/lib/kubelet/pods/7cef6f34-fa11-4593-b4d8-c9ac415f1967/volumes"
Oct 11 11:04:34.313336 master-0 kubenswrapper[4790]: I1011 11:04:34.313198 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a325c6-b9b6-495b-87dc-d6e12b3f1029" path="/var/lib/kubelet/pods/08a325c6-b9b6-495b-87dc-d6e12b3f1029/volumes"
Oct 11 11:04:34.314538 master-0 kubenswrapper[4790]: I1011 11:04:34.314472 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ddf95f-6e9c-4f3c-b742-87379c6594b2" path="/var/lib/kubelet/pods/09ddf95f-6e9c-4f3c-b742-87379c6594b2/volumes"
Oct 11 11:04:34.315648 master-0 kubenswrapper[4790]: I1011 11:04:34.315581 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd4a60e-f24a-48fe-afcb-c7ccab615f69" path="/var/lib/kubelet/pods/cdd4a60e-f24a-48fe-afcb-c7ccab615f69/volumes"
Oct 11 11:04:36.042564 master-2 kubenswrapper[4776]: I1011 11:04:36.042504 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xptqx"]
Oct 11 11:04:36.058133 master-0 kubenswrapper[4790]: I1011 11:04:36.058039 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-e1bf-account-create-9qds8"]
Oct 11 11:04:36.065490 master-0 kubenswrapper[4790]: I1011 11:04:36.065417 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-e1bf-account-create-9qds8"]
Oct 11 11:04:36.071797 master-2 kubenswrapper[4776]: I1011 11:04:36.071727 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xptqx"]
Oct 11 11:04:36.305950 master-0 kubenswrapper[4790]: I1011 11:04:36.305887 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b3f6bf-ef4b-41fa-b098-fc5620a92300" path="/var/lib/kubelet/pods/03b3f6bf-ef4b-41fa-b098-fc5620a92300/volumes"
Oct 11 11:04:38.069182 master-2 kubenswrapper[4776]: I1011 11:04:38.069051 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" path="/var/lib/kubelet/pods/30996a86-1b86-4a67-bfea-0e63f7417196/volumes"
Oct 11 11:04:44.945331 master-0 kubenswrapper[4790]: I1011 11:04:44.945238 4790 scope.go:117] "RemoveContainer" containerID="050e70bb3b6a03db41f9fcb784b5238c3ff9d94ed85c503d6f9f58f7bd27daa0"
Oct 11 11:04:44.976955 master-0 kubenswrapper[4790]: I1011 11:04:44.976887 4790 scope.go:117] "RemoveContainer" containerID="4a77a0e25a1bbd76eb350e88d6052fb5f4963ac556fb275beeaf9d30c06320df"
Oct 11 11:04:45.035420 master-0 kubenswrapper[4790]: I1011 11:04:45.035258 4790 scope.go:117] "RemoveContainer" containerID="3b1f890304ef9d089479614d38d81aabcf093a42157b52f334fa1a99c8f86aa6"
Oct 11 11:04:45.072371 master-0 kubenswrapper[4790]: I1011 11:04:45.072316 4790 scope.go:117] "RemoveContainer" containerID="07ec7b09db6fb5294fadc7bd8337f6b789e9b95a2303619665336b8735fa4bfe"
Oct 11 11:04:45.096264 master-0 kubenswrapper[4790]: I1011 11:04:45.096218 4790 scope.go:117] "RemoveContainer" containerID="f18d3e7808bc3bd6d8d3dfcffe3def526d4ab16b836ac39e9bb14dfceb0d8247"
Oct 11 11:04:45.128765 master-0 kubenswrapper[4790]: I1011 11:04:45.128736 4790 scope.go:117] "RemoveContainer" containerID="fd9735379426d4418da18546aac8b7806a6015a386e483f957b980d675840314"
Oct 11 11:04:45.158745 master-0 kubenswrapper[4790]: I1011 11:04:45.158602 4790 scope.go:117] "RemoveContainer" containerID="677c51ee7ba248bdecdce7b7bb9d050175056a091f08201d76d54e3406eb2697"
Oct 11 11:04:45.180579 master-0 kubenswrapper[4790]: I1011 11:04:45.180449 4790 scope.go:117] "RemoveContainer" containerID="1cde782f190214155e020d24bbe2e2d5c9f2dc24b3fea8e9236ee944da092a1c"
Oct 11 11:04:45.202289 master-0 kubenswrapper[4790]: I1011 11:04:45.202208 4790 scope.go:117] "RemoveContainer" containerID="fa9c7f461b0e315bcd532cda39de483b7f3baaed2714bed160ee9f75fc0f43db"
Oct 11 11:04:45.223748 master-0 kubenswrapper[4790]: I1011 11:04:45.223677 4790 scope.go:117] "RemoveContainer" containerID="6c3235d5d6bf2dc75b5b651ccfac846d26bd4957ea4e687fad602fa916728d6b"
Oct 11 11:04:45.242387 master-0 kubenswrapper[4790]: I1011 11:04:45.242353 4790 scope.go:117] "RemoveContainer" containerID="5ca20afffe15faa31f5c2c1443a96be8fe5b0268275280368238f1f4b32ef4f2"
Oct 11 11:04:47.307593 master-2 kubenswrapper[4776]: I1011 11:04:47.307522 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"]
Oct 11 11:04:47.309141 master-2 kubenswrapper[4776]: I1011 11:04:47.309093 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.312364 master-2 kubenswrapper[4776]: I1011 11:04:47.312305 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 11:04:47.312473 master-2 kubenswrapper[4776]: I1011 11:04:47.312305 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 11:04:47.312933 master-2 kubenswrapper[4776]: I1011 11:04:47.312898 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 11:04:47.313066 master-2 kubenswrapper[4776]: I1011 11:04:47.312943 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 11:04:47.313066 master-2 kubenswrapper[4776]: I1011 11:04:47.313037 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"networkers"
Oct 11 11:04:47.313549 master-2 kubenswrapper[4776]: I1011 11:04:47.313423 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 11 11:04:47.326844 master-2 kubenswrapper[4776]: I1011 11:04:47.323359 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"]
Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.392971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.393025 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.393047 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393080 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393119 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393150 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393388 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495455 master-2 kubenswrapper[4776]: I1011 11:04:47.495383 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495729 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.497663 master-2 kubenswrapper[4776]: I1011 11:04:47.497609 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.498620 master-2 kubenswrapper[4776]: I1011 11:04:47.498416 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.498984 master-2 kubenswrapper[4776]: I1011 11:04:47.498942 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.499311 master-2 kubenswrapper[4776]: I1011 11:04:47.499256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.500333 master-2 kubenswrapper[4776]: I1011 11:04:47.500270 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.500490 master-2 kubenswrapper[4776]: I1011 11:04:47.500274 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.521353 master-2 kubenswrapper[4776]: I1011 11:04:47.521214 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:47.627961 master-2 kubenswrapper[4776]: I1011 11:04:47.627770 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:48.125638 master-2 kubenswrapper[4776]: I1011 11:04:48.125576 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"]
Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614019 4776 generic.go:334] "Generic (PLEG): container finished" podID="a63f7af9-5ea2-4091-901f-6d9187377785" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" exitCode=0
Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614081 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22"}
Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614105 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerStarted","Data":"7e86f6a0692217b02770a72e3b5288c4aba15fbb51a8be198df074c909a4ebea"}
Oct 11 11:04:49.623833 master-2 kubenswrapper[4776]: I1011 11:04:49.623698 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerStarted","Data":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"}
Oct 11 11:04:49.624462 master-2 kubenswrapper[4776]: I1011 11:04:49.623869 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:49.658980 master-2 kubenswrapper[4776]: I1011 11:04:49.658875 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" podStartSLOduration=2.658850683 podStartE2EDuration="2.658850683s" podCreationTimestamp="2025-10-11 11:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:49.652742628 +0000 UTC m=+2324.437169337" watchObservedRunningTime="2025-10-11 11:04:49.658850683 +0000 UTC m=+2324.443277392"
Oct 11 11:04:57.629937 master-2 kubenswrapper[4776]: I1011 11:04:57.629856 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:04:57.758911 master-1 kubenswrapper[4771]: I1011 11:04:57.758602 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"]
Oct 11 11:04:57.760789 master-1 kubenswrapper[4771]: I1011 11:04:57.760712 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="dnsmasq-dns" containerID="cri-o://6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760" gracePeriod=10
Oct 11 11:04:57.816647 master-1 kubenswrapper[4771]: I1011 11:04:57.816499 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-gg6pg"]
Oct 11 11:04:57.823555 master-1 kubenswrapper[4771]: I1011 11:04:57.823467 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.831326 master-1 kubenswrapper[4771]: I1011 11:04:57.831275 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"networkers"
Oct 11 11:04:57.831701 master-1 kubenswrapper[4771]: I1011 11:04:57.831582 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-gg6pg"]
Oct 11 11:04:57.947043 master-1 kubenswrapper[4771]: I1011 11:04:57.946944 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqqv\" (UniqueName: \"kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.947472 master-1 kubenswrapper[4771]: I1011 11:04:57.947216 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.947472 master-1 kubenswrapper[4771]: I1011 11:04:57.947263 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.947472 master-1 kubenswrapper[4771]: I1011 11:04:57.947295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.947472 master-1 kubenswrapper[4771]: I1011 11:04:57.947317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.947472 master-1 kubenswrapper[4771]: I1011 11:04:57.947457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:57.948341 master-1 kubenswrapper[4771]: I1011 11:04:57.947520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.013007 master-1 kubenswrapper[4771]: I1011 11:04:57.993092 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-gg6pg"]
Oct 11 11:04:58.013007 master-1 kubenswrapper[4771]: E1011 11:04:57.993663 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-8sqqv networkers ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-85b986dbf-gg6pg" podUID="ba5730cb-34e6-498e-995a-20912130efe3"
Oct 11 11:04:58.034278 master-2 kubenswrapper[4776]: I1011 11:04:58.034202 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"]
Oct 11 11:04:58.036023 master-2 kubenswrapper[4776]: I1011 11:04:58.035991 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.048986 master-1 kubenswrapper[4771]: I1011 11:04:58.048889 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqqv\" (UniqueName: \"kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.048986 master-1 kubenswrapper[4771]: I1011 11:04:58.048998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.049696 master-1 kubenswrapper[4771]: I1011 11:04:58.049021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.049696 master-1 kubenswrapper[4771]: I1011 11:04:58.049049 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.049696 master-1 kubenswrapper[4771]: I1011 11:04:58.049075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.049696 master-1 kubenswrapper[4771]: I1011 11:04:58.049136 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.049696 master-1 kubenswrapper[4771]: I1011 11:04:58.049195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.050299 master-1 kubenswrapper[4771]: I1011 11:04:58.050241 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.050404 master-1 kubenswrapper[4771]: I1011 11:04:58.050377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.050976 master-1 kubenswrapper[4771]: I1011 11:04:58.050917 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.051157 master-1 kubenswrapper[4771]: I1011 11:04:58.051027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.051157 master-1 kubenswrapper[4771]: I1011 11:04:58.050994 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.051688 master-1 kubenswrapper[4771]: I1011 11:04:58.051654 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.052350 master-2 kubenswrapper[4776]: I1011 11:04:58.052299 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"]
Oct 11 11:04:58.080293 master-1 kubenswrapper[4771]: I1011 11:04:58.080203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqqv\" (UniqueName: \"kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv\") pod \"dnsmasq-dns-85b986dbf-gg6pg\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:58.169812 master-2 kubenswrapper[4776]: I1011 11:04:58.169709 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.169812 master-2 kubenswrapper[4776]: I1011 11:04:58.169809 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.169872 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.169894 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.170040 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.170254 master-2 kubenswrapper[4776]: I1011 11:04:58.170217 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.170324 master-2 kubenswrapper[4776]: I1011 11:04:58.170298 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.272099 master-2 kubenswrapper[4776]: I1011 11:04:58.272019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.272099 master-2 kubenswrapper[4776]: I1011 11:04:58.272087 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\")
pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272162 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272240 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273124 master-2 kubenswrapper[4776]: I1011 11:04:58.273076 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod 
\"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273187 master-2 kubenswrapper[4776]: I1011 11:04:58.273170 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273426 master-2 kubenswrapper[4776]: I1011 11:04:58.273394 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273806 master-2 kubenswrapper[4776]: I1011 11:04:58.273779 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.274041 master-2 kubenswrapper[4776]: I1011 11:04:58.274012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.274438 master-2 kubenswrapper[4776]: I1011 11:04:58.274411 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: 
\"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.293068 master-2 kubenswrapper[4776]: I1011 11:04:58.292970 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.357462 master-2 kubenswrapper[4776]: I1011 11:04:58.357407 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.405163 master-1 kubenswrapper[4771]: I1011 11:04:58.405078 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 11:04:58.471832 master-1 kubenswrapper[4771]: I1011 11:04:58.471752 4771 generic.go:334] "Generic (PLEG): container finished" podID="396249ad-10d3-48d9-ba43-46df789198c9" containerID="6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760" exitCode=0 Oct 11 11:04:58.471832 master-1 kubenswrapper[4771]: I1011 11:04:58.471823 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" Oct 11 11:04:58.472964 master-1 kubenswrapper[4771]: I1011 11:04:58.471856 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-gg6pg" Oct 11 11:04:58.472964 master-1 kubenswrapper[4771]: I1011 11:04:58.471877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" event={"ID":"396249ad-10d3-48d9-ba43-46df789198c9","Type":"ContainerDied","Data":"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760"} Oct 11 11:04:58.472964 master-1 kubenswrapper[4771]: I1011 11:04:58.471968 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-b5qwc" event={"ID":"396249ad-10d3-48d9-ba43-46df789198c9","Type":"ContainerDied","Data":"a14cd6677525c65737d0849bb25554909c4ebb8c2b5761120df0ab99b361a3df"} Oct 11 11:04:58.472964 master-1 kubenswrapper[4771]: I1011 11:04:58.471993 4771 scope.go:117] "RemoveContainer" containerID="6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760" Oct 11 11:04:58.481064 master-1 kubenswrapper[4771]: I1011 11:04:58.481001 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-gg6pg" Oct 11 11:04:58.499801 master-1 kubenswrapper[4771]: I1011 11:04:58.499723 4771 scope.go:117] "RemoveContainer" containerID="56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2" Oct 11 11:04:58.528005 master-1 kubenswrapper[4771]: I1011 11:04:58.527928 4771 scope.go:117] "RemoveContainer" containerID="6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760" Oct 11 11:04:58.528742 master-1 kubenswrapper[4771]: E1011 11:04:58.528693 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760\": container with ID starting with 6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760 not found: ID does not exist" containerID="6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760" Oct 11 11:04:58.528825 master-1 kubenswrapper[4771]: I1011 11:04:58.528739 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760"} err="failed to get container status \"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760\": rpc error: code = NotFound desc = could not find container \"6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760\": container with ID starting with 6510e0b21de1c8b7cb314f294d9600ddbe32a0fc8c522f5123cca3a802773760 not found: ID does not exist" Oct 11 11:04:58.528825 master-1 kubenswrapper[4771]: I1011 11:04:58.528771 4771 scope.go:117] "RemoveContainer" containerID="56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2" Oct 11 11:04:58.529284 master-1 kubenswrapper[4771]: E1011 11:04:58.529242 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2\": 
container with ID starting with 56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2 not found: ID does not exist" containerID="56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2" Oct 11 11:04:58.529284 master-1 kubenswrapper[4771]: I1011 11:04:58.529269 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2"} err="failed to get container status \"56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2\": rpc error: code = NotFound desc = could not find container \"56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2\": container with ID starting with 56676cd3860cb8b15840c440ca73c01a49ae818c01bccb1e7e3f799e36cf02d2 not found: ID does not exist" Oct 11 11:04:58.563776 master-1 kubenswrapper[4771]: I1011 11:04:58.563695 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.563953 master-1 kubenswrapper[4771]: I1011 11:04:58.563782 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.563953 master-1 kubenswrapper[4771]: I1011 11:04:58.563868 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.564052 master-1 kubenswrapper[4771]: I1011 11:04:58.563984 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.564667 master-1 kubenswrapper[4771]: I1011 11:04:58.564607 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config" (OuterVolumeSpecName: "config") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.564756 master-1 kubenswrapper[4771]: I1011 11:04:58.564654 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.564847 master-1 kubenswrapper[4771]: I1011 11:04:58.564811 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.564941 master-1 kubenswrapper[4771]: I1011 11:04:58.564911 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.565020 master-1 kubenswrapper[4771]: I1011 11:04:58.564973 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.565351 master-1 kubenswrapper[4771]: I1011 11:04:58.565324 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.565488 master-1 kubenswrapper[4771]: I1011 11:04:58.565346 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.565488 master-1 kubenswrapper[4771]: I1011 11:04:58.565392 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.565488 master-1 kubenswrapper[4771]: I1011 11:04:58.565436 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sqqv\" (UniqueName: \"kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv\") pod \"ba5730cb-34e6-498e-995a-20912130efe3\" (UID: \"ba5730cb-34e6-498e-995a-20912130efe3\") " Oct 11 11:04:58.565488 master-1 kubenswrapper[4771]: I1011 11:04:58.565473 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.566491 master-1 kubenswrapper[4771]: I1011 11:04:58.565875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.566491 master-1 kubenswrapper[4771]: I1011 11:04:58.566044 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-897k7\" (UniqueName: \"kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.566491 master-1 kubenswrapper[4771]: I1011 11:04:58.566085 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0\") pod \"396249ad-10d3-48d9-ba43-46df789198c9\" (UID: \"396249ad-10d3-48d9-ba43-46df789198c9\") " Oct 11 11:04:58.566491 master-1 kubenswrapper[4771]: I1011 11:04:58.566324 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers" (OuterVolumeSpecName: "networkers") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "networkers". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.566491 master-1 kubenswrapper[4771]: I1011 11:04:58.566446 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.567431 master-1 kubenswrapper[4771]: I1011 11:04:58.567396 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.567431 master-1 kubenswrapper[4771]: I1011 11:04:58.567428 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.567604 master-1 kubenswrapper[4771]: I1011 11:04:58.567469 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.567604 master-1 kubenswrapper[4771]: I1011 11:04:58.567487 4771 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-networkers\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.567604 master-1 kubenswrapper[4771]: I1011 11:04:58.567501 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.567604 master-1 kubenswrapper[4771]: I1011 11:04:58.567513 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5730cb-34e6-498e-995a-20912130efe3-config\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.568634 master-1 kubenswrapper[4771]: I1011 11:04:58.568579 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7" (OuterVolumeSpecName: 
"kube-api-access-897k7") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "kube-api-access-897k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:04:58.570583 master-1 kubenswrapper[4771]: I1011 11:04:58.570525 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv" (OuterVolumeSpecName: "kube-api-access-8sqqv") pod "ba5730cb-34e6-498e-995a-20912130efe3" (UID: "ba5730cb-34e6-498e-995a-20912130efe3"). InnerVolumeSpecName "kube-api-access-8sqqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:04:58.604259 master-1 kubenswrapper[4771]: I1011 11:04:58.604112 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.615672 master-1 kubenswrapper[4771]: I1011 11:04:58.615602 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.620882 master-1 kubenswrapper[4771]: I1011 11:04:58.620817 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config" (OuterVolumeSpecName: "config") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.624534 master-1 kubenswrapper[4771]: I1011 11:04:58.624474 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.634202 master-1 kubenswrapper[4771]: I1011 11:04:58.634151 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "396249ad-10d3-48d9-ba43-46df789198c9" (UID: "396249ad-10d3-48d9-ba43-46df789198c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:04:58.669652 master-1 kubenswrapper[4771]: I1011 11:04:58.669606 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669874 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sqqv\" (UniqueName: \"kubernetes.io/projected/ba5730cb-34e6-498e-995a-20912130efe3-kube-api-access-8sqqv\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669894 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669908 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-897k7\" (UniqueName: 
\"kubernetes.io/projected/396249ad-10d3-48d9-ba43-46df789198c9-kube-api-access-897k7\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669920 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669934 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-config\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.670638 master-1 kubenswrapper[4771]: I1011 11:04:58.669946 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/396249ad-10d3-48d9-ba43-46df789198c9-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:04:58.873203 master-2 kubenswrapper[4776]: I1011 11:04:58.873135 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:04:58.876926 master-1 kubenswrapper[4771]: I1011 11:04:58.876747 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"] Oct 11 11:04:58.891413 master-1 kubenswrapper[4771]: I1011 11:04:58.891321 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-b5qwc"] Oct 11 11:04:59.481198 master-1 kubenswrapper[4771]: I1011 11:04:59.481149 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-gg6pg"
Oct 11 11:04:59.547592 master-1 kubenswrapper[4771]: I1011 11:04:59.547456 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-gg6pg"]
Oct 11 11:04:59.560643 master-1 kubenswrapper[4771]: I1011 11:04:59.560605 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-gg6pg"]
Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727270 4776 generic.go:334] "Generic (PLEG): container finished" podID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerID="2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53" exitCode=0
Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727383 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53"}
Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727417 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerStarted","Data":"47eafd73d1b09f5fdc1106e16dc957897cec09f84dde7bc47e0e3b86cb99cab7"}
Oct 11 11:05:00.456514 master-1 kubenswrapper[4771]: I1011 11:05:00.456428 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396249ad-10d3-48d9-ba43-46df789198c9" path="/var/lib/kubelet/pods/396249ad-10d3-48d9-ba43-46df789198c9/volumes"
Oct 11 11:05:00.458428 master-1 kubenswrapper[4771]: I1011 11:05:00.458379 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5730cb-34e6-498e-995a-20912130efe3" path="/var/lib/kubelet/pods/ba5730cb-34e6-498e-995a-20912130efe3/volumes"
Oct 11 11:05:00.744876 master-2 kubenswrapper[4776]: I1011 11:05:00.744659 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerStarted","Data":"a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7"}
Oct 11 11:05:00.745826 master-2 kubenswrapper[4776]: I1011 11:05:00.745160 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:05:00.786767 master-2 kubenswrapper[4776]: I1011 11:05:00.786668 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" podStartSLOduration=2.786648316 podStartE2EDuration="2.786648316s" podCreationTimestamp="2025-10-11 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:05:00.779946824 +0000 UTC m=+2335.564373533" watchObservedRunningTime="2025-10-11 11:05:00.786648316 +0000 UTC m=+2335.571075025"
Oct 11 11:05:01.058395 master-2 kubenswrapper[4776]: I1011 11:05:01.058230 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-nz82h"]
Oct 11 11:05:01.072839 master-2 kubenswrapper[4776]: I1011 11:05:01.072786 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-nz82h"]
Oct 11 11:05:02.071147 master-2 kubenswrapper[4776]: I1011 11:05:02.071084 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005f2579-b848-40fd-b3f3-2d3383344047" path="/var/lib/kubelet/pods/005f2579-b848-40fd-b3f3-2d3383344047/volumes"
Oct 11 11:05:03.052964 master-2 kubenswrapper[4776]: I1011 11:05:03.052907 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2dgxj"]
Oct 11 11:05:03.060240 master-2 kubenswrapper[4776]: I1011 11:05:03.060175 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"]
Oct 11 11:05:03.073291 master-2 kubenswrapper[4776]: I1011 11:05:03.073221 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"]
Oct 11 11:05:03.078839 master-2 kubenswrapper[4776]: I1011 11:05:03.078367 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2dgxj"]
Oct 11 11:05:04.067472 master-2 kubenswrapper[4776]: I1011 11:05:04.067410 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" path="/var/lib/kubelet/pods/c4f2a1bf-160f-40ad-bc2c-a7286a90b988/volumes"
Oct 11 11:05:04.068162 master-2 kubenswrapper[4776]: I1011 11:05:04.068129 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" path="/var/lib/kubelet/pods/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38/volumes"
Oct 11 11:05:08.359657 master-2 kubenswrapper[4776]: I1011 11:05:08.359596 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9"
Oct 11 11:05:08.481964 master-0 kubenswrapper[4790]: I1011 11:05:08.479054 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"]
Oct 11 11:05:08.481964 master-0 kubenswrapper[4790]: I1011 11:05:08.479950 4790 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="dnsmasq-dns" containerID="cri-o://0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c" gracePeriod=10
Oct 11 11:05:08.529793 master-1 kubenswrapper[4771]: I1011 11:05:08.529683 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"]
Oct 11 11:05:08.530811 master-1 kubenswrapper[4771]: E1011 11:05:08.530198 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="init"
Oct 11 11:05:08.530811 master-1 kubenswrapper[4771]: I1011 11:05:08.530215 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="init"
Oct 11 11:05:08.530811 master-1 kubenswrapper[4771]: E1011 11:05:08.530238 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="dnsmasq-dns"
Oct 11 11:05:08.530811 master-1 kubenswrapper[4771]: I1011 11:05:08.530246 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="dnsmasq-dns"
Oct 11 11:05:08.530811 master-1 kubenswrapper[4771]: I1011 11:05:08.530497 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="396249ad-10d3-48d9-ba43-46df789198c9" containerName="dnsmasq-dns"
Oct 11 11:05:08.531756 master-1 kubenswrapper[4771]: I1011 11:05:08.531713 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.556080 master-1 kubenswrapper[4771]: I1011 11:05:08.539114 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 11 11:05:08.556080 master-1 kubenswrapper[4771]: I1011 11:05:08.539216 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 11:05:08.556080 master-1 kubenswrapper[4771]: I1011 11:05:08.539384 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"networkers"
Oct 11 11:05:08.557062 master-1 kubenswrapper[4771]: I1011 11:05:08.557000 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 11:05:08.557431 master-1 kubenswrapper[4771]: I1011 11:05:08.557406 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 11:05:08.557522 master-1 kubenswrapper[4771]: I1011 11:05:08.557459 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 11:05:08.600120 master-1 kubenswrapper[4771]: I1011 11:05:08.600038 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"]
Oct 11 11:05:08.649196 master-1 kubenswrapper[4771]: I1011 11:05:08.649121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649196 master-1 kubenswrapper[4771]: I1011 11:05:08.649225 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649572 master-1 kubenswrapper[4771]: I1011 11:05:08.649317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649572 master-1 kubenswrapper[4771]: I1011 11:05:08.649374 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649572 master-1 kubenswrapper[4771]: I1011 11:05:08.649432 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649572 master-1 kubenswrapper[4771]: I1011 11:05:08.649500 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwf6\" (UniqueName: \"kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.649730 master-1 kubenswrapper[4771]: I1011 11:05:08.649589 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.752836 master-1 kubenswrapper[4771]: I1011 11:05:08.752634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.752836 master-1 kubenswrapper[4771]: I1011 11:05:08.752764 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.753185 master-1 kubenswrapper[4771]: I1011 11:05:08.752860 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.753185 master-1 kubenswrapper[4771]: I1011 11:05:08.752916 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.753185 master-1 kubenswrapper[4771]: I1011 11:05:08.752952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.753864 master-1 kubenswrapper[4771]: I1011 11:05:08.753813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.753968 master-1 kubenswrapper[4771]: I1011 11:05:08.753903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.754721 master-1 kubenswrapper[4771]: I1011 11:05:08.754677 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.754864 master-1 kubenswrapper[4771]: I1011 11:05:08.754818 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwf6\" (UniqueName: \"kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.755015 master-1 kubenswrapper[4771]: I1011 11:05:08.754563 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.755015 master-1 kubenswrapper[4771]: I1011 11:05:08.754492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.755693 master-1 kubenswrapper[4771]: I1011 11:05:08.755650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.755693 master-1 kubenswrapper[4771]: I1011 11:05:08.754542 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.782006 master-1 kubenswrapper[4771]: I1011 11:05:08.781918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwf6\" (UniqueName: \"kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6\") pod \"dnsmasq-dns-59dd57778c-jshg9\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:08.878177 master-1 kubenswrapper[4771]: I1011 11:05:08.878082 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:09.118018 master-0 kubenswrapper[4790]: I1011 11:05:09.117941 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg"
Oct 11 11:05:09.211572 master-0 kubenswrapper[4790]: I1011 11:05:09.211487 4790 generic.go:334] "Generic (PLEG): container finished" podID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerID="0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c" exitCode=0
Oct 11 11:05:09.211572 master-0 kubenswrapper[4790]: I1011 11:05:09.211562 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" event={"ID":"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c","Type":"ContainerDied","Data":"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"}
Oct 11 11:05:09.211947 master-0 kubenswrapper[4790]: I1011 11:05:09.211600 4790 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg"
Oct 11 11:05:09.211947 master-0 kubenswrapper[4790]: I1011 11:05:09.211608 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb9b8c955-g7qzg" event={"ID":"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c","Type":"ContainerDied","Data":"82ab564208fa75b6a416368edc3991b9aae0b1bdbf1f7ab61745c571e8067316"}
Oct 11 11:05:09.211947 master-0 kubenswrapper[4790]: I1011 11:05:09.211625 4790 scope.go:117] "RemoveContainer" containerID="0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"
Oct 11 11:05:09.232183 master-0 kubenswrapper[4790]: I1011 11:05:09.232124 4790 scope.go:117] "RemoveContainer" containerID="148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079"
Oct 11 11:05:09.258404 master-0 kubenswrapper[4790]: I1011 11:05:09.258334 4790 scope.go:117] "RemoveContainer" containerID="0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"
Oct 11 11:05:09.259377 master-0 kubenswrapper[4790]: E1011 11:05:09.259296 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c\": container with ID starting with 0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c not found: ID does not exist" containerID="0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"
Oct 11 11:05:09.259452 master-0 kubenswrapper[4790]: I1011 11:05:09.259404 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.259502 master-0 kubenswrapper[4790]: I1011 11:05:09.259475 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.259547 master-0 kubenswrapper[4790]: I1011 11:05:09.259407 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c"} err="failed to get container status \"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c\": rpc error: code = NotFound desc = could not find container \"0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c\": container with ID starting with 0bbeab6347207b28833870198731d207ead777fe2d62b52c20f939994946673c not found: ID does not exist"
Oct 11 11:05:09.259603 master-0 kubenswrapper[4790]: I1011 11:05:09.259575 4790 scope.go:117] "RemoveContainer" containerID="148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079"
Oct 11 11:05:09.259780 master-0 kubenswrapper[4790]: I1011 11:05:09.259740 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4srbk\" (UniqueName: \"kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.260304 master-0 kubenswrapper[4790]: I1011 11:05:09.260263 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.260366 master-0 kubenswrapper[4790]: I1011 11:05:09.260326 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.260414 master-0 kubenswrapper[4790]: E1011 11:05:09.260350 4790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079\": container with ID starting with 148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079 not found: ID does not exist" containerID="148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079"
Oct 11 11:05:09.260487 master-0 kubenswrapper[4790]: I1011 11:05:09.260433 4790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079"} err="failed to get container status \"148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079\": rpc error: code = NotFound desc = could not find container \"148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079\": container with ID starting with 148a01e0193a2980ace147ec2984826e51e9857bd60dd500d2908e53b675c079 not found: ID does not exist"
Oct 11 11:05:09.260487 master-0 kubenswrapper[4790]: I1011 11:05:09.260372 4790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb\") pod \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\" (UID: \"aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c\") "
Oct 11 11:05:09.263221 master-0 kubenswrapper[4790]: I1011 11:05:09.263144 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk" (OuterVolumeSpecName: "kube-api-access-4srbk") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "kube-api-access-4srbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:05:09.301046 master-0 kubenswrapper[4790]: I1011 11:05:09.300967 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 11:05:09.303653 master-0 kubenswrapper[4790]: I1011 11:05:09.303605 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 11:05:09.304953 master-0 kubenswrapper[4790]: I1011 11:05:09.304902 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 11:05:09.312058 master-0 kubenswrapper[4790]: I1011 11:05:09.311989 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config" (OuterVolumeSpecName: "config") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 11:05:09.316951 master-0 kubenswrapper[4790]: I1011 11:05:09.316878 4790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" (UID: "aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 11:05:09.351988 master-1 kubenswrapper[4771]: I1011 11:05:09.351918 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"]
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363823 4790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4srbk\" (UniqueName: \"kubernetes.io/projected/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-kube-api-access-4srbk\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363880 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363899 4790 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363913 4790 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363926 4790 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-dns-svc\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.363912 master-0 kubenswrapper[4790]: I1011 11:05:09.363938 4790 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c-config\") on node \"master-0\" DevicePath \"\""
Oct 11 11:05:09.561992 master-0 kubenswrapper[4790]: I1011 11:05:09.561847 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"]
Oct 11 11:05:09.569246 master-0 kubenswrapper[4790]: I1011 11:05:09.569143 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb9b8c955-g7qzg"]
Oct 11 11:05:09.623454 master-1 kubenswrapper[4771]: I1011 11:05:09.623232 4771 generic.go:334] "Generic (PLEG): container finished" podID="460e6e8f-ccc4-4952-934c-1d3229573074" containerID="4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f" exitCode=0
Oct 11 11:05:09.623454 master-1 kubenswrapper[4771]: I1011 11:05:09.623336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" event={"ID":"460e6e8f-ccc4-4952-934c-1d3229573074","Type":"ContainerDied","Data":"4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f"}
Oct 11 11:05:09.623454 master-1 kubenswrapper[4771]: I1011 11:05:09.623416 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" event={"ID":"460e6e8f-ccc4-4952-934c-1d3229573074","Type":"ContainerStarted","Data":"e322bd0a72d963bfb151ddd53adda74fdeefdc268230c1597845f51e0682ee69"}
Oct 11 11:05:10.307169 master-0 kubenswrapper[4790]: I1011 11:05:10.307057 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" path="/var/lib/kubelet/pods/aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c/volumes"
Oct 11 11:05:10.639459 master-1 kubenswrapper[4771]: I1011 11:05:10.639240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" event={"ID":"460e6e8f-ccc4-4952-934c-1d3229573074","Type":"ContainerStarted","Data":"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47"}
Oct 11 11:05:10.641165 master-1 kubenswrapper[4771]: I1011 11:05:10.640100 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:10.675730 master-1 kubenswrapper[4771]: I1011 11:05:10.675631 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" podStartSLOduration=2.675602021 podStartE2EDuration="2.675602021s" podCreationTimestamp="2025-10-11 11:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:05:10.671989668 +0000 UTC m=+2342.646216159" watchObservedRunningTime="2025-10-11 11:05:10.675602021 +0000 UTC m=+2342.649828462"
Oct 11 11:05:13.064008 master-2 kubenswrapper[4776]: I1011 11:05:13.063923 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-n7nm2"]
Oct 11 11:05:13.073332 master-2 kubenswrapper[4776]: I1011 11:05:13.073246 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-n7nm2"]
Oct 11 11:05:14.119935 master-2 kubenswrapper[4776]: I1011 11:05:14.119827 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" path="/var/lib/kubelet/pods/4a1c4d38-1f25-4465-9976-43be28a3b282/volumes"
Oct 11 11:05:17.588173 master-2 kubenswrapper[4776]: I1011 11:05:17.588107 4776 scope.go:117] "RemoveContainer" containerID="076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77"
Oct 11 11:05:17.659066 master-2 kubenswrapper[4776]: I1011 11:05:17.658899 4776 scope.go:117] "RemoveContainer" containerID="a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f"
Oct 11 11:05:17.695992 master-2 kubenswrapper[4776]: I1011 11:05:17.695934 4776 scope.go:117] "RemoveContainer" containerID="5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171"
Oct 11 11:05:17.754787 master-2 kubenswrapper[4776]: I1011 11:05:17.754725 4776 scope.go:117] "RemoveContainer" containerID="f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826"
Oct 11 11:05:17.842381 master-2 kubenswrapper[4776]: I1011 11:05:17.842087 4776 scope.go:117] "RemoveContainer" containerID="9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e"
Oct 11 11:05:17.875700 master-2 kubenswrapper[4776]: I1011 11:05:17.875641 4776 scope.go:117] "RemoveContainer" containerID="3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7"
Oct 11 11:05:17.913874 master-2 kubenswrapper[4776]: I1011 11:05:17.913828 4776 scope.go:117] "RemoveContainer" containerID="b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a"
Oct 11 11:05:17.969264 master-2 kubenswrapper[4776]: I1011 11:05:17.969217 4776 scope.go:117] "RemoveContainer" containerID="7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc"
Oct 11 11:05:18.029064 master-2 kubenswrapper[4776]: I1011 11:05:18.029015 4776 scope.go:117] "RemoveContainer" containerID="5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d"
Oct 11 11:05:18.879672 master-1 kubenswrapper[4771]: I1011 11:05:18.879590 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59dd57778c-jshg9"
Oct 11 11:05:18.988716 master-2 kubenswrapper[4776]: I1011 11:05:18.987988 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"]
Oct 11 11:05:18.988716 master-2 kubenswrapper[4776]: I1011 11:05:18.988254 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" containerID="cri-o://207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" gracePeriod=10
Oct 11 11:05:19.566788 master-0 kubenswrapper[4790]: I1011 11:05:19.566678 4790 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-ljt8j"]
Oct 11 11:05:19.567533 master-0 kubenswrapper[4790]: E1011 11:05:19.567225 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="init"
Oct 11 11:05:19.567533 master-0 kubenswrapper[4790]: I1011 11:05:19.567250 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="init"
Oct 11 11:05:19.567533 master-0 kubenswrapper[4790]: E1011 11:05:19.567300 4790 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="dnsmasq-dns"
Oct 11 11:05:19.567533 master-0 kubenswrapper[4790]: I1011 11:05:19.567313 4790 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="dnsmasq-dns"
Oct 11 11:05:19.567671 master-0 kubenswrapper[4790]: I1011 11:05:19.567612 4790 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea1c85b-85f7-4996-bf3b-7f5ecf1e1d8c" containerName="dnsmasq-dns"
Oct 11 11:05:19.569311 master-0 kubenswrapper[4790]: I1011 11:05:19.569275 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.573324 master-0 kubenswrapper[4790]: I1011 11:05:19.573280 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 11 11:05:19.573666 master-0 kubenswrapper[4790]: I1011 11:05:19.573636 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm"
Oct 11 11:05:19.573912 master-0 kubenswrapper[4790]: I1011 11:05:19.573850 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 11:05:19.574256 master-0 kubenswrapper[4790]: I1011 11:05:19.574221 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"networkers"
Oct 11 11:05:19.575608 master-0 kubenswrapper[4790]: I1011 11:05:19.575537 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 11:05:19.575796 master-0 kubenswrapper[4790]: I1011 11:05:19.575758 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 11 11:05:19.576399 master-0 kubenswrapper[4790]: I1011 11:05:19.576344 4790 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 11:05:19.585454 master-0 kubenswrapper[4790]: I1011 11:05:19.585387 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-ljt8j"]
Oct 11 11:05:19.670701 master-2 kubenswrapper[4776]: I1011 11:05:19.668269 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt"
Oct 11 11:05:19.741147 master-0 kubenswrapper[4790]: I1011 11:05:19.741053 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741147 master-0 kubenswrapper[4790]: I1011 11:05:19.741131 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741171 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741215 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-config\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741245 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-networkers\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741267 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-edpm\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741292 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4r2n\" (UniqueName: \"kubernetes.io/projected/94ed578b-910e-4144-a5c9-6d5e7a585b3d-kube-api-access-l4r2n\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.741495 master-0 kubenswrapper[4790]: I1011 11:05:19.741319 4790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j"
Oct 11 11:05:19.755571 master-2 kubenswrapper[4776]: I1011 11:05:19.755494 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") "
Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755656 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\"
(UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755745 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755796 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755955 master-2 kubenswrapper[4776]: I1011 11:05:19.755865 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755955 master-2 kubenswrapper[4776]: I1011 11:05:19.755902 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.756024 master-2 kubenswrapper[4776]: I1011 11:05:19.755953 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: 
\"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.761776 master-2 kubenswrapper[4776]: I1011 11:05:19.761712 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv" (OuterVolumeSpecName: "kube-api-access-qs8hv") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "kube-api-access-qs8hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:05:19.807423 master-2 kubenswrapper[4776]: I1011 11:05:19.807363 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.811257 master-2 kubenswrapper[4776]: I1011 11:05:19.811194 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.812920 master-2 kubenswrapper[4776]: I1011 11:05:19.812849 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.816874 master-2 kubenswrapper[4776]: I1011 11:05:19.816770 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config" (OuterVolumeSpecName: "config") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.822868 master-2 kubenswrapper[4776]: I1011 11:05:19.822786 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.832317 master-2 kubenswrapper[4776]: I1011 11:05:19.832250 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers" (OuterVolumeSpecName: "networkers") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "networkers". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.843455 master-0 kubenswrapper[4790]: I1011 11:05:19.843299 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843455 master-0 kubenswrapper[4790]: I1011 11:05:19.843380 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843455 master-0 kubenswrapper[4790]: I1011 11:05:19.843426 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843771 master-0 kubenswrapper[4790]: I1011 11:05:19.843462 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-config\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843771 master-0 kubenswrapper[4790]: I1011 11:05:19.843496 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-networkers\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " 
pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843771 master-0 kubenswrapper[4790]: I1011 11:05:19.843521 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-edpm\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843771 master-0 kubenswrapper[4790]: I1011 11:05:19.843549 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4r2n\" (UniqueName: \"kubernetes.io/projected/94ed578b-910e-4144-a5c9-6d5e7a585b3d-kube-api-access-l4r2n\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.843771 master-0 kubenswrapper[4790]: I1011 11:05:19.843572 4790 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.844671 master-0 kubenswrapper[4790]: I1011 11:05:19.844622 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.844831 master-0 kubenswrapper[4790]: I1011 11:05:19.844799 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " 
pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.844831 master-0 kubenswrapper[4790]: I1011 11:05:19.844815 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-edpm\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.845418 master-0 kubenswrapper[4790]: I1011 11:05:19.845363 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.845462 master-0 kubenswrapper[4790]: I1011 11:05:19.845424 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.845768 master-0 kubenswrapper[4790]: I1011 11:05:19.845671 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-networkers\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.845823 master-0 kubenswrapper[4790]: I1011 11:05:19.845671 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ed578b-910e-4144-a5c9-6d5e7a585b3d-config\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.858619 master-2 
kubenswrapper[4776]: I1011 11:05:19.858558 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858605 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858614 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858623 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858634 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858643 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858651 4776 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.863035 master-0 
kubenswrapper[4790]: I1011 11:05:19.862973 4790 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4r2n\" (UniqueName: \"kubernetes.io/projected/94ed578b-910e-4144-a5c9-6d5e7a585b3d-kube-api-access-l4r2n\") pod \"dnsmasq-dns-f984c5fd9-ljt8j\" (UID: \"94ed578b-910e-4144-a5c9-6d5e7a585b3d\") " pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.894329 master-0 kubenswrapper[4790]: I1011 11:05:19.894237 4790 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:19.967793 master-2 kubenswrapper[4776]: I1011 11:05:19.967714 4776 generic.go:334] "Generic (PLEG): container finished" podID="a63f7af9-5ea2-4091-901f-6d9187377785" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" exitCode=0 Oct 11 11:05:19.967793 master-2 kubenswrapper[4776]: I1011 11:05:19.967789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"} Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967821 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967839 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"7e86f6a0692217b02770a72e3b5288c4aba15fbb51a8be198df074c909a4ebea"} Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967887 4776 scope.go:117] "RemoveContainer" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:19.993345 master-2 kubenswrapper[4776]: I1011 11:05:19.993281 4776 scope.go:117] "RemoveContainer" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.020697 master-2 kubenswrapper[4776]: I1011 11:05:20.020628 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:05:20.029537 master-2 kubenswrapper[4776]: I1011 11:05:20.029420 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:05:20.031443 master-2 kubenswrapper[4776]: I1011 11:05:20.031345 4776 scope.go:117] "RemoveContainer" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:20.032062 master-2 kubenswrapper[4776]: E1011 11:05:20.032012 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": container with ID starting with 207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64 not found: ID does not exist" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:20.032121 master-2 kubenswrapper[4776]: I1011 11:05:20.032072 4776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"} err="failed to get container status \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": rpc error: code = NotFound desc = could not find container \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": container with ID starting with 207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64 not found: ID does not exist" Oct 11 11:05:20.032121 master-2 kubenswrapper[4776]: I1011 11:05:20.032111 4776 scope.go:117] "RemoveContainer" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.032661 master-2 kubenswrapper[4776]: E1011 11:05:20.032611 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": container with ID starting with eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22 not found: ID does not exist" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.032740 master-2 kubenswrapper[4776]: I1011 11:05:20.032690 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22"} err="failed to get container status \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": rpc error: code = NotFound desc = could not find container \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": container with ID starting with eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22 not found: ID does not exist" Oct 11 11:05:20.073328 master-2 kubenswrapper[4776]: I1011 11:05:20.073154 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" path="/var/lib/kubelet/pods/a63f7af9-5ea2-4091-901f-6d9187377785/volumes" 
Oct 11 11:05:20.340500 master-0 kubenswrapper[4790]: I1011 11:05:20.340462 4790 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-ljt8j"] Oct 11 11:05:20.340916 master-0 kubenswrapper[4790]: W1011 11:05:20.340874 4790 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ed578b_910e_4144_a5c9_6d5e7a585b3d.slice/crio-2b1492c58561187ead8d6fb953bd7524cf9be19d615611ad419076f6d1580f7a WatchSource:0}: Error finding container 2b1492c58561187ead8d6fb953bd7524cf9be19d615611ad419076f6d1580f7a: Status 404 returned error can't find the container with id 2b1492c58561187ead8d6fb953bd7524cf9be19d615611ad419076f6d1580f7a Oct 11 11:05:21.341695 master-0 kubenswrapper[4790]: I1011 11:05:21.341619 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" event={"ID":"94ed578b-910e-4144-a5c9-6d5e7a585b3d","Type":"ContainerDied","Data":"8782cb137f123e61f9d257a7ae5d142f54f9b330f8c57f8e1b1a243ba9b0a7d7"} Oct 11 11:05:21.342281 master-0 kubenswrapper[4790]: I1011 11:05:21.341545 4790 generic.go:334] "Generic (PLEG): container finished" podID="94ed578b-910e-4144-a5c9-6d5e7a585b3d" containerID="8782cb137f123e61f9d257a7ae5d142f54f9b330f8c57f8e1b1a243ba9b0a7d7" exitCode=0 Oct 11 11:05:21.342281 master-0 kubenswrapper[4790]: I1011 11:05:21.341796 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" event={"ID":"94ed578b-910e-4144-a5c9-6d5e7a585b3d","Type":"ContainerStarted","Data":"2b1492c58561187ead8d6fb953bd7524cf9be19d615611ad419076f6d1580f7a"} Oct 11 11:05:22.352951 master-0 kubenswrapper[4790]: I1011 11:05:22.352887 4790 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" event={"ID":"94ed578b-910e-4144-a5c9-6d5e7a585b3d","Type":"ContainerStarted","Data":"ebab5a442a872a623eba61a0b0792dc58e71e175b0b733a278bc74216b925ae7"} Oct 11 11:05:22.353503 
master-0 kubenswrapper[4790]: I1011 11:05:22.353094 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:22.391192 master-0 kubenswrapper[4790]: I1011 11:05:22.390964 4790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" podStartSLOduration=3.390935947 podStartE2EDuration="3.390935947s" podCreationTimestamp="2025-10-11 11:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:05:22.389430826 +0000 UTC m=+1598.943891138" watchObservedRunningTime="2025-10-11 11:05:22.390935947 +0000 UTC m=+1598.945396239" Oct 11 11:05:29.898700 master-0 kubenswrapper[4790]: I1011 11:05:29.898634 4790 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f984c5fd9-ljt8j" Oct 11 11:05:30.005895 master-1 kubenswrapper[4771]: I1011 11:05:30.005783 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"] Oct 11 11:05:30.008795 master-1 kubenswrapper[4771]: I1011 11:05:30.006189 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="dnsmasq-dns" containerID="cri-o://47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47" gracePeriod=10 Oct 11 11:05:30.057081 master-1 kubenswrapper[4771]: I1011 11:05:30.056995 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-vj7wp"] Oct 11 11:05:30.064142 master-1 kubenswrapper[4771]: I1011 11:05:30.061937 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.066296 master-1 kubenswrapper[4771]: I1011 11:05:30.066218 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"edpm" Oct 11 11:05:30.075413 master-1 kubenswrapper[4771]: I1011 11:05:30.075308 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-vj7wp"] Oct 11 11:05:30.111072 master-0 kubenswrapper[4790]: I1011 11:05:30.110808 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rgsq2"] Oct 11 11:05:30.121933 master-0 kubenswrapper[4790]: I1011 11:05:30.121826 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rgsq2"] Oct 11 11:05:30.259761 master-1 kubenswrapper[4771]: I1011 11:05:30.259493 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260115 master-1 kubenswrapper[4771]: I1011 11:05:30.259768 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-networkers\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260115 master-1 kubenswrapper[4771]: I1011 11:05:30.259882 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " 
pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260115 master-1 kubenswrapper[4771]: I1011 11:05:30.259920 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260273 master-1 kubenswrapper[4771]: I1011 11:05:30.260173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczc9\" (UniqueName: \"kubernetes.io/projected/73899937-c48a-4a79-9bc7-c5f4987908c3-kube-api-access-hczc9\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260273 master-1 kubenswrapper[4771]: I1011 11:05:30.260213 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-config\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260273 master-1 kubenswrapper[4771]: I1011 11:05:30.260247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.260679 master-1 kubenswrapper[4771]: I1011 11:05:30.260438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-edpm\") pod 
\"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.302743 master-0 kubenswrapper[4790]: I1011 11:05:30.302683 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb6dd47-2665-4a3f-8773-2a61034146a3" path="/var/lib/kubelet/pods/7bb6dd47-2665-4a3f-8773-2a61034146a3/volumes" Oct 11 11:05:30.364427 master-1 kubenswrapper[4771]: I1011 11:05:30.364232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364466 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-networkers\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364553 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364603 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 
11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hczc9\" (UniqueName: \"kubernetes.io/projected/73899937-c48a-4a79-9bc7-c5f4987908c3-kube-api-access-hczc9\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364740 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-config\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.364811 master-1 kubenswrapper[4771]: I1011 11:05:30.364778 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.365034 master-1 kubenswrapper[4771]: I1011 11:05:30.364830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-edpm\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.367636 master-1 kubenswrapper[4771]: I1011 11:05:30.367578 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-edpm\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.369629 master-1 kubenswrapper[4771]: I1011 11:05:30.369566 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-sb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.371134 master-1 kubenswrapper[4771]: I1011 11:05:30.371079 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-networkers\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.373477 master-1 kubenswrapper[4771]: I1011 11:05:30.373390 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-swift-storage-0\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.373955 master-1 kubenswrapper[4771]: I1011 11:05:30.373844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-ovsdbserver-nb\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.374273 master-1 kubenswrapper[4771]: I1011 11:05:30.374240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-config\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.377347 master-1 kubenswrapper[4771]: I1011 11:05:30.377280 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73899937-c48a-4a79-9bc7-c5f4987908c3-dns-svc\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.408852 master-1 kubenswrapper[4771]: I1011 11:05:30.408786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hczc9\" (UniqueName: \"kubernetes.io/projected/73899937-c48a-4a79-9bc7-c5f4987908c3-kube-api-access-hczc9\") pod \"dnsmasq-dns-f984c5fd9-vj7wp\" (UID: \"73899937-c48a-4a79-9bc7-c5f4987908c3\") " pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.461515 master-1 kubenswrapper[4771]: I1011 11:05:30.461448 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:30.564176 master-1 kubenswrapper[4771]: I1011 11:05:30.563042 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" Oct 11 11:05:30.672922 master-1 kubenswrapper[4771]: I1011 11:05:30.672837 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.673142 master-1 kubenswrapper[4771]: I1011 11:05:30.672982 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smwf6\" (UniqueName: \"kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.673142 master-1 kubenswrapper[4771]: I1011 11:05:30.673127 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.673315 master-1 kubenswrapper[4771]: I1011 11:05:30.673283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.673391 master-1 kubenswrapper[4771]: I1011 11:05:30.673313 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.673391 master-1 kubenswrapper[4771]: I1011 11:05:30.673359 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.674097 master-1 kubenswrapper[4771]: I1011 11:05:30.674068 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers\") pod \"460e6e8f-ccc4-4952-934c-1d3229573074\" (UID: \"460e6e8f-ccc4-4952-934c-1d3229573074\") " Oct 11 11:05:30.679668 master-1 kubenswrapper[4771]: I1011 11:05:30.679626 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6" (OuterVolumeSpecName: "kube-api-access-smwf6") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). 
InnerVolumeSpecName "kube-api-access-smwf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:05:30.725789 master-1 kubenswrapper[4771]: I1011 11:05:30.725713 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.733647 master-1 kubenswrapper[4771]: I1011 11:05:30.733534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers" (OuterVolumeSpecName: "networkers") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "networkers". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.739047 master-1 kubenswrapper[4771]: I1011 11:05:30.738997 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config" (OuterVolumeSpecName: "config") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.742422 master-1 kubenswrapper[4771]: I1011 11:05:30.742370 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.749976 master-1 kubenswrapper[4771]: I1011 11:05:30.749936 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.753839 master-1 kubenswrapper[4771]: I1011 11:05:30.753763 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "460e6e8f-ccc4-4952-934c-1d3229573074" (UID: "460e6e8f-ccc4-4952-934c-1d3229573074"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:30.777035 master-1 kubenswrapper[4771]: I1011 11:05:30.776985 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smwf6\" (UniqueName: \"kubernetes.io/projected/460e6e8f-ccc4-4952-934c-1d3229573074-kube-api-access-smwf6\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777035 master-1 kubenswrapper[4771]: I1011 11:05:30.777019 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777178 master-1 kubenswrapper[4771]: I1011 11:05:30.777048 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-config\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777178 master-1 kubenswrapper[4771]: I1011 11:05:30.777060 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777178 master-1 kubenswrapper[4771]: I1011 11:05:30.777069 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777178 master-1 kubenswrapper[4771]: I1011 11:05:30.777079 4771 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-networkers\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.777178 master-1 kubenswrapper[4771]: I1011 11:05:30.777087 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/460e6e8f-ccc4-4952-934c-1d3229573074-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 11 11:05:30.856225 master-1 kubenswrapper[4771]: I1011 11:05:30.856030 4771 generic.go:334] "Generic (PLEG): container finished" podID="460e6e8f-ccc4-4952-934c-1d3229573074" containerID="47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47" exitCode=0 Oct 11 11:05:30.856225 master-1 kubenswrapper[4771]: I1011 11:05:30.856084 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" event={"ID":"460e6e8f-ccc4-4952-934c-1d3229573074","Type":"ContainerDied","Data":"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47"} Oct 11 11:05:30.856225 master-1 kubenswrapper[4771]: I1011 11:05:30.856115 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" event={"ID":"460e6e8f-ccc4-4952-934c-1d3229573074","Type":"ContainerDied","Data":"e322bd0a72d963bfb151ddd53adda74fdeefdc268230c1597845f51e0682ee69"} Oct 11 11:05:30.856225 master-1 kubenswrapper[4771]: I1011 11:05:30.856134 4771 scope.go:117] "RemoveContainer" 
containerID="47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47" Oct 11 11:05:30.856786 master-1 kubenswrapper[4771]: I1011 11:05:30.856262 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-jshg9" Oct 11 11:05:30.911173 master-1 kubenswrapper[4771]: I1011 11:05:30.910664 4771 scope.go:117] "RemoveContainer" containerID="4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f" Oct 11 11:05:30.933295 master-1 kubenswrapper[4771]: I1011 11:05:30.933225 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"] Oct 11 11:05:30.940017 master-1 kubenswrapper[4771]: I1011 11:05:30.939970 4771 scope.go:117] "RemoveContainer" containerID="47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47" Oct 11 11:05:30.940792 master-1 kubenswrapper[4771]: E1011 11:05:30.940728 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47\": container with ID starting with 47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47 not found: ID does not exist" containerID="47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47" Oct 11 11:05:30.940917 master-1 kubenswrapper[4771]: I1011 11:05:30.940793 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47"} err="failed to get container status \"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47\": rpc error: code = NotFound desc = could not find container \"47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47\": container with ID starting with 47ed2540bea8eec9542f1728675554694e1d171301942dba021c31ee5d74ea47 not found: ID does not exist" Oct 11 11:05:30.940917 master-1 kubenswrapper[4771]: I1011 11:05:30.940811 
4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-jshg9"] Oct 11 11:05:30.940917 master-1 kubenswrapper[4771]: I1011 11:05:30.940829 4771 scope.go:117] "RemoveContainer" containerID="4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f" Oct 11 11:05:30.941686 master-1 kubenswrapper[4771]: E1011 11:05:30.941626 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f\": container with ID starting with 4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f not found: ID does not exist" containerID="4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f" Oct 11 11:05:30.941793 master-1 kubenswrapper[4771]: I1011 11:05:30.941687 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f"} err="failed to get container status \"4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f\": rpc error: code = NotFound desc = could not find container \"4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f\": container with ID starting with 4cd14af125b21bd0750a27ba68476f9cdd511a9783b05c8056f3c380232ba59f not found: ID does not exist" Oct 11 11:05:30.993407 master-1 kubenswrapper[4771]: I1011 11:05:30.992141 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f984c5fd9-vj7wp"] Oct 11 11:05:31.871927 master-1 kubenswrapper[4771]: I1011 11:05:31.871846 4771 generic.go:334] "Generic (PLEG): container finished" podID="73899937-c48a-4a79-9bc7-c5f4987908c3" containerID="20cfd66e6d311e07fd81153ee758ffd6072059205c2c6725635c8ceba6ca1b1e" exitCode=0 Oct 11 11:05:31.872723 master-1 kubenswrapper[4771]: I1011 11:05:31.871936 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" 
event={"ID":"73899937-c48a-4a79-9bc7-c5f4987908c3","Type":"ContainerDied","Data":"20cfd66e6d311e07fd81153ee758ffd6072059205c2c6725635c8ceba6ca1b1e"} Oct 11 11:05:31.872723 master-1 kubenswrapper[4771]: I1011 11:05:31.871997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" event={"ID":"73899937-c48a-4a79-9bc7-c5f4987908c3","Type":"ContainerStarted","Data":"f10d01a7069e7e8cacab614b10bb215aa0cd2faf882c6288c6c10ab2628e80d2"} Oct 11 11:05:32.455481 master-1 kubenswrapper[4771]: I1011 11:05:32.455239 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" path="/var/lib/kubelet/pods/460e6e8f-ccc4-4952-934c-1d3229573074/volumes" Oct 11 11:05:32.889820 master-1 kubenswrapper[4771]: I1011 11:05:32.889702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" event={"ID":"73899937-c48a-4a79-9bc7-c5f4987908c3","Type":"ContainerStarted","Data":"df1802ddf887326037a580eaa64afb2b722cbb15026b43331527ade8e1ffb855"} Oct 11 11:05:32.890858 master-1 kubenswrapper[4771]: I1011 11:05:32.889935 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:32.927821 master-1 kubenswrapper[4771]: I1011 11:05:32.927725 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" podStartSLOduration=2.927702632 podStartE2EDuration="2.927702632s" podCreationTimestamp="2025-10-11 11:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:05:32.924530951 +0000 UTC m=+2364.898757432" watchObservedRunningTime="2025-10-11 11:05:32.927702632 +0000 UTC m=+2364.901929083" Oct 11 11:05:36.065837 master-0 kubenswrapper[4790]: I1011 11:05:36.065764 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-nw6gg"] Oct 11 11:05:36.077932 master-0 kubenswrapper[4790]: I1011 11:05:36.077845 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-r9jnj"] Oct 11 11:05:36.089389 master-0 kubenswrapper[4790]: I1011 11:05:36.089348 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nw6gg"] Oct 11 11:05:36.100831 master-0 kubenswrapper[4790]: I1011 11:05:36.100790 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-r9jnj"] Oct 11 11:05:36.309745 master-0 kubenswrapper[4790]: I1011 11:05:36.309620 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7ae2a3-6802-400c-bbe7-5729052a2c1c" path="/var/lib/kubelet/pods/5b7ae2a3-6802-400c-bbe7-5729052a2c1c/volumes" Oct 11 11:05:36.310821 master-0 kubenswrapper[4790]: I1011 11:05:36.310775 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0929b8-354d-4de6-9e2d-ac6e11324b10" path="/var/lib/kubelet/pods/8b0929b8-354d-4de6-9e2d-ac6e11324b10/volumes" Oct 11 11:05:40.462652 master-1 kubenswrapper[4771]: I1011 11:05:40.462588 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f984c5fd9-vj7wp" Oct 11 11:05:40.571503 master-2 kubenswrapper[4776]: I1011 11:05:40.570178 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:40.571503 master-2 kubenswrapper[4776]: I1011 11:05:40.570908 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" containerID="cri-o://a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" gracePeriod=10 Oct 11 11:05:41.047205 master-0 kubenswrapper[4790]: I1011 11:05:41.047122 4790 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-8a39-account-create-clqqg"] Oct 11 11:05:41.052181 master-0 kubenswrapper[4790]: I1011 11:05:41.052115 4790 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8a39-account-create-clqqg"] Oct 11 11:05:41.151253 master-2 kubenswrapper[4776]: I1011 11:05:41.151190 4776 generic.go:334] "Generic (PLEG): container finished" podID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerID="a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" exitCode=0 Oct 11 11:05:41.151253 master-2 kubenswrapper[4776]: I1011 11:05:41.151240 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7"} Oct 11 11:05:41.302995 master-2 kubenswrapper[4776]: I1011 11:05:41.302909 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:41.448526 master-2 kubenswrapper[4776]: I1011 11:05:41.448461 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448791 master-2 kubenswrapper[4776]: I1011 11:05:41.448606 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448791 master-2 kubenswrapper[4776]: I1011 11:05:41.448740 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448873 master-2 kubenswrapper[4776]: I1011 11:05:41.448802 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 master-2 kubenswrapper[4776]: I1011 11:05:41.448889 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 master-2 kubenswrapper[4776]: I1011 11:05:41.448934 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 master-2 kubenswrapper[4776]: I1011 11:05:41.448959 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.456154 master-2 kubenswrapper[4776]: I1011 11:05:41.456079 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j" (OuterVolumeSpecName: "kube-api-access-kt62j") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). 
InnerVolumeSpecName "kube-api-access-kt62j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:05:41.494464 master-2 kubenswrapper[4776]: I1011 11:05:41.494410 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.497240 master-2 kubenswrapper[4776]: I1011 11:05:41.497155 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config" (OuterVolumeSpecName: "config") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.498049 master-2 kubenswrapper[4776]: I1011 11:05:41.497992 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.503582 master-2 kubenswrapper[4776]: I1011 11:05:41.501097 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.512261 master-2 kubenswrapper[4776]: I1011 11:05:41.512216 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.518294 master-2 kubenswrapper[4776]: I1011 11:05:41.517648 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers" (OuterVolumeSpecName: "networkers") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "networkers". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551245 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551298 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551312 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551326 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551338 4776 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551349 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551360 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165296 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"47eafd73d1b09f5fdc1106e16dc957897cec09f84dde7bc47e0e3b86cb99cab7"} Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165358 4776 scope.go:117] "RemoveContainer" containerID="a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165380 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:42.197776 master-2 kubenswrapper[4776]: I1011 11:05:42.196458 4776 scope.go:117] "RemoveContainer" containerID="2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53" Oct 11 11:05:42.207166 master-2 kubenswrapper[4776]: I1011 11:05:42.207117 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:42.213423 master-2 kubenswrapper[4776]: I1011 11:05:42.213370 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:42.307211 master-0 kubenswrapper[4790]: I1011 11:05:42.307126 4790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37e1fe6-6e89-4407-a40f-cf494a35eccd" path="/var/lib/kubelet/pods/e37e1fe6-6e89-4407-a40f-cf494a35eccd/volumes" Oct 11 11:05:44.091702 master-2 kubenswrapper[4776]: I1011 11:05:44.091177 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" path="/var/lib/kubelet/pods/bdab7f7f-97fe-4393-859b-b71f68b588b4/volumes" Oct 11 11:05:45.456016 master-0 kubenswrapper[4790]: I1011 11:05:45.455821 4790 scope.go:117] "RemoveContainer" containerID="5b50296ba2efac22efde8aae60a1ee89c11a8ace1ff375049f5a9b2bda8f8fc0" Oct 11 11:05:45.477290 master-0 kubenswrapper[4790]: I1011 11:05:45.477203 4790 scope.go:117] "RemoveContainer" containerID="187ba140386fe1bdb57e2319039e5e08987bfe52015b52422eac9cff8a82d276" Oct 11 11:05:45.512438 master-0 kubenswrapper[4790]: I1011 11:05:45.512269 4790 scope.go:117] "RemoveContainer" containerID="ef89d0976a05facc749dfabb5416524787541999145463ce1f713dd9a9f315fb" Oct 11 11:05:45.554072 master-0 kubenswrapper[4790]: I1011 11:05:45.554009 4790 scope.go:117] "RemoveContainer" containerID="ad9c864509e03d2c97f9d070b630e91a99b7a68797b54f4da7ce040e5a112381" Oct 11 11:05:51.059149 master-1 kubenswrapper[4771]: I1011 11:05:51.059035 4771 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-a338-account-create-v4xzh"] Oct 11 11:05:51.072731 master-1 kubenswrapper[4771]: I1011 11:05:51.072636 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a338-account-create-v4xzh"] Oct 11 11:05:52.447191 master-1 kubenswrapper[4771]: I1011 11:05:52.447120 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac9af7f-afc6-4d4d-9923-db14ac820459" path="/var/lib/kubelet/pods/7ac9af7f-afc6-4d4d-9923-db14ac820459/volumes" Oct 11 11:05:52.796446 master-1 kubenswrapper[4771]: I1011 11:05:52.796251 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-dataplane-edpm-hsdgd"] Oct 11 11:05:52.796815 master-1 kubenswrapper[4771]: E1011 11:05:52.796779 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="init" Oct 11 11:05:52.796815 master-1 kubenswrapper[4771]: I1011 11:05:52.796806 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="init" Oct 11 11:05:52.797033 master-1 kubenswrapper[4771]: E1011 11:05:52.796829 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="dnsmasq-dns" Oct 11 11:05:52.797033 master-1 kubenswrapper[4771]: I1011 11:05:52.796844 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="dnsmasq-dns" Oct 11 11:05:52.797194 master-1 kubenswrapper[4771]: I1011 11:05:52.797116 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="460e6e8f-ccc4-4952-934c-1d3229573074" containerName="dnsmasq-dns" Oct 11 11:05:52.798094 master-1 kubenswrapper[4771]: I1011 11:05:52.798063 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.801874 master-1 kubenswrapper[4771]: I1011 11:05:52.801808 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:05:52.802526 master-1 kubenswrapper[4771]: I1011 11:05:52.802469 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:05:52.802844 master-1 kubenswrapper[4771]: I1011 11:05:52.802480 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:05:52.812208 master-1 kubenswrapper[4771]: I1011 11:05:52.812147 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-dataplane-edpm-hsdgd"] Oct 11 11:05:52.845478 master-1 kubenswrapper[4771]: I1011 11:05:52.845400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2frc\" (UniqueName: \"kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.845478 master-1 kubenswrapper[4771]: I1011 11:05:52.845473 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.845882 master-1 kubenswrapper[4771]: I1011 11:05:52.845567 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.845882 master-1 kubenswrapper[4771]: I1011 11:05:52.845606 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.948429 master-1 kubenswrapper[4771]: I1011 11:05:52.948369 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2frc\" (UniqueName: \"kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.948796 master-1 kubenswrapper[4771]: I1011 11:05:52.948442 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.948796 master-1 kubenswrapper[4771]: I1011 11:05:52.948514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.948796 master-1 kubenswrapper[4771]: I1011 11:05:52.948567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.952395 master-1 kubenswrapper[4771]: I1011 11:05:52.952330 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.958530 master-1 kubenswrapper[4771]: I1011 11:05:52.958423 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.958837 master-1 kubenswrapper[4771]: I1011 11:05:52.958642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:52.978730 master-1 kubenswrapper[4771]: I1011 11:05:52.978650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2frc\" (UniqueName: \"kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc\") pod \"bootstrap-dataplane-edpm-hsdgd\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:53.033727 master-2 kubenswrapper[4776]: I1011 11:05:53.033658 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 
11:05:53.043870 master-2 kubenswrapper[4776]: I1011 11:05:53.043745 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 11:05:53.156223 master-1 kubenswrapper[4771]: I1011 11:05:53.156129 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:05:53.786190 master-1 kubenswrapper[4771]: I1011 11:05:53.786099 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-dataplane-edpm-hsdgd"] Oct 11 11:05:53.787853 master-1 kubenswrapper[4771]: W1011 11:05:53.787799 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd3df1d_629a_4bbd_9ab4_7731e8928b01.slice/crio-a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6 WatchSource:0}: Error finding container a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6: Status 404 returned error can't find the container with id a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6 Oct 11 11:05:54.026014 master-1 kubenswrapper[4771]: I1011 11:05:54.025923 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j7glk"] Oct 11 11:05:54.031666 master-1 kubenswrapper[4771]: I1011 11:05:54.031565 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.044320 master-1 kubenswrapper[4771]: I1011 11:05:54.044222 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7glk"] Oct 11 11:05:54.070477 master-2 kubenswrapper[4776]: I1011 11:05:54.070412 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" path="/var/lib/kubelet/pods/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b/volumes" Oct 11 11:05:54.147502 master-1 kubenswrapper[4771]: I1011 11:05:54.147425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-dataplane-edpm-hsdgd" event={"ID":"7bd3df1d-629a-4bbd-9ab4-7731e8928b01","Type":"ContainerStarted","Data":"a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6"} Oct 11 11:05:54.177979 master-1 kubenswrapper[4771]: I1011 11:05:54.177929 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc8pb\" (UniqueName: \"kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.178129 master-1 kubenswrapper[4771]: I1011 11:05:54.177994 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.178129 master-1 kubenswrapper[4771]: I1011 11:05:54.178063 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.280309 master-1 kubenswrapper[4771]: I1011 11:05:54.280232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.280705 master-1 kubenswrapper[4771]: I1011 11:05:54.280470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc8pb\" (UniqueName: \"kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.280705 master-1 kubenswrapper[4771]: I1011 11:05:54.280538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.281533 master-1 kubenswrapper[4771]: I1011 11:05:54.281390 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.281801 master-1 kubenswrapper[4771]: I1011 11:05:54.281613 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.313347 master-1 kubenswrapper[4771]: I1011 11:05:54.313262 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc8pb\" (UniqueName: \"kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb\") pod \"community-operators-j7glk\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") " pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.368961 master-1 kubenswrapper[4771]: I1011 11:05:54.368402 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:05:54.885758 master-1 kubenswrapper[4771]: I1011 11:05:54.884930 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7glk"] Oct 11 11:05:55.158661 master-1 kubenswrapper[4771]: I1011 11:05:55.158612 4771 generic.go:334] "Generic (PLEG): container finished" podID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerID="28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699" exitCode=0 Oct 11 11:05:55.158784 master-1 kubenswrapper[4771]: I1011 11:05:55.158674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerDied","Data":"28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699"} Oct 11 11:05:55.158784 master-1 kubenswrapper[4771]: I1011 11:05:55.158709 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerStarted","Data":"39291eb0bb4dddc8b1a55028f1e708f4628e007e9d0a4908a7a947f1a79a7a0f"} Oct 11 
11:05:56.175965 master-1 kubenswrapper[4771]: I1011 11:05:56.175894 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerStarted","Data":"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"} Oct 11 11:05:57.188854 master-1 kubenswrapper[4771]: I1011 11:05:57.188749 4771 generic.go:334] "Generic (PLEG): container finished" podID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerID="45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7" exitCode=0 Oct 11 11:05:57.188854 master-1 kubenswrapper[4771]: I1011 11:05:57.188847 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerDied","Data":"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"} Oct 11 11:06:00.992540 master-2 kubenswrapper[4776]: I1011 11:06:00.992437 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993139 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993165 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993195 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993204 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: 
E1011 11:06:00.993245 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993254 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993277 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993285 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="init" Oct 11 11:06:00.993723 master-2 kubenswrapper[4776]: I1011 11:06:00.993531 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993723 master-2 kubenswrapper[4776]: I1011 11:06:00.993559 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.995050 master-2 kubenswrapper[4776]: I1011 11:06:00.994799 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:00.998024 master-2 kubenswrapper[4776]: I1011 11:06:00.997973 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:06:00.998300 master-2 kubenswrapper[4776]: I1011 11:06:00.998253 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:06:00.998458 master-2 kubenswrapper[4776]: I1011 11:06:00.998278 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:06:01.007494 master-2 kubenswrapper[4776]: I1011 11:06:01.007445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:01.068573 master-2 kubenswrapper[4776]: I1011 11:06:01.068482 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068573 master-2 kubenswrapper[4776]: I1011 11:06:01.068584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068991 master-2 kubenswrapper[4776]: I1011 11:06:01.068714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxlr\" (UniqueName: 
\"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068991 master-2 kubenswrapper[4776]: I1011 11:06:01.068821 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.170827 master-2 kubenswrapper[4776]: I1011 11:06:01.170775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171066 master-2 kubenswrapper[4776]: I1011 11:06:01.170884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171066 master-2 kubenswrapper[4776]: I1011 11:06:01.170981 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " 
pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171185 master-2 kubenswrapper[4776]: I1011 11:06:01.171167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.175027 master-2 kubenswrapper[4776]: I1011 11:06:01.175009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.175392 master-2 kubenswrapper[4776]: I1011 11:06:01.175365 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.190634 master-2 kubenswrapper[4776]: I1011 11:06:01.190450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.194489 master-2 kubenswrapper[4776]: I1011 11:06:01.194457 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxlr\" (UniqueName: 
\"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.326285 master-2 kubenswrapper[4776]: I1011 11:06:01.326221 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.948394 master-2 kubenswrapper[4776]: I1011 11:06:01.948250 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:01.952611 master-2 kubenswrapper[4776]: W1011 11:06:01.952541 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0d073f2_1387_41f9_9e3d_71e1057293f9.slice/crio-edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8 WatchSource:0}: Error finding container edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8: Status 404 returned error can't find the container with id edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8 Oct 11 11:06:02.386170 master-2 kubenswrapper[4776]: I1011 11:06:02.386095 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerStarted","Data":"edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8"} Oct 11 11:06:04.260074 master-1 kubenswrapper[4771]: I1011 11:06:04.259877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerStarted","Data":"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"} Oct 11 11:06:04.262555 master-1 kubenswrapper[4771]: I1011 11:06:04.262437 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/bootstrap-dataplane-edpm-hsdgd" event={"ID":"7bd3df1d-629a-4bbd-9ab4-7731e8928b01","Type":"ContainerStarted","Data":"b6fe2bafc6abee3dc8fb39a96bf66118ada768aa6b65ffb7178ecf81e4862ef0"} Oct 11 11:06:04.316072 master-1 kubenswrapper[4771]: I1011 11:06:04.315944 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j7glk" podStartSLOduration=2.769382736 podStartE2EDuration="11.315921513s" podCreationTimestamp="2025-10-11 11:05:53 +0000 UTC" firstStartedPulling="2025-10-11 11:05:55.160746598 +0000 UTC m=+2387.134973039" lastFinishedPulling="2025-10-11 11:06:03.707285375 +0000 UTC m=+2395.681511816" observedRunningTime="2025-10-11 11:06:04.307960806 +0000 UTC m=+2396.282187277" watchObservedRunningTime="2025-10-11 11:06:04.315921513 +0000 UTC m=+2396.290147964" Oct 11 11:06:04.353320 master-1 kubenswrapper[4771]: I1011 11:06:04.353205 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-dataplane-edpm-hsdgd" podStartSLOduration=2.412388001 podStartE2EDuration="12.353182739s" podCreationTimestamp="2025-10-11 11:05:52 +0000 UTC" firstStartedPulling="2025-10-11 11:05:53.792009667 +0000 UTC m=+2385.766236108" lastFinishedPulling="2025-10-11 11:06:03.732804405 +0000 UTC m=+2395.707030846" observedRunningTime="2025-10-11 11:06:04.347511577 +0000 UTC m=+2396.321738048" watchObservedRunningTime="2025-10-11 11:06:04.353182739 +0000 UTC m=+2396.327409190" Oct 11 11:06:04.369533 master-1 kubenswrapper[4771]: I1011 11:06:04.369462 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:06:04.369671 master-1 kubenswrapper[4771]: I1011 11:06:04.369547 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:06:05.416761 master-1 kubenswrapper[4771]: I1011 11:06:05.416670 4771 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j7glk" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="registry-server" probeResult="failure" output=< Oct 11 11:06:05.416761 master-1 kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Oct 11 11:06:05.416761 master-1 kubenswrapper[4771]: > Oct 11 11:06:11.075798 master-1 kubenswrapper[4771]: I1011 11:06:11.075616 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-sz8dm"] Oct 11 11:06:11.092300 master-1 kubenswrapper[4771]: I1011 11:06:11.092194 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-sz8dm"] Oct 11 11:06:11.122964 master-2 kubenswrapper[4776]: I1011 11:06:11.122901 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:06:11.486404 master-2 kubenswrapper[4776]: I1011 11:06:11.484899 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerStarted","Data":"b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f"} Oct 11 11:06:11.521756 master-2 kubenswrapper[4776]: I1011 11:06:11.521284 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" podStartSLOduration=2.3576906429999998 podStartE2EDuration="11.521267918s" podCreationTimestamp="2025-10-11 11:06:00 +0000 UTC" firstStartedPulling="2025-10-11 11:06:01.95640258 +0000 UTC m=+2396.740829289" lastFinishedPulling="2025-10-11 11:06:11.119979855 +0000 UTC m=+2405.904406564" observedRunningTime="2025-10-11 11:06:11.515404438 +0000 UTC m=+2406.299831147" watchObservedRunningTime="2025-10-11 11:06:11.521267918 +0000 UTC m=+2406.305694627" Oct 11 11:06:12.455799 master-1 kubenswrapper[4771]: I1011 11:06:12.455701 4771 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa024267-404c-497a-a798-3a371608b678" path="/var/lib/kubelet/pods/fa024267-404c-497a-a798-3a371608b678/volumes" Oct 11 11:06:14.447038 master-1 kubenswrapper[4771]: I1011 11:06:14.446972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:06:14.516848 master-1 kubenswrapper[4771]: I1011 11:06:14.516772 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j7glk" Oct 11 11:06:14.711937 master-1 kubenswrapper[4771]: I1011 11:06:14.711794 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7glk"] Oct 11 11:06:16.417062 master-1 kubenswrapper[4771]: I1011 11:06:16.416943 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j7glk" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="registry-server" containerID="cri-o://a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d" gracePeriod=2 Oct 11 11:06:17.031147 master-1 kubenswrapper[4771]: I1011 11:06:17.031068 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7glk"
Oct 11 11:06:17.080504 master-1 kubenswrapper[4771]: I1011 11:06:17.076891 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc8pb\" (UniqueName: \"kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb\") pod \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") "
Oct 11 11:06:17.080504 master-1 kubenswrapper[4771]: I1011 11:06:17.077114 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities\") pod \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") "
Oct 11 11:06:17.080504 master-1 kubenswrapper[4771]: I1011 11:06:17.077179 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content\") pod \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\" (UID: \"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2\") "
Oct 11 11:06:17.080504 master-1 kubenswrapper[4771]: I1011 11:06:17.078939 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities" (OuterVolumeSpecName: "utilities") pod "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" (UID: "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:06:17.082946 master-1 kubenswrapper[4771]: I1011 11:06:17.082837 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb" (OuterVolumeSpecName: "kube-api-access-nc8pb") pod "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" (UID: "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2"). InnerVolumeSpecName "kube-api-access-nc8pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:06:17.139768 master-1 kubenswrapper[4771]: I1011 11:06:17.139671 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" (UID: "f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:06:17.182522 master-1 kubenswrapper[4771]: I1011 11:06:17.182440 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc8pb\" (UniqueName: \"kubernetes.io/projected/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-kube-api-access-nc8pb\") on node \"master-1\" DevicePath \"\""
Oct 11 11:06:17.182522 master-1 kubenswrapper[4771]: I1011 11:06:17.182514 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-utilities\") on node \"master-1\" DevicePath \"\""
Oct 11 11:06:17.182824 master-1 kubenswrapper[4771]: I1011 11:06:17.182536 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2-catalog-content\") on node \"master-1\" DevicePath \"\""
Oct 11 11:06:17.431648 master-1 kubenswrapper[4771]: I1011 11:06:17.431576 4771 generic.go:334] "Generic (PLEG): container finished" podID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerID="a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d" exitCode=0
Oct 11 11:06:17.431648 master-1 kubenswrapper[4771]: I1011 11:06:17.431647 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerDied","Data":"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"}
Oct 11 11:06:17.432304 master-1 kubenswrapper[4771]: I1011 11:06:17.431684 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7glk"
Oct 11 11:06:17.432304 master-1 kubenswrapper[4771]: I1011 11:06:17.431729 4771 scope.go:117] "RemoveContainer" containerID="a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"
Oct 11 11:06:17.432304 master-1 kubenswrapper[4771]: I1011 11:06:17.431712 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7glk" event={"ID":"f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2","Type":"ContainerDied","Data":"39291eb0bb4dddc8b1a55028f1e708f4628e007e9d0a4908a7a947f1a79a7a0f"}
Oct 11 11:06:17.457043 master-1 kubenswrapper[4771]: I1011 11:06:17.454724 4771 scope.go:117] "RemoveContainer" containerID="45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"
Oct 11 11:06:17.486713 master-1 kubenswrapper[4771]: I1011 11:06:17.486653 4771 scope.go:117] "RemoveContainer" containerID="28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699"
Oct 11 11:06:17.503179 master-1 kubenswrapper[4771]: I1011 11:06:17.503113 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7glk"]
Oct 11 11:06:17.516295 master-1 kubenswrapper[4771]: I1011 11:06:17.516229 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j7glk"]
Oct 11 11:06:17.518880 master-1 kubenswrapper[4771]: I1011 11:06:17.518834 4771 scope.go:117] "RemoveContainer" containerID="a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"
Oct 11 11:06:17.519607 master-1 kubenswrapper[4771]: E1011 11:06:17.519555 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d\": container with ID starting with a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d not found: ID does not exist" containerID="a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"
Oct 11 11:06:17.519675 master-1 kubenswrapper[4771]: I1011 11:06:17.519619 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d"} err="failed to get container status \"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d\": rpc error: code = NotFound desc = could not find container \"a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d\": container with ID starting with a7251ba4d7d22653d6ccb9ee7c097e5d20e42d6656045bba5df4b68d8f18c09d not found: ID does not exist"
Oct 11 11:06:17.519675 master-1 kubenswrapper[4771]: I1011 11:06:17.519649 4771 scope.go:117] "RemoveContainer" containerID="45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"
Oct 11 11:06:17.520175 master-1 kubenswrapper[4771]: E1011 11:06:17.520118 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7\": container with ID starting with 45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7 not found: ID does not exist" containerID="45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"
Oct 11 11:06:17.520260 master-1 kubenswrapper[4771]: I1011 11:06:17.520182 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7"} err="failed to get container status \"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7\": rpc error: code = NotFound desc = could not find container \"45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7\": container with ID starting with 45ae6a7716efdd3aea461d5292cdae5828f0ecbb445a641e81b3a2c67ebda8e7 not found: ID does not exist"
Oct 11 11:06:17.520260 master-1 kubenswrapper[4771]: I1011 11:06:17.520215 4771 scope.go:117] "RemoveContainer" containerID="28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699"
Oct 11 11:06:17.520703 master-1 kubenswrapper[4771]: E1011 11:06:17.520645 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699\": container with ID starting with 28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699 not found: ID does not exist" containerID="28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699"
Oct 11 11:06:17.520807 master-1 kubenswrapper[4771]: I1011 11:06:17.520689 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699"} err="failed to get container status \"28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699\": rpc error: code = NotFound desc = could not find container \"28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699\": container with ID starting with 28f51367dd6d11abe17cd09270952a1c6d945d37acbc8713d9275ca61fa60699 not found: ID does not exist"
Oct 11 11:06:18.166218 master-2 kubenswrapper[4776]: I1011 11:06:18.166155 4776 scope.go:117] "RemoveContainer" containerID="b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369"
Oct 11 11:06:18.449903 master-1 kubenswrapper[4771]: I1011 11:06:18.449800 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" path="/var/lib/kubelet/pods/f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2/volumes"
Oct 11 11:06:20.062891 master-1 kubenswrapper[4771]: I1011 11:06:20.061909 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chcrd"]
Oct 11 11:06:20.072004 master-1 kubenswrapper[4771]: I1011 11:06:20.071939 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chcrd"]
Oct 11 11:06:20.448026 master-1 kubenswrapper[4771]: I1011 11:06:20.447912 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38267a66-0ebd-44ab-bc7f-cd5703503b74" path="/var/lib/kubelet/pods/38267a66-0ebd-44ab-bc7f-cd5703503b74/volumes"
Oct 11 11:06:23.949212 master-1 kubenswrapper[4771]: I1011 11:06:23.949123 4771 scope.go:117] "RemoveContainer" containerID="558ef049f7336b936f032c4d3e3115131e36703eb572e93323b57a5fd484ff9e"
Oct 11 11:06:23.988856 master-1 kubenswrapper[4771]: I1011 11:06:23.988509 4771 scope.go:117] "RemoveContainer" containerID="e1ee0992af169f3773493c300780fafe6521ac72bd4a220402d3338c4c92c6fb"
Oct 11 11:06:24.027167 master-1 kubenswrapper[4771]: I1011 11:06:24.026879 4771 scope.go:117] "RemoveContainer" containerID="5d17ff04cecf6e6d74e4dc9eda892c61efec1bfa4b25f713f94a437e54e6aeed"
Oct 11 11:06:24.062035 master-1 kubenswrapper[4771]: I1011 11:06:24.061939 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1a24-account-create-pb6gd"]
Oct 11 11:06:24.073812 master-1 kubenswrapper[4771]: I1011 11:06:24.073735 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1a24-account-create-pb6gd"]
Oct 11 11:06:24.456215 master-1 kubenswrapper[4771]: I1011 11:06:24.455978 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b670525b-9ca9-419c-858b-6bb2a2303cf6" path="/var/lib/kubelet/pods/b670525b-9ca9-419c-858b-6bb2a2303cf6/volumes"
Oct 11 11:06:43.079324 master-1 kubenswrapper[4771]: I1011 11:06:43.079189 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-tn8xz"]
Oct 11 11:06:43.096025 master-1 kubenswrapper[4771]: I1011 11:06:43.095931 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-tn8xz"]
Oct 11 11:06:44.050680 master-1 kubenswrapper[4771]: I1011 11:06:44.050614 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2kt7k"]
Oct 11 11:06:44.057775 master-1 kubenswrapper[4771]: I1011 11:06:44.057733 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2kt7k"]
Oct 11 11:06:44.457099 master-1 kubenswrapper[4771]: I1011 11:06:44.457029 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de492fb-5249-49e2-a327-756234aa92bd" path="/var/lib/kubelet/pods/3de492fb-5249-49e2-a327-756234aa92bd/volumes"
Oct 11 11:06:44.458049 master-1 kubenswrapper[4771]: I1011 11:06:44.458020 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d5cfa-8073-4bbf-9eff-78fde719dadf" path="/var/lib/kubelet/pods/f85d5cfa-8073-4bbf-9eff-78fde719dadf/volumes"
Oct 11 11:06:46.051268 master-1 kubenswrapper[4771]: I1011 11:06:46.051200 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tgg64"]
Oct 11 11:06:46.066025 master-1 kubenswrapper[4771]: I1011 11:06:46.065923 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tgg64"]
Oct 11 11:06:46.456385 master-1 kubenswrapper[4771]: I1011 11:06:46.456260 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5e37e3-9afd-4ff3-b992-1e6c28a986ad" path="/var/lib/kubelet/pods/5b5e37e3-9afd-4ff3-b992-1e6c28a986ad/volumes"
Oct 11 11:06:58.997608 master-1 kubenswrapper[4771]: I1011 11:06:58.997515 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: E1011 11:06:58.998034 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="extract-content"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: I1011 11:06:58.998060 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="extract-content"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: E1011 11:06:58.998086 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="registry-server"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: I1011 11:06:58.998099 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="registry-server"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: E1011 11:06:58.998152 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="extract-utilities"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: I1011 11:06:58.998166 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="extract-utilities"
Oct 11 11:06:58.998659 master-1 kubenswrapper[4771]: I1011 11:06:58.998470 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b4e7d7-c57b-4b03-b229-91c6ceb0cda2" containerName="registry-server"
Oct 11 11:06:59.000922 master-1 kubenswrapper[4771]: I1011 11:06:59.000863 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.021765 master-1 kubenswrapper[4771]: I1011 11:06:59.021680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:06:59.159806 master-1 kubenswrapper[4771]: I1011 11:06:59.159685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.160056 master-1 kubenswrapper[4771]: I1011 11:06:59.159923 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xks\" (UniqueName: \"kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.160117 master-1 kubenswrapper[4771]: I1011 11:06:59.160095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.275616 master-1 kubenswrapper[4771]: I1011 11:06:59.274441 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.275616 master-1 kubenswrapper[4771]: I1011 11:06:59.274746 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xks\" (UniqueName: \"kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.275616 master-1 kubenswrapper[4771]: I1011 11:06:59.275026 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.275616 master-1 kubenswrapper[4771]: I1011 11:06:59.275072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.275616 master-1 kubenswrapper[4771]: I1011 11:06:59.275373 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.298291 master-1 kubenswrapper[4771]: I1011 11:06:59.298238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xks\" (UniqueName: \"kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks\") pod \"redhat-operators-ml7zj\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") " pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.336164 master-1 kubenswrapper[4771]: I1011 11:06:59.336101 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:06:59.911410 master-1 kubenswrapper[4771]: I1011 11:06:59.911338 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:06:59.925108 master-1 kubenswrapper[4771]: I1011 11:06:59.925055 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerStarted","Data":"ef3300826966f8ec98a66b7079eaad081f80ea7c8396e36a817549fc64d17745"}
Oct 11 11:07:00.936022 master-1 kubenswrapper[4771]: I1011 11:07:00.935937 4771 generic.go:334] "Generic (PLEG): container finished" podID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerID="7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820" exitCode=0
Oct 11 11:07:00.936022 master-1 kubenswrapper[4771]: I1011 11:07:00.936017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerDied","Data":"7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820"}
Oct 11 11:07:02.967163 master-1 kubenswrapper[4771]: I1011 11:07:02.967048 4771 generic.go:334] "Generic (PLEG): container finished" podID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerID="5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d" exitCode=0
Oct 11 11:07:02.967163 master-1 kubenswrapper[4771]: I1011 11:07:02.967149 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerDied","Data":"5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d"}
Oct 11 11:07:03.981561 master-1 kubenswrapper[4771]: I1011 11:07:03.981389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerStarted","Data":"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"}
Oct 11 11:07:04.046476 master-1 kubenswrapper[4771]: I1011 11:07:04.046328 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ml7zj" podStartSLOduration=3.616338558 podStartE2EDuration="6.046298126s" podCreationTimestamp="2025-10-11 11:06:58 +0000 UTC" firstStartedPulling="2025-10-11 11:07:00.938800639 +0000 UTC m=+2452.913027100" lastFinishedPulling="2025-10-11 11:07:03.368760227 +0000 UTC m=+2455.342986668" observedRunningTime="2025-10-11 11:07:04.036207657 +0000 UTC m=+2456.010434188" watchObservedRunningTime="2025-10-11 11:07:04.046298126 +0000 UTC m=+2456.020524607"
Oct 11 11:07:09.337088 master-1 kubenswrapper[4771]: I1011 11:07:09.337017 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:09.338505 master-1 kubenswrapper[4771]: I1011 11:07:09.337410 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:10.406552 master-1 kubenswrapper[4771]: I1011 11:07:10.406452 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ml7zj" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="registry-server" probeResult="failure" output=<
Oct 11 11:07:10.406552 master-1 kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Oct 11 11:07:10.406552 master-1 kubenswrapper[4771]: >
Oct 11 11:07:19.422745 master-1 kubenswrapper[4771]: I1011 11:07:19.422634 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:19.509394 master-1 kubenswrapper[4771]: I1011 11:07:19.509274 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:19.696575 master-1 kubenswrapper[4771]: I1011 11:07:19.696332 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:07:21.190478 master-1 kubenswrapper[4771]: I1011 11:07:21.190295 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ml7zj" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="registry-server" containerID="cri-o://0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947" gracePeriod=2
Oct 11 11:07:21.842681 master-1 kubenswrapper[4771]: I1011 11:07:21.842625 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:22.039638 master-1 kubenswrapper[4771]: I1011 11:07:22.039476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities\") pod \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") "
Oct 11 11:07:22.039638 master-1 kubenswrapper[4771]: I1011 11:07:22.039579 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9xks\" (UniqueName: \"kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks\") pod \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") "
Oct 11 11:07:22.039973 master-1 kubenswrapper[4771]: I1011 11:07:22.039659 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content\") pod \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\" (UID: \"975d8dbf-78fe-4498-ba3c-c77b71e3d13c\") "
Oct 11 11:07:22.041068 master-1 kubenswrapper[4771]: I1011 11:07:22.040997 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities" (OuterVolumeSpecName: "utilities") pod "975d8dbf-78fe-4498-ba3c-c77b71e3d13c" (UID: "975d8dbf-78fe-4498-ba3c-c77b71e3d13c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:07:22.056892 master-1 kubenswrapper[4771]: I1011 11:07:22.056839 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks" (OuterVolumeSpecName: "kube-api-access-b9xks") pod "975d8dbf-78fe-4498-ba3c-c77b71e3d13c" (UID: "975d8dbf-78fe-4498-ba3c-c77b71e3d13c"). InnerVolumeSpecName "kube-api-access-b9xks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:07:22.142063 master-1 kubenswrapper[4771]: I1011 11:07:22.142008 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-utilities\") on node \"master-1\" DevicePath \"\""
Oct 11 11:07:22.142063 master-1 kubenswrapper[4771]: I1011 11:07:22.142051 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9xks\" (UniqueName: \"kubernetes.io/projected/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-kube-api-access-b9xks\") on node \"master-1\" DevicePath \"\""
Oct 11 11:07:22.176264 master-1 kubenswrapper[4771]: I1011 11:07:22.176199 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "975d8dbf-78fe-4498-ba3c-c77b71e3d13c" (UID: "975d8dbf-78fe-4498-ba3c-c77b71e3d13c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:07:22.204146 master-1 kubenswrapper[4771]: I1011 11:07:22.204082 4771 generic.go:334] "Generic (PLEG): container finished" podID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerID="0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947" exitCode=0
Oct 11 11:07:22.204146 master-1 kubenswrapper[4771]: I1011 11:07:22.204136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerDied","Data":"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"}
Oct 11 11:07:22.204619 master-1 kubenswrapper[4771]: I1011 11:07:22.204178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml7zj" event={"ID":"975d8dbf-78fe-4498-ba3c-c77b71e3d13c","Type":"ContainerDied","Data":"ef3300826966f8ec98a66b7079eaad081f80ea7c8396e36a817549fc64d17745"}
Oct 11 11:07:22.204619 master-1 kubenswrapper[4771]: I1011 11:07:22.204202 4771 scope.go:117] "RemoveContainer" containerID="0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"
Oct 11 11:07:22.204619 master-1 kubenswrapper[4771]: I1011 11:07:22.204213 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml7zj"
Oct 11 11:07:22.228625 master-1 kubenswrapper[4771]: I1011 11:07:22.228572 4771 scope.go:117] "RemoveContainer" containerID="5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d"
Oct 11 11:07:22.245555 master-1 kubenswrapper[4771]: I1011 11:07:22.244908 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975d8dbf-78fe-4498-ba3c-c77b71e3d13c-catalog-content\") on node \"master-1\" DevicePath \"\""
Oct 11 11:07:22.253603 master-1 kubenswrapper[4771]: I1011 11:07:22.253419 4771 scope.go:117] "RemoveContainer" containerID="7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820"
Oct 11 11:07:22.260200 master-1 kubenswrapper[4771]: I1011 11:07:22.260126 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:07:22.269953 master-1 kubenswrapper[4771]: I1011 11:07:22.268500 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ml7zj"]
Oct 11 11:07:22.319345 master-1 kubenswrapper[4771]: I1011 11:07:22.319275 4771 scope.go:117] "RemoveContainer" containerID="0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"
Oct 11 11:07:22.320145 master-1 kubenswrapper[4771]: E1011 11:07:22.319992 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947\": container with ID starting with 0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947 not found: ID does not exist" containerID="0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"
Oct 11 11:07:22.320145 master-1 kubenswrapper[4771]: I1011 11:07:22.320051 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947"} err="failed to get container status \"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947\": rpc error: code = NotFound desc = could not find container \"0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947\": container with ID starting with 0fabae86416d42993c56f8001eca8e0c83d909a12fa2532f9d5f6f9861d1e947 not found: ID does not exist"
Oct 11 11:07:22.320145 master-1 kubenswrapper[4771]: I1011 11:07:22.320087 4771 scope.go:117] "RemoveContainer" containerID="5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d"
Oct 11 11:07:22.320687 master-1 kubenswrapper[4771]: E1011 11:07:22.320639 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d\": container with ID starting with 5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d not found: ID does not exist" containerID="5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d"
Oct 11 11:07:22.320744 master-1 kubenswrapper[4771]: I1011 11:07:22.320694 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d"} err="failed to get container status \"5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d\": rpc error: code = NotFound desc = could not find container \"5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d\": container with ID starting with 5729cff9675c3fe08ce7639f6deb94291b153462aad880a76ad3b64a18541b6d not found: ID does not exist"
Oct 11 11:07:22.320744 master-1 kubenswrapper[4771]: I1011 11:07:22.320730 4771 scope.go:117] "RemoveContainer" containerID="7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820"
Oct 11 11:07:22.321260 master-1 kubenswrapper[4771]: E1011 11:07:22.321194 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820\": container with ID starting with 7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820 not found: ID does not exist" containerID="7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820"
Oct 11 11:07:22.321314 master-1 kubenswrapper[4771]: I1011 11:07:22.321273 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820"} err="failed to get container status \"7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820\": rpc error: code = NotFound desc = could not find container \"7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820\": container with ID starting with 7f9d75f8079234d810d11902757682947dc5fb16ad7718bd4f50ddbf03868820 not found: ID does not exist"
Oct 11 11:07:22.449318 master-1 kubenswrapper[4771]: I1011 11:07:22.449205 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" path="/var/lib/kubelet/pods/975d8dbf-78fe-4498-ba3c-c77b71e3d13c/volumes"
Oct 11 11:07:24.218382 master-1 kubenswrapper[4771]: I1011 11:07:24.218269 4771 scope.go:117] "RemoveContainer" containerID="34c012fefebf03c137c3d264726e9a32c974159496d4bf0d0a4dad6dcdf4c655"
Oct 11 11:07:24.253730 master-1 kubenswrapper[4771]: I1011 11:07:24.252795 4771 scope.go:117] "RemoveContainer" containerID="03e80f5bdb6844a3112427ed3612b145765c86f689c582771359401e14c9758e"
Oct 11 11:07:24.355016 master-1 kubenswrapper[4771]: I1011 11:07:24.354940 4771 scope.go:117] "RemoveContainer" containerID="38fe7e740cc7430b1900679565564cc35f6e1964bf7c4a238c960c0377445331"
Oct 11 11:07:24.388737 master-1 kubenswrapper[4771]: I1011 11:07:24.388664 4771 scope.go:117] "RemoveContainer" containerID="af56a7e4623de207ef8289e7bba0d65eef5da9d57f459e288f321109c3a8e4f3"
Oct 11 11:07:25.074069 master-1 kubenswrapper[4771]: I1011 11:07:25.073974 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-5f556"]
Oct 11 11:07:25.080020 master-1 kubenswrapper[4771]: I1011 11:07:25.079930 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-5f556"]
Oct 11 11:07:26.452550 master-1 kubenswrapper[4771]: I1011 11:07:26.452474 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae49cc63-d351-440f-9334-4ef2550565a2" path="/var/lib/kubelet/pods/ae49cc63-d351-440f-9334-4ef2550565a2/volumes"
Oct 11 11:07:27.063532 master-1 kubenswrapper[4771]: I1011 11:07:27.063439 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bwgtz"]
Oct 11 11:07:27.073745 master-1 kubenswrapper[4771]: I1011 11:07:27.073656 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bwgtz"]
Oct 11 11:07:28.451640 master-1 kubenswrapper[4771]: I1011 11:07:28.451583 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709c362a-6ace-46bf-9f94-86852f78f6f2" path="/var/lib/kubelet/pods/709c362a-6ace-46bf-9f94-86852f78f6f2/volumes"
Oct 11 11:07:28.706344 master-2 kubenswrapper[4776]: I1011 11:07:28.705891 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkxll"]
Oct 11 11:07:28.710441 master-2 kubenswrapper[4776]: I1011 11:07:28.708643 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.721466 master-2 kubenswrapper[4776]: I1011 11:07:28.721389 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"]
Oct 11 11:07:28.787203 master-2 kubenswrapper[4776]: I1011 11:07:28.787138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.787468 master-2 kubenswrapper[4776]: I1011 11:07:28.787408 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.787649 master-2 kubenswrapper[4776]: I1011 11:07:28.787617 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.889818 master-2 kubenswrapper[4776]: I1011 11:07:28.889450 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.890087 master-2 kubenswrapper[4776]: I1011 11:07:28.889836 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.890087 master-2 kubenswrapper[4776]: I1011 11:07:28.889943 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.890202 master-2 kubenswrapper[4776]: I1011 11:07:28.890163 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.890467 master-2 kubenswrapper[4776]: I1011 11:07:28.890438 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:28.914755 master-2 kubenswrapper[4776]: I1011 11:07:28.914695 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:29.039367 master-2 kubenswrapper[4776]: I1011 11:07:29.039191 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll"
Oct 11 11:07:29.501853 master-2 kubenswrapper[4776]: I1011 11:07:29.501784 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"]
Oct 11 11:07:29.505986 master-2 kubenswrapper[4776]: W1011 11:07:29.505922 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3740bfc_2abd_4b82_897e_ce53c4fa4324.slice/crio-5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327 WatchSource:0}: Error finding container 5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327: Status 404 returned error can't find the container with id 5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327
Oct 11 11:07:30.263704 master-2 kubenswrapper[4776]: I1011 11:07:30.263618 4776 generic.go:334] "Generic (PLEG): container finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa" exitCode=0
Oct 11 11:07:30.264299 master-2 kubenswrapper[4776]: I1011 11:07:30.263739 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa"}
Oct 11 11:07:30.264299 master-2 kubenswrapper[4776]: I1011 11:07:30.263868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerStarted","Data":"5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327"}
Oct 11 11:07:31.276055 master-2 kubenswrapper[4776]: I1011 11:07:31.275975 4776 generic.go:334] "Generic (PLEG): container
finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416" exitCode=0 Oct 11 11:07:31.276055 master-2 kubenswrapper[4776]: I1011 11:07:31.276051 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416"} Oct 11 11:07:32.287063 master-2 kubenswrapper[4776]: I1011 11:07:32.286986 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerStarted","Data":"77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917"} Oct 11 11:07:32.324710 master-2 kubenswrapper[4776]: I1011 11:07:32.324551 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkxll" podStartSLOduration=2.90910971 podStartE2EDuration="4.324532942s" podCreationTimestamp="2025-10-11 11:07:28 +0000 UTC" firstStartedPulling="2025-10-11 11:07:30.268080989 +0000 UTC m=+2485.052507708" lastFinishedPulling="2025-10-11 11:07:31.683504221 +0000 UTC m=+2486.467930940" observedRunningTime="2025-10-11 11:07:32.316186126 +0000 UTC m=+2487.100612845" watchObservedRunningTime="2025-10-11 11:07:32.324532942 +0000 UTC m=+2487.108959651" Oct 11 11:07:39.039520 master-2 kubenswrapper[4776]: I1011 11:07:39.039387 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.040451 master-2 kubenswrapper[4776]: I1011 11:07:39.040430 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.100217 master-2 kubenswrapper[4776]: I1011 11:07:39.100162 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.390078 master-2 kubenswrapper[4776]: I1011 11:07:39.389968 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.471698 master-2 kubenswrapper[4776]: I1011 11:07:39.471590 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:41.365597 master-2 kubenswrapper[4776]: I1011 11:07:41.365519 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lkxll" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" containerID="cri-o://77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" gracePeriod=2 Oct 11 11:07:42.385798 master-2 kubenswrapper[4776]: I1011 11:07:42.385735 4776 generic.go:334] "Generic (PLEG): container finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" exitCode=0 Oct 11 11:07:42.385798 master-2 kubenswrapper[4776]: I1011 11:07:42.385784 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917"} Oct 11 11:07:42.653149 master-2 kubenswrapper[4776]: I1011 11:07:42.653091 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:42.757630 master-2 kubenswrapper[4776]: I1011 11:07:42.757580 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.757984 master-2 kubenswrapper[4776]: I1011 11:07:42.757964 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.758312 master-2 kubenswrapper[4776]: I1011 11:07:42.758291 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.759822 master-2 kubenswrapper[4776]: I1011 11:07:42.759768 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities" (OuterVolumeSpecName: "utilities") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:07:42.763721 master-2 kubenswrapper[4776]: I1011 11:07:42.763694 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc" (OuterVolumeSpecName: "kube-api-access-jqjbc") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). 
InnerVolumeSpecName "kube-api-access-jqjbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:07:42.800868 master-2 kubenswrapper[4776]: I1011 11:07:42.800815 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859841 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859880 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859889 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:43.422723 master-2 kubenswrapper[4776]: I1011 11:07:43.422628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327"} Oct 11 11:07:43.423442 master-2 kubenswrapper[4776]: I1011 11:07:43.422788 4776 scope.go:117] "RemoveContainer" containerID="77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" Oct 11 11:07:43.423442 master-2 kubenswrapper[4776]: 
I1011 11:07:43.422828 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:43.455993 master-2 kubenswrapper[4776]: I1011 11:07:43.455949 4776 scope.go:117] "RemoveContainer" containerID="9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416" Oct 11 11:07:43.474559 master-2 kubenswrapper[4776]: I1011 11:07:43.474494 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:43.480700 master-2 kubenswrapper[4776]: I1011 11:07:43.480639 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:43.489536 master-2 kubenswrapper[4776]: I1011 11:07:43.489489 4776 scope.go:117] "RemoveContainer" containerID="771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa" Oct 11 11:07:44.070812 master-2 kubenswrapper[4776]: I1011 11:07:44.070767 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" path="/var/lib/kubelet/pods/d3740bfc-2abd-4b82-897e-ce53c4fa4324/volumes" Oct 11 11:08:24.595133 master-1 kubenswrapper[4771]: I1011 11:08:24.595055 4771 scope.go:117] "RemoveContainer" containerID="99d58d9d6b8b62fa18ae8ba7508466dad2a9761e505b9274423ecba095a9de64" Oct 11 11:08:24.658949 master-1 kubenswrapper[4771]: I1011 11:08:24.658890 4771 scope.go:117] "RemoveContainer" containerID="48e35ef26a01bac7444e96fa2a9fa3fe07bd9eb6b20913ec8c1c945288cc11bc" Oct 11 11:10:04.786097 master-2 kubenswrapper[4776]: I1011 11:10:04.786017 4776 generic.go:334] "Generic (PLEG): container finished" podID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerID="b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f" exitCode=0 Oct 11 11:10:04.786097 master-2 kubenswrapper[4776]: I1011 11:10:04.786094 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerDied","Data":"b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f"} Oct 11 11:10:06.400377 master-2 kubenswrapper[4776]: I1011 11:10:06.400322 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464454 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464569 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464868 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.465244 master-2 kubenswrapper[4776]: I1011 11:10:06.464995 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.468779 master-2 kubenswrapper[4776]: I1011 11:10:06.467865 4776 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.468779 master-2 kubenswrapper[4776]: I1011 11:10:06.468128 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr" (OuterVolumeSpecName: "kube-api-access-4fxlr") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "kube-api-access-4fxlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:10:06.487362 master-2 kubenswrapper[4776]: I1011 11:10:06.487292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.493921 master-2 kubenswrapper[4776]: I1011 11:10:06.493880 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory" (OuterVolumeSpecName: "inventory") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.568915 master-2 kubenswrapper[4776]: I1011 11:10:06.568844 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.568915 master-2 kubenswrapper[4776]: I1011 11:10:06.568920 4776 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.569229 master-2 kubenswrapper[4776]: I1011 11:10:06.568934 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.569229 master-2 kubenswrapper[4776]: I1011 11:10:06.568949 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.827260 master-2 kubenswrapper[4776]: I1011 11:10:06.827187 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerDied","Data":"edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8"} Oct 11 11:10:06.827260 master-2 kubenswrapper[4776]: I1011 11:10:06.827245 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8" Oct 11 11:10:06.827551 master-2 kubenswrapper[4776]: I1011 11:10:06.827284 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:10:06.962971 master-1 kubenswrapper[4771]: I1011 11:10:06.962053 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-networker-deploy-networkers-vkj22"] Oct 11 11:10:06.965369 master-1 kubenswrapper[4771]: E1011 11:10:06.965318 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="extract-content" Oct 11 11:10:06.965443 master-1 kubenswrapper[4771]: I1011 11:10:06.965412 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="extract-content" Oct 11 11:10:06.965481 master-1 kubenswrapper[4771]: E1011 11:10:06.965469 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="registry-server" Oct 11 11:10:06.965481 master-1 kubenswrapper[4771]: I1011 11:10:06.965480 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="registry-server" Oct 11 11:10:06.965541 master-1 kubenswrapper[4771]: E1011 11:10:06.965498 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="extract-utilities" Oct 11 11:10:06.965541 master-1 kubenswrapper[4771]: I1011 11:10:06.965508 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="extract-utilities" Oct 11 11:10:06.965772 master-1 kubenswrapper[4771]: I1011 11:10:06.965739 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="975d8dbf-78fe-4498-ba3c-c77b71e3d13c" containerName="registry-server" Oct 11 11:10:06.967021 master-1 kubenswrapper[4771]: I1011 11:10:06.966976 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:06.970048 master-1 kubenswrapper[4771]: I1011 11:10:06.969985 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:10:06.986044 master-1 kubenswrapper[4771]: I1011 11:10:06.985946 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-networker-deploy-networkers-vkj22"] Oct 11 11:10:07.031662 master-1 kubenswrapper[4771]: I1011 11:10:07.031565 4771 generic.go:334] "Generic (PLEG): container finished" podID="7bd3df1d-629a-4bbd-9ab4-7731e8928b01" containerID="b6fe2bafc6abee3dc8fb39a96bf66118ada768aa6b65ffb7178ecf81e4862ef0" exitCode=0 Oct 11 11:10:07.031662 master-1 kubenswrapper[4771]: I1011 11:10:07.031650 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-dataplane-edpm-hsdgd" event={"ID":"7bd3df1d-629a-4bbd-9ab4-7731e8928b01","Type":"ContainerDied","Data":"b6fe2bafc6abee3dc8fb39a96bf66118ada768aa6b65ffb7178ecf81e4862ef0"} Oct 11 11:10:07.093268 master-1 kubenswrapper[4771]: I1011 11:10:07.093155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.093602 master-1 kubenswrapper[4771]: I1011 11:10:07.093321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.093602 
master-1 kubenswrapper[4771]: I1011 11:10:07.093428 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4dv\" (UniqueName: \"kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.195446 master-1 kubenswrapper[4771]: I1011 11:10:07.195351 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4dv\" (UniqueName: \"kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.195677 master-1 kubenswrapper[4771]: I1011 11:10:07.195542 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.195677 master-1 kubenswrapper[4771]: I1011 11:10:07.195641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.225136 master-1 kubenswrapper[4771]: I1011 11:10:07.224990 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.228775 master-1 kubenswrapper[4771]: I1011 11:10:07.228719 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.231242 master-1 kubenswrapper[4771]: I1011 11:10:07.231185 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4dv\" (UniqueName: \"kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv\") pod \"configure-network-networker-deploy-networkers-vkj22\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:07.358139 master-1 kubenswrapper[4771]: I1011 11:10:07.358049 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:10:08.028201 master-1 kubenswrapper[4771]: I1011 11:10:08.028125 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-networker-deploy-networkers-vkj22"] Oct 11 11:10:08.037377 master-1 kubenswrapper[4771]: W1011 11:10:08.037291 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23431f0d_ef6f_4620_a467_15eda9b19df4.slice/crio-85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad WatchSource:0}: Error finding container 85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad: Status 404 returned error can't find the container with id 85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad Oct 11 11:10:08.041827 master-1 kubenswrapper[4771]: I1011 11:10:08.041796 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:10:08.628286 master-1 kubenswrapper[4771]: I1011 11:10:08.628244 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:10:08.728618 master-1 kubenswrapper[4771]: I1011 11:10:08.728535 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key\") pod \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " Oct 11 11:10:08.728932 master-1 kubenswrapper[4771]: I1011 11:10:08.728679 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory\") pod \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " Oct 11 11:10:08.728932 master-1 kubenswrapper[4771]: I1011 11:10:08.728747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle\") pod \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " Oct 11 11:10:08.728932 master-1 kubenswrapper[4771]: I1011 11:10:08.728847 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2frc\" (UniqueName: \"kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc\") pod \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\" (UID: \"7bd3df1d-629a-4bbd-9ab4-7731e8928b01\") " Oct 11 11:10:08.733168 master-1 kubenswrapper[4771]: I1011 11:10:08.733107 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7bd3df1d-629a-4bbd-9ab4-7731e8928b01" (UID: "7bd3df1d-629a-4bbd-9ab4-7731e8928b01"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:08.733912 master-1 kubenswrapper[4771]: I1011 11:10:08.733843 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc" (OuterVolumeSpecName: "kube-api-access-v2frc") pod "7bd3df1d-629a-4bbd-9ab4-7731e8928b01" (UID: "7bd3df1d-629a-4bbd-9ab4-7731e8928b01"). InnerVolumeSpecName "kube-api-access-v2frc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:10:08.758210 master-1 kubenswrapper[4771]: I1011 11:10:08.758132 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory" (OuterVolumeSpecName: "inventory") pod "7bd3df1d-629a-4bbd-9ab4-7731e8928b01" (UID: "7bd3df1d-629a-4bbd-9ab4-7731e8928b01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:08.765923 master-1 kubenswrapper[4771]: I1011 11:10:08.765829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7bd3df1d-629a-4bbd-9ab4-7731e8928b01" (UID: "7bd3df1d-629a-4bbd-9ab4-7731e8928b01"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:08.832859 master-1 kubenswrapper[4771]: I1011 11:10:08.832775 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:10:08.832859 master-1 kubenswrapper[4771]: I1011 11:10:08.832865 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:10:08.833155 master-1 kubenswrapper[4771]: I1011 11:10:08.832952 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-bootstrap-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:10:08.833155 master-1 kubenswrapper[4771]: I1011 11:10:08.832973 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2frc\" (UniqueName: \"kubernetes.io/projected/7bd3df1d-629a-4bbd-9ab4-7731e8928b01-kube-api-access-v2frc\") on node \"master-1\" DevicePath \"\"" Oct 11 11:10:09.053956 master-1 kubenswrapper[4771]: I1011 11:10:09.053856 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-networker-deploy-networkers-vkj22" event={"ID":"23431f0d-ef6f-4620-a467-15eda9b19df4","Type":"ContainerStarted","Data":"fe2f1263dad86bbee33eaad9ff872067091a41fe3185790ad6a3291bbb8b8199"} Oct 11 11:10:09.053956 master-1 kubenswrapper[4771]: I1011 11:10:09.053943 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-networker-deploy-networkers-vkj22" event={"ID":"23431f0d-ef6f-4620-a467-15eda9b19df4","Type":"ContainerStarted","Data":"85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad"} Oct 11 11:10:09.056170 master-1 kubenswrapper[4771]: I1011 11:10:09.056096 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-dataplane-edpm-hsdgd" event={"ID":"7bd3df1d-629a-4bbd-9ab4-7731e8928b01","Type":"ContainerDied","Data":"a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6"} Oct 11 11:10:09.056170 master-1 kubenswrapper[4771]: I1011 11:10:09.056168 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29718cbf38e7f25af3d27f643844ad043c8e1dc7c5a8dbda4e222f2e172deb6" Oct 11 11:10:09.056442 master-1 kubenswrapper[4771]: I1011 11:10:09.056212 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-dataplane-edpm-hsdgd" Oct 11 11:10:09.164257 master-1 kubenswrapper[4771]: I1011 11:10:09.164077 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-networker-deploy-networkers-vkj22" podStartSLOduration=2.727833389 podStartE2EDuration="3.164051572s" podCreationTimestamp="2025-10-11 11:10:06 +0000 UTC" firstStartedPulling="2025-10-11 11:10:08.041725674 +0000 UTC m=+2640.015952115" lastFinishedPulling="2025-10-11 11:10:08.477943817 +0000 UTC m=+2640.452170298" observedRunningTime="2025-10-11 11:10:09.15942073 +0000 UTC m=+2641.133647261" watchObservedRunningTime="2025-10-11 11:10:09.164051572 +0000 UTC m=+2641.138278043" Oct 11 11:10:09.212546 master-1 kubenswrapper[4771]: I1011 11:10:09.212463 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-dataplane-edpm-8f9hl"] Oct 11 11:10:09.212982 master-1 kubenswrapper[4771]: E1011 11:10:09.212943 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd3df1d-629a-4bbd-9ab4-7731e8928b01" containerName="bootstrap-dataplane-edpm" Oct 11 11:10:09.212982 master-1 kubenswrapper[4771]: I1011 11:10:09.212966 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd3df1d-629a-4bbd-9ab4-7731e8928b01" containerName="bootstrap-dataplane-edpm" Oct 11 11:10:09.213239 master-1 kubenswrapper[4771]: 
I1011 11:10:09.213187 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd3df1d-629a-4bbd-9ab4-7731e8928b01" containerName="bootstrap-dataplane-edpm" Oct 11 11:10:09.214276 master-1 kubenswrapper[4771]: I1011 11:10:09.214236 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.218336 master-1 kubenswrapper[4771]: I1011 11:10:09.218264 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:10:09.229109 master-1 kubenswrapper[4771]: I1011 11:10:09.229017 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-dataplane-edpm-8f9hl"] Oct 11 11:10:09.346183 master-1 kubenswrapper[4771]: I1011 11:10:09.345982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.346183 master-1 kubenswrapper[4771]: I1011 11:10:09.346072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.346676 master-1 kubenswrapper[4771]: I1011 11:10:09.346340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdxh\" (UniqueName: \"kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " 
pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.450551 master-1 kubenswrapper[4771]: I1011 11:10:09.450486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.450903 master-1 kubenswrapper[4771]: I1011 11:10:09.450631 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdxh\" (UniqueName: \"kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.450903 master-1 kubenswrapper[4771]: I1011 11:10:09.450841 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.455766 master-1 kubenswrapper[4771]: I1011 11:10:09.455698 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.456777 master-1 kubenswrapper[4771]: I1011 11:10:09.456714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: 
\"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.472880 master-1 kubenswrapper[4771]: I1011 11:10:09.472816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdxh\" (UniqueName: \"kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh\") pod \"configure-network-dataplane-edpm-8f9hl\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:09.603408 master-1 kubenswrapper[4771]: I1011 11:10:09.603179 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:10:10.229765 master-1 kubenswrapper[4771]: I1011 11:10:10.225951 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-dataplane-edpm-8f9hl"] Oct 11 11:10:11.082406 master-1 kubenswrapper[4771]: I1011 11:10:11.082230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-dataplane-edpm-8f9hl" event={"ID":"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5","Type":"ContainerStarted","Data":"2cb0bad499f000cfdcc63010f00970a3004b6dc3b5dc657aef1ca64f72cd76cb"} Oct 11 11:10:12.097778 master-1 kubenswrapper[4771]: I1011 11:10:12.097650 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-dataplane-edpm-8f9hl" event={"ID":"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5","Type":"ContainerStarted","Data":"747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5"} Oct 11 11:10:12.227118 master-1 kubenswrapper[4771]: I1011 11:10:12.226996 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-dataplane-edpm-8f9hl" podStartSLOduration=2.580016283 podStartE2EDuration="3.226961586s" podCreationTimestamp="2025-10-11 11:10:09 +0000 UTC" firstStartedPulling="2025-10-11 11:10:10.228578077 +0000 UTC m=+2642.202804508" 
lastFinishedPulling="2025-10-11 11:10:10.87552337 +0000 UTC m=+2642.849749811" observedRunningTime="2025-10-11 11:10:12.21767496 +0000 UTC m=+2644.191901451" watchObservedRunningTime="2025-10-11 11:10:12.226961586 +0000 UTC m=+2644.201188037" Oct 11 11:11:18.460585 master-1 kubenswrapper[4771]: I1011 11:11:18.460503 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:18.466449 master-1 kubenswrapper[4771]: I1011 11:11:18.462892 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.526169 master-1 kubenswrapper[4771]: I1011 11:11:18.526066 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:18.551917 master-1 kubenswrapper[4771]: I1011 11:11:18.551844 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.552167 master-1 kubenswrapper[4771]: I1011 11:11:18.552036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92h48\" (UniqueName: \"kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.552293 master-1 kubenswrapper[4771]: I1011 11:11:18.552259 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities\") pod \"redhat-marketplace-sgbq8\" (UID: 
\"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.654611 master-1 kubenswrapper[4771]: I1011 11:11:18.654480 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.655532 master-1 kubenswrapper[4771]: I1011 11:11:18.655486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92h48\" (UniqueName: \"kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.655692 master-1 kubenswrapper[4771]: I1011 11:11:18.655649 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.656148 master-1 kubenswrapper[4771]: I1011 11:11:18.656105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.656569 master-1 kubenswrapper[4771]: I1011 11:11:18.656526 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content\") pod 
\"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.679680 master-1 kubenswrapper[4771]: I1011 11:11:18.679608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92h48\" (UniqueName: \"kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48\") pod \"redhat-marketplace-sgbq8\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:18.847606 master-1 kubenswrapper[4771]: I1011 11:11:18.847513 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:19.304993 master-1 kubenswrapper[4771]: W1011 11:11:19.304901 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2314e8_dacc_4d65_b836_58b7453a60fa.slice/crio-53a505e30cbb5b3189f3cc84cceeaa763702420367f06fbafef2bbdae9c2391e WatchSource:0}: Error finding container 53a505e30cbb5b3189f3cc84cceeaa763702420367f06fbafef2bbdae9c2391e: Status 404 returned error can't find the container with id 53a505e30cbb5b3189f3cc84cceeaa763702420367f06fbafef2bbdae9c2391e Oct 11 11:11:19.305349 master-1 kubenswrapper[4771]: I1011 11:11:19.305251 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:19.810975 master-1 kubenswrapper[4771]: I1011 11:11:19.810849 4771 generic.go:334] "Generic (PLEG): container finished" podID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerID="e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369" exitCode=0 Oct 11 11:11:19.810975 master-1 kubenswrapper[4771]: I1011 11:11:19.810947 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" 
event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerDied","Data":"e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369"} Oct 11 11:11:19.812102 master-1 kubenswrapper[4771]: I1011 11:11:19.811002 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerStarted","Data":"53a505e30cbb5b3189f3cc84cceeaa763702420367f06fbafef2bbdae9c2391e"} Oct 11 11:11:21.835594 master-1 kubenswrapper[4771]: I1011 11:11:21.835504 4771 generic.go:334] "Generic (PLEG): container finished" podID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerID="f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d" exitCode=0 Oct 11 11:11:21.836505 master-1 kubenswrapper[4771]: I1011 11:11:21.835584 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerDied","Data":"f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d"} Oct 11 11:11:22.852305 master-1 kubenswrapper[4771]: I1011 11:11:22.852210 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerStarted","Data":"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a"} Oct 11 11:11:22.894995 master-1 kubenswrapper[4771]: I1011 11:11:22.894875 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sgbq8" podStartSLOduration=2.459694403 podStartE2EDuration="4.894848741s" podCreationTimestamp="2025-10-11 11:11:18 +0000 UTC" firstStartedPulling="2025-10-11 11:11:19.813475177 +0000 UTC m=+2711.787701618" lastFinishedPulling="2025-10-11 11:11:22.248629475 +0000 UTC m=+2714.222855956" observedRunningTime="2025-10-11 11:11:22.881385495 +0000 UTC m=+2714.855612026" 
watchObservedRunningTime="2025-10-11 11:11:22.894848741 +0000 UTC m=+2714.869075212" Oct 11 11:11:28.848761 master-1 kubenswrapper[4771]: I1011 11:11:28.848688 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:28.848761 master-1 kubenswrapper[4771]: I1011 11:11:28.848732 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:28.900927 master-1 kubenswrapper[4771]: I1011 11:11:28.900880 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:28.980875 master-1 kubenswrapper[4771]: I1011 11:11:28.980794 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:29.159503 master-1 kubenswrapper[4771]: I1011 11:11:29.159091 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:29.248382 master-1 kubenswrapper[4771]: E1011 11:11:29.248268 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37e2544_0a7f_44ec_9e4a_f2736c93b9f5.slice/crio-747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37e2544_0a7f_44ec_9e4a_f2736c93b9f5.slice/crio-conmon-747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5.scope\": RecentStats: unable to find data in memory cache]" Oct 11 11:11:29.248784 master-1 kubenswrapper[4771]: E1011 11:11:29.248562 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37e2544_0a7f_44ec_9e4a_f2736c93b9f5.slice/crio-conmon-747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5.scope\": RecentStats: unable to find data in memory cache]" Oct 11 11:11:29.925169 master-1 kubenswrapper[4771]: I1011 11:11:29.925075 4771 generic.go:334] "Generic (PLEG): container finished" podID="b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" containerID="747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5" exitCode=0 Oct 11 11:11:29.926546 master-1 kubenswrapper[4771]: I1011 11:11:29.926344 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-dataplane-edpm-8f9hl" event={"ID":"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5","Type":"ContainerDied","Data":"747d7c436af7449ca66e16a5f43141089dad6007d37b198bae14f81f9e107bb5"} Oct 11 11:11:30.934610 master-1 kubenswrapper[4771]: I1011 11:11:30.934502 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sgbq8" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="registry-server" containerID="cri-o://611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a" gracePeriod=2 Oct 11 11:11:31.615188 master-1 kubenswrapper[4771]: I1011 11:11:31.615154 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:11:31.716715 master-1 kubenswrapper[4771]: I1011 11:11:31.716660 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:31.776466 master-1 kubenswrapper[4771]: I1011 11:11:31.776405 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key\") pod \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " Oct 11 11:11:31.776754 master-1 kubenswrapper[4771]: I1011 11:11:31.776570 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory\") pod \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " Oct 11 11:11:31.776754 master-1 kubenswrapper[4771]: I1011 11:11:31.776670 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mdxh\" (UniqueName: \"kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh\") pod \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\" (UID: \"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5\") " Oct 11 11:11:31.782349 master-1 kubenswrapper[4771]: I1011 11:11:31.782230 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh" (OuterVolumeSpecName: "kube-api-access-9mdxh") pod "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" (UID: "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5"). InnerVolumeSpecName "kube-api-access-9mdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:31.801768 master-1 kubenswrapper[4771]: I1011 11:11:31.801712 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" (UID: "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:31.818741 master-1 kubenswrapper[4771]: I1011 11:11:31.818584 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory" (OuterVolumeSpecName: "inventory") pod "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" (UID: "b37e2544-0a7f-44ec-9e4a-f2736c93b9f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:31.878105 master-1 kubenswrapper[4771]: I1011 11:11:31.878049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content\") pod \"bc2314e8-dacc-4d65-b836-58b7453a60fa\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " Oct 11 11:11:31.878505 master-1 kubenswrapper[4771]: I1011 11:11:31.878285 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities\") pod \"bc2314e8-dacc-4d65-b836-58b7453a60fa\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " Oct 11 11:11:31.878505 master-1 kubenswrapper[4771]: I1011 11:11:31.878390 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92h48\" (UniqueName: \"kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48\") pod \"bc2314e8-dacc-4d65-b836-58b7453a60fa\" (UID: \"bc2314e8-dacc-4d65-b836-58b7453a60fa\") " Oct 11 11:11:31.879181 master-1 kubenswrapper[4771]: I1011 11:11:31.878803 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:31.879181 master-1 kubenswrapper[4771]: I1011 11:11:31.878821 4771 reconciler_common.go:293] 
"Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:31.879181 master-1 kubenswrapper[4771]: I1011 11:11:31.878831 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mdxh\" (UniqueName: \"kubernetes.io/projected/b37e2544-0a7f-44ec-9e4a-f2736c93b9f5-kube-api-access-9mdxh\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:31.879944 master-1 kubenswrapper[4771]: I1011 11:11:31.879871 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities" (OuterVolumeSpecName: "utilities") pod "bc2314e8-dacc-4d65-b836-58b7453a60fa" (UID: "bc2314e8-dacc-4d65-b836-58b7453a60fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:11:31.884659 master-1 kubenswrapper[4771]: I1011 11:11:31.883586 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48" (OuterVolumeSpecName: "kube-api-access-92h48") pod "bc2314e8-dacc-4d65-b836-58b7453a60fa" (UID: "bc2314e8-dacc-4d65-b836-58b7453a60fa"). InnerVolumeSpecName "kube-api-access-92h48". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:31.916292 master-1 kubenswrapper[4771]: I1011 11:11:31.916183 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc2314e8-dacc-4d65-b836-58b7453a60fa" (UID: "bc2314e8-dacc-4d65-b836-58b7453a60fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:11:31.944916 master-1 kubenswrapper[4771]: I1011 11:11:31.944389 4771 generic.go:334] "Generic (PLEG): container finished" podID="23431f0d-ef6f-4620-a467-15eda9b19df4" containerID="fe2f1263dad86bbee33eaad9ff872067091a41fe3185790ad6a3291bbb8b8199" exitCode=0 Oct 11 11:11:31.944916 master-1 kubenswrapper[4771]: I1011 11:11:31.944431 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-networker-deploy-networkers-vkj22" event={"ID":"23431f0d-ef6f-4620-a467-15eda9b19df4","Type":"ContainerDied","Data":"fe2f1263dad86bbee33eaad9ff872067091a41fe3185790ad6a3291bbb8b8199"} Oct 11 11:11:31.946877 master-1 kubenswrapper[4771]: I1011 11:11:31.946835 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-dataplane-edpm-8f9hl" event={"ID":"b37e2544-0a7f-44ec-9e4a-f2736c93b9f5","Type":"ContainerDied","Data":"2cb0bad499f000cfdcc63010f00970a3004b6dc3b5dc657aef1ca64f72cd76cb"} Oct 11 11:11:31.946967 master-1 kubenswrapper[4771]: I1011 11:11:31.946880 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb0bad499f000cfdcc63010f00970a3004b6dc3b5dc657aef1ca64f72cd76cb" Oct 11 11:11:31.946967 master-1 kubenswrapper[4771]: I1011 11:11:31.946884 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-dataplane-edpm-8f9hl" Oct 11 11:11:31.950375 master-1 kubenswrapper[4771]: I1011 11:11:31.950278 4771 generic.go:334] "Generic (PLEG): container finished" podID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerID="611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a" exitCode=0 Oct 11 11:11:31.950375 master-1 kubenswrapper[4771]: I1011 11:11:31.950347 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sgbq8" Oct 11 11:11:31.950504 master-1 kubenswrapper[4771]: I1011 11:11:31.950384 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerDied","Data":"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a"} Oct 11 11:11:31.950567 master-1 kubenswrapper[4771]: I1011 11:11:31.950538 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sgbq8" event={"ID":"bc2314e8-dacc-4d65-b836-58b7453a60fa","Type":"ContainerDied","Data":"53a505e30cbb5b3189f3cc84cceeaa763702420367f06fbafef2bbdae9c2391e"} Oct 11 11:11:31.950623 master-1 kubenswrapper[4771]: I1011 11:11:31.950587 4771 scope.go:117] "RemoveContainer" containerID="611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a" Oct 11 11:11:31.978789 master-1 kubenswrapper[4771]: I1011 11:11:31.978616 4771 scope.go:117] "RemoveContainer" containerID="f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d" Oct 11 11:11:31.982048 master-1 kubenswrapper[4771]: I1011 11:11:31.981998 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92h48\" (UniqueName: \"kubernetes.io/projected/bc2314e8-dacc-4d65-b836-58b7453a60fa-kube-api-access-92h48\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:31.982105 master-1 kubenswrapper[4771]: I1011 11:11:31.982055 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:31.982105 master-1 kubenswrapper[4771]: I1011 11:11:31.982074 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2314e8-dacc-4d65-b836-58b7453a60fa-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 
11:11:32.029732 master-1 kubenswrapper[4771]: I1011 11:11:32.029651 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:32.032282 master-1 kubenswrapper[4771]: I1011 11:11:32.032236 4771 scope.go:117] "RemoveContainer" containerID="e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369" Oct 11 11:11:32.037875 master-1 kubenswrapper[4771]: I1011 11:11:32.037834 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sgbq8"] Oct 11 11:11:32.059085 master-1 kubenswrapper[4771]: I1011 11:11:32.059046 4771 scope.go:117] "RemoveContainer" containerID="611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a" Oct 11 11:11:32.059716 master-1 kubenswrapper[4771]: E1011 11:11:32.059672 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a\": container with ID starting with 611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a not found: ID does not exist" containerID="611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a" Oct 11 11:11:32.059885 master-1 kubenswrapper[4771]: I1011 11:11:32.059711 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a"} err="failed to get container status \"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a\": rpc error: code = NotFound desc = could not find container \"611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a\": container with ID starting with 611612c9745148a9971c30e141372de2a008a018cd29e07a94c85a40014e378a not found: ID does not exist" Oct 11 11:11:32.059885 master-1 kubenswrapper[4771]: I1011 11:11:32.059779 4771 scope.go:117] "RemoveContainer" 
containerID="f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d" Oct 11 11:11:32.060443 master-1 kubenswrapper[4771]: E1011 11:11:32.060401 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d\": container with ID starting with f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d not found: ID does not exist" containerID="f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d" Oct 11 11:11:32.060515 master-1 kubenswrapper[4771]: I1011 11:11:32.060442 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d"} err="failed to get container status \"f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d\": rpc error: code = NotFound desc = could not find container \"f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d\": container with ID starting with f5f44a51efa64fb057e1bc45ff72f7b6e206ce4ce789d4bdd15bfeada257f07d not found: ID does not exist" Oct 11 11:11:32.060515 master-1 kubenswrapper[4771]: I1011 11:11:32.060471 4771 scope.go:117] "RemoveContainer" containerID="e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369" Oct 11 11:11:32.060849 master-1 kubenswrapper[4771]: E1011 11:11:32.060824 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369\": container with ID starting with e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369 not found: ID does not exist" containerID="e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369" Oct 11 11:11:32.060922 master-1 kubenswrapper[4771]: I1011 11:11:32.060854 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369"} err="failed to get container status \"e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369\": rpc error: code = NotFound desc = could not find container \"e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369\": container with ID starting with e5ed3ec47283fd94f26c6b7d58982128685eab7226f768e7fba4c17ae866e369 not found: ID does not exist" Oct 11 11:11:32.106240 master-2 kubenswrapper[4776]: I1011 11:11:32.106171 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106580 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106598 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106610 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106619 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106646 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-utilities" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106656 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-utilities" Oct 11 11:11:32.107046 master-2 
kubenswrapper[4776]: E1011 11:11:32.106697 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-content" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106706 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-content" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106896 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106929 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107787 master-2 kubenswrapper[4776]: I1011 11:11:32.107750 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.110869 master-2 kubenswrapper[4776]: I1011 11:11:32.110280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:32.110869 master-2 kubenswrapper[4776]: I1011 11:11:32.110306 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:11:32.111022 master-2 kubenswrapper[4776]: I1011 11:11:32.110955 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:32.153477 master-2 kubenswrapper[4776]: I1011 11:11:32.121259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.265639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.266403 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.266805 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369219 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369387 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369414 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.372929 master-2 kubenswrapper[4776]: I1011 11:11:32.372878 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.381733 master-2 kubenswrapper[4776]: I1011 11:11:32.373974 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.390550 master-2 kubenswrapper[4776]: I1011 11:11:32.390490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.456182 master-1 kubenswrapper[4771]: I1011 11:11:32.455975 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" path="/var/lib/kubelet/pods/bc2314e8-dacc-4d65-b836-58b7453a60fa/volumes" Oct 11 11:11:32.465445 master-2 kubenswrapper[4776]: I1011 11:11:32.465368 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:33.001880 master-2 kubenswrapper[4776]: I1011 11:11:33.001839 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:33.003580 master-2 kubenswrapper[4776]: I1011 11:11:33.003541 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:11:33.559972 master-1 kubenswrapper[4771]: I1011 11:11:33.559830 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:11:33.620438 master-2 kubenswrapper[4776]: I1011 11:11:33.620377 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerStarted","Data":"1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b"} Oct 11 11:11:33.733195 master-1 kubenswrapper[4771]: I1011 11:11:33.733018 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory\") pod \"23431f0d-ef6f-4620-a467-15eda9b19df4\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " Oct 11 11:11:33.733608 master-1 kubenswrapper[4771]: I1011 11:11:33.733245 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key\") pod \"23431f0d-ef6f-4620-a467-15eda9b19df4\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " Oct 11 11:11:33.733677 master-1 kubenswrapper[4771]: I1011 11:11:33.733636 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4dv\" (UniqueName: \"kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv\") pod 
\"23431f0d-ef6f-4620-a467-15eda9b19df4\" (UID: \"23431f0d-ef6f-4620-a467-15eda9b19df4\") " Oct 11 11:11:33.743992 master-1 kubenswrapper[4771]: I1011 11:11:33.743896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv" (OuterVolumeSpecName: "kube-api-access-sc4dv") pod "23431f0d-ef6f-4620-a467-15eda9b19df4" (UID: "23431f0d-ef6f-4620-a467-15eda9b19df4"). InnerVolumeSpecName "kube-api-access-sc4dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:33.778089 master-1 kubenswrapper[4771]: I1011 11:11:33.777988 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory" (OuterVolumeSpecName: "inventory") pod "23431f0d-ef6f-4620-a467-15eda9b19df4" (UID: "23431f0d-ef6f-4620-a467-15eda9b19df4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:33.783281 master-1 kubenswrapper[4771]: I1011 11:11:33.783206 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23431f0d-ef6f-4620-a467-15eda9b19df4" (UID: "23431f0d-ef6f-4620-a467-15eda9b19df4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:33.836940 master-1 kubenswrapper[4771]: I1011 11:11:33.836867 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:33.836940 master-1 kubenswrapper[4771]: I1011 11:11:33.836940 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23431f0d-ef6f-4620-a467-15eda9b19df4-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:33.837079 master-1 kubenswrapper[4771]: I1011 11:11:33.836962 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4dv\" (UniqueName: \"kubernetes.io/projected/23431f0d-ef6f-4620-a467-15eda9b19df4-kube-api-access-sc4dv\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:33.977414 master-1 kubenswrapper[4771]: I1011 11:11:33.977190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-networker-deploy-networkers-vkj22" event={"ID":"23431f0d-ef6f-4620-a467-15eda9b19df4","Type":"ContainerDied","Data":"85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad"} Oct 11 11:11:33.977414 master-1 kubenswrapper[4771]: I1011 11:11:33.977254 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-networker-deploy-networkers-vkj22" Oct 11 11:11:33.977414 master-1 kubenswrapper[4771]: I1011 11:11:33.977268 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d24faa6a4078bf4e25441dba67f8499b9fbea0924137b61c555442523d5fad" Oct 11 11:11:34.287468 master-1 kubenswrapper[4771]: I1011 11:11:34.286594 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-networker-deploy-networkers-m7845"] Oct 11 11:11:34.287468 master-1 kubenswrapper[4771]: E1011 11:11:34.287066 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="extract-utilities" Oct 11 11:11:34.287468 master-1 kubenswrapper[4771]: I1011 11:11:34.287085 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="extract-utilities" Oct 11 11:11:34.287468 master-1 kubenswrapper[4771]: E1011 11:11:34.287105 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" containerName="configure-network-dataplane-edpm" Oct 11 11:11:34.287468 master-1 kubenswrapper[4771]: I1011 11:11:34.287471 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" containerName="configure-network-dataplane-edpm" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: E1011 11:11:34.287497 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23431f0d-ef6f-4620-a467-15eda9b19df4" containerName="configure-network-networker-deploy-networkers" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: I1011 11:11:34.287509 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="23431f0d-ef6f-4620-a467-15eda9b19df4" containerName="configure-network-networker-deploy-networkers" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: E1011 11:11:34.287522 4771 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="registry-server" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: I1011 11:11:34.287529 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="registry-server" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: E1011 11:11:34.287552 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="extract-content" Oct 11 11:11:34.287896 master-1 kubenswrapper[4771]: I1011 11:11:34.287559 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="extract-content" Oct 11 11:11:34.288168 master-1 kubenswrapper[4771]: I1011 11:11:34.287960 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2314e8-dacc-4d65-b836-58b7453a60fa" containerName="registry-server" Oct 11 11:11:34.288168 master-1 kubenswrapper[4771]: I1011 11:11:34.287999 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="23431f0d-ef6f-4620-a467-15eda9b19df4" containerName="configure-network-networker-deploy-networkers" Oct 11 11:11:34.288168 master-1 kubenswrapper[4771]: I1011 11:11:34.288019 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37e2544-0a7f-44ec-9e4a-f2736c93b9f5" containerName="configure-network-dataplane-edpm" Oct 11 11:11:34.291044 master-1 kubenswrapper[4771]: I1011 11:11:34.290983 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.294979 master-1 kubenswrapper[4771]: I1011 11:11:34.294931 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:11:34.295813 master-1 kubenswrapper[4771]: I1011 11:11:34.295605 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:34.296347 master-1 kubenswrapper[4771]: I1011 11:11:34.296250 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:34.324422 master-1 kubenswrapper[4771]: I1011 11:11:34.315091 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-networker-deploy-networkers-m7845"] Oct 11 11:11:34.349684 master-1 kubenswrapper[4771]: I1011 11:11:34.349593 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.349920 master-1 kubenswrapper[4771]: I1011 11:11:34.349699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7qp\" (UniqueName: \"kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.350125 master-1 kubenswrapper[4771]: I1011 11:11:34.350062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.462962 master-1 kubenswrapper[4771]: I1011 11:11:34.462862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.462962 master-1 kubenswrapper[4771]: I1011 11:11:34.462966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.463272 master-1 kubenswrapper[4771]: I1011 11:11:34.463022 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7qp\" (UniqueName: \"kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.468971 master-1 kubenswrapper[4771]: I1011 11:11:34.468904 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 
11:11:34.469993 master-1 kubenswrapper[4771]: I1011 11:11:34.469948 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.488130 master-1 kubenswrapper[4771]: I1011 11:11:34.488047 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7qp\" (UniqueName: \"kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp\") pod \"validate-network-networker-deploy-networkers-m7845\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.626634 master-1 kubenswrapper[4771]: I1011 11:11:34.626533 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:34.628889 master-2 kubenswrapper[4776]: I1011 11:11:34.628802 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerStarted","Data":"b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712"} Oct 11 11:11:34.655092 master-2 kubenswrapper[4776]: I1011 11:11:34.654987 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-dataplane-edpm-qw76s" podStartSLOduration=2.166466535 podStartE2EDuration="2.654969237s" podCreationTimestamp="2025-10-11 11:11:32 +0000 UTC" firstStartedPulling="2025-10-11 11:11:33.003504037 +0000 UTC m=+2727.787930746" lastFinishedPulling="2025-10-11 11:11:33.492006739 +0000 UTC m=+2728.276433448" observedRunningTime="2025-10-11 11:11:34.650728413 +0000 UTC m=+2729.435155112" 
watchObservedRunningTime="2025-10-11 11:11:34.654969237 +0000 UTC m=+2729.439395946" Oct 11 11:11:35.263603 master-1 kubenswrapper[4771]: I1011 11:11:35.263517 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-networker-deploy-networkers-m7845"] Oct 11 11:11:35.272454 master-1 kubenswrapper[4771]: W1011 11:11:35.272334 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32524a1a_75d4_47d4_81e1_ef8562425eb3.slice/crio-8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342 WatchSource:0}: Error finding container 8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342: Status 404 returned error can't find the container with id 8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342 Oct 11 11:11:36.010450 master-1 kubenswrapper[4771]: I1011 11:11:36.010248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-networker-deploy-networkers-m7845" event={"ID":"32524a1a-75d4-47d4-81e1-ef8562425eb3","Type":"ContainerStarted","Data":"29a03a08ba4a91706056a19084966a858f7a063a71bad2d0e4a86c00ea696baa"} Oct 11 11:11:36.011330 master-1 kubenswrapper[4771]: I1011 11:11:36.011293 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-networker-deploy-networkers-m7845" event={"ID":"32524a1a-75d4-47d4-81e1-ef8562425eb3","Type":"ContainerStarted","Data":"8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342"} Oct 11 11:11:36.035820 master-1 kubenswrapper[4771]: I1011 11:11:36.035687 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-networker-deploy-networkers-m7845" podStartSLOduration=1.609112871 podStartE2EDuration="2.035654567s" podCreationTimestamp="2025-10-11 11:11:34 +0000 UTC" firstStartedPulling="2025-10-11 11:11:35.277304189 +0000 UTC m=+2727.251530640" lastFinishedPulling="2025-10-11 11:11:35.703845855 
+0000 UTC m=+2727.678072336" observedRunningTime="2025-10-11 11:11:36.03330206 +0000 UTC m=+2728.007528541" watchObservedRunningTime="2025-10-11 11:11:36.035654567 +0000 UTC m=+2728.009881048" Oct 11 11:11:38.663411 master-2 kubenswrapper[4776]: I1011 11:11:38.663340 4776 generic.go:334] "Generic (PLEG): container finished" podID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerID="b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712" exitCode=0 Oct 11 11:11:38.663411 master-2 kubenswrapper[4776]: I1011 11:11:38.663396 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerDied","Data":"b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712"} Oct 11 11:11:40.244750 master-2 kubenswrapper[4776]: I1011 11:11:40.244633 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:40.332225 master-2 kubenswrapper[4776]: I1011 11:11:40.332162 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.332486 master-2 kubenswrapper[4776]: I1011 11:11:40.332242 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.332486 master-2 kubenswrapper[4776]: I1011 11:11:40.332264 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod 
\"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.335211 master-2 kubenswrapper[4776]: I1011 11:11:40.335161 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9" (OuterVolumeSpecName: "kube-api-access-wwjj9") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "kube-api-access-wwjj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:40.354166 master-2 kubenswrapper[4776]: I1011 11:11:40.354100 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:40.361266 master-2 kubenswrapper[4776]: I1011 11:11:40.361202 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory" (OuterVolumeSpecName: "inventory") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.434984 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.435030 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.435040 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.688162 master-2 kubenswrapper[4776]: I1011 11:11:40.688020 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerDied","Data":"1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b"} Oct 11 11:11:40.688162 master-2 kubenswrapper[4776]: I1011 11:11:40.688067 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:40.688453 master-2 kubenswrapper[4776]: I1011 11:11:40.688068 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b" Oct 11 11:11:40.813192 master-2 kubenswrapper[4776]: I1011 11:11:40.813130 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:40.813580 master-2 kubenswrapper[4776]: E1011 11:11:40.813526 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.813580 master-2 kubenswrapper[4776]: I1011 11:11:40.813556 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.814029 master-2 kubenswrapper[4776]: I1011 11:11:40.813970 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.814870 master-2 kubenswrapper[4776]: I1011 11:11:40.814846 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.818241 master-2 kubenswrapper[4776]: I1011 11:11:40.818057 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:40.818241 master-2 kubenswrapper[4776]: I1011 11:11:40.818165 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:11:40.818370 master-2 kubenswrapper[4776]: I1011 11:11:40.818354 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:40.829346 master-2 kubenswrapper[4776]: I1011 11:11:40.829286 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945007 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945076 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945149 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: 
\"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048505 master-2 kubenswrapper[4776]: I1011 11:11:41.048439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048505 master-2 kubenswrapper[4776]: I1011 11:11:41.048503 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048821 master-2 kubenswrapper[4776]: I1011 11:11:41.048548 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.061395 master-2 kubenswrapper[4776]: I1011 11:11:41.061330 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.061764 master-2 kubenswrapper[4776]: I1011 11:11:41.061726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: 
\"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.078019 master-2 kubenswrapper[4776]: I1011 11:11:41.077964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.176968 master-2 kubenswrapper[4776]: I1011 11:11:41.176891 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.724308 master-2 kubenswrapper[4776]: I1011 11:11:41.724244 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:41.729093 master-2 kubenswrapper[4776]: W1011 11:11:41.729050 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a17fc9_19bc_4b4d_8fe0_640b5efbc992.slice/crio-4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e WatchSource:0}: Error finding container 4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e: Status 404 returned error can't find the container with id 4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e Oct 11 11:11:42.083802 master-1 kubenswrapper[4771]: I1011 11:11:42.083754 4771 generic.go:334] "Generic (PLEG): container finished" podID="32524a1a-75d4-47d4-81e1-ef8562425eb3" containerID="29a03a08ba4a91706056a19084966a858f7a063a71bad2d0e4a86c00ea696baa" exitCode=0 Oct 11 11:11:42.084415 master-1 kubenswrapper[4771]: I1011 11:11:42.083888 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-networker-deploy-networkers-m7845" 
event={"ID":"32524a1a-75d4-47d4-81e1-ef8562425eb3","Type":"ContainerDied","Data":"29a03a08ba4a91706056a19084966a858f7a063a71bad2d0e4a86c00ea696baa"} Oct 11 11:11:42.719577 master-2 kubenswrapper[4776]: I1011 11:11:42.715895 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerStarted","Data":"c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417"} Oct 11 11:11:42.719577 master-2 kubenswrapper[4776]: I1011 11:11:42.715944 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerStarted","Data":"4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e"} Oct 11 11:11:42.756437 master-2 kubenswrapper[4776]: I1011 11:11:42.756310 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-dataplane-edpm-hmm68" podStartSLOduration=2.371756026 podStartE2EDuration="2.756291821s" podCreationTimestamp="2025-10-11 11:11:40 +0000 UTC" firstStartedPulling="2025-10-11 11:11:41.731845334 +0000 UTC m=+2736.516272043" lastFinishedPulling="2025-10-11 11:11:42.116381139 +0000 UTC m=+2736.900807838" observedRunningTime="2025-10-11 11:11:42.748728788 +0000 UTC m=+2737.533155497" watchObservedRunningTime="2025-10-11 11:11:42.756291821 +0000 UTC m=+2737.540718530" Oct 11 11:11:43.749274 master-1 kubenswrapper[4771]: I1011 11:11:43.749200 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:43.799408 master-1 kubenswrapper[4771]: I1011 11:11:43.799330 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory\") pod \"32524a1a-75d4-47d4-81e1-ef8562425eb3\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " Oct 11 11:11:43.799979 master-1 kubenswrapper[4771]: I1011 11:11:43.799449 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7qp\" (UniqueName: \"kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp\") pod \"32524a1a-75d4-47d4-81e1-ef8562425eb3\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " Oct 11 11:11:43.799979 master-1 kubenswrapper[4771]: I1011 11:11:43.799610 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key\") pod \"32524a1a-75d4-47d4-81e1-ef8562425eb3\" (UID: \"32524a1a-75d4-47d4-81e1-ef8562425eb3\") " Oct 11 11:11:43.808039 master-1 kubenswrapper[4771]: I1011 11:11:43.807956 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp" (OuterVolumeSpecName: "kube-api-access-2w7qp") pod "32524a1a-75d4-47d4-81e1-ef8562425eb3" (UID: "32524a1a-75d4-47d4-81e1-ef8562425eb3"). InnerVolumeSpecName "kube-api-access-2w7qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:43.827553 master-1 kubenswrapper[4771]: I1011 11:11:43.827469 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32524a1a-75d4-47d4-81e1-ef8562425eb3" (UID: "32524a1a-75d4-47d4-81e1-ef8562425eb3"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:43.868167 master-1 kubenswrapper[4771]: I1011 11:11:43.868106 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory" (OuterVolumeSpecName: "inventory") pod "32524a1a-75d4-47d4-81e1-ef8562425eb3" (UID: "32524a1a-75d4-47d4-81e1-ef8562425eb3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:43.901181 master-1 kubenswrapper[4771]: I1011 11:11:43.901117 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:43.901181 master-1 kubenswrapper[4771]: I1011 11:11:43.901170 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7qp\" (UniqueName: \"kubernetes.io/projected/32524a1a-75d4-47d4-81e1-ef8562425eb3-kube-api-access-2w7qp\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:43.901412 master-1 kubenswrapper[4771]: I1011 11:11:43.901189 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32524a1a-75d4-47d4-81e1-ef8562425eb3-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:11:44.107284 master-1 kubenswrapper[4771]: I1011 11:11:44.107207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-networker-deploy-networkers-m7845" event={"ID":"32524a1a-75d4-47d4-81e1-ef8562425eb3","Type":"ContainerDied","Data":"8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342"} Oct 11 11:11:44.107559 master-1 kubenswrapper[4771]: I1011 11:11:44.107305 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8102d15a2bc375478cb70b78596a84085e5e1f3d04840dcc27705e78d0412342" Oct 11 11:11:44.107559 master-1 kubenswrapper[4771]: I1011 
11:11:44.107313 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-networker-deploy-networkers-m7845" Oct 11 11:11:44.227790 master-1 kubenswrapper[4771]: I1011 11:11:44.227704 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-networker-deploy-networkers-6txzc"] Oct 11 11:11:44.228349 master-1 kubenswrapper[4771]: E1011 11:11:44.228303 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32524a1a-75d4-47d4-81e1-ef8562425eb3" containerName="validate-network-networker-deploy-networkers" Oct 11 11:11:44.228349 master-1 kubenswrapper[4771]: I1011 11:11:44.228342 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="32524a1a-75d4-47d4-81e1-ef8562425eb3" containerName="validate-network-networker-deploy-networkers" Oct 11 11:11:44.228718 master-1 kubenswrapper[4771]: I1011 11:11:44.228676 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="32524a1a-75d4-47d4-81e1-ef8562425eb3" containerName="validate-network-networker-deploy-networkers" Oct 11 11:11:44.232791 master-1 kubenswrapper[4771]: I1011 11:11:44.231062 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.235236 master-1 kubenswrapper[4771]: I1011 11:11:44.235128 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:44.235490 master-1 kubenswrapper[4771]: I1011 11:11:44.235441 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:44.235945 master-1 kubenswrapper[4771]: I1011 11:11:44.235902 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:11:44.289022 master-1 kubenswrapper[4771]: I1011 11:11:44.288935 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-networker-deploy-networkers-6txzc"] Oct 11 11:11:44.311852 master-1 kubenswrapper[4771]: I1011 11:11:44.311777 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.312013 master-1 kubenswrapper[4771]: I1011 11:11:44.311863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrrm\" (UniqueName: \"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.312440 master-1 kubenswrapper[4771]: I1011 11:11:44.312400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.414988 master-1 kubenswrapper[4771]: I1011 11:11:44.414749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.414988 master-1 kubenswrapper[4771]: I1011 11:11:44.414852 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.414988 master-1 kubenswrapper[4771]: I1011 11:11:44.414884 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flrrm\" (UniqueName: \"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.419165 master-1 kubenswrapper[4771]: I1011 11:11:44.419094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.419290 master-1 kubenswrapper[4771]: I1011 
11:11:44.419230 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.439836 master-1 kubenswrapper[4771]: I1011 11:11:44.439740 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flrrm\" (UniqueName: \"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm\") pod \"install-os-networker-deploy-networkers-6txzc\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:44.556951 master-1 kubenswrapper[4771]: I1011 11:11:44.556712 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:11:45.186173 master-1 kubenswrapper[4771]: I1011 11:11:45.186108 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-networker-deploy-networkers-6txzc"] Oct 11 11:11:45.192271 master-1 kubenswrapper[4771]: W1011 11:11:45.192076 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d0c33f_c29a_4878_a73d_4fdc54ee4754.slice/crio-358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0 WatchSource:0}: Error finding container 358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0: Status 404 returned error can't find the container with id 358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0 Oct 11 11:11:46.131252 master-1 kubenswrapper[4771]: I1011 11:11:46.131126 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-networker-deploy-networkers-6txzc" 
event={"ID":"24d0c33f-c29a-4878-a73d-4fdc54ee4754","Type":"ContainerStarted","Data":"8dfae439c8c304c6fb6f9f04b4870358e01be13642192b2cd7e8e013d0312a0b"} Oct 11 11:11:46.131673 master-1 kubenswrapper[4771]: I1011 11:11:46.131494 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-networker-deploy-networkers-6txzc" event={"ID":"24d0c33f-c29a-4878-a73d-4fdc54ee4754","Type":"ContainerStarted","Data":"358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0"} Oct 11 11:11:46.158257 master-1 kubenswrapper[4771]: I1011 11:11:46.158123 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-networker-deploy-networkers-6txzc" podStartSLOduration=1.6747316300000001 podStartE2EDuration="2.158081202s" podCreationTimestamp="2025-10-11 11:11:44 +0000 UTC" firstStartedPulling="2025-10-11 11:11:45.196444353 +0000 UTC m=+2737.170670804" lastFinishedPulling="2025-10-11 11:11:45.679793935 +0000 UTC m=+2737.654020376" observedRunningTime="2025-10-11 11:11:46.153172252 +0000 UTC m=+2738.127398753" watchObservedRunningTime="2025-10-11 11:11:46.158081202 +0000 UTC m=+2738.132307673" Oct 11 11:12:15.070312 master-1 kubenswrapper[4771]: I1011 11:12:15.069784 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-rbtpx"] Oct 11 11:12:15.084615 master-1 kubenswrapper[4771]: I1011 11:12:15.084547 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-rbtpx"] Oct 11 11:12:16.450262 master-1 kubenswrapper[4771]: I1011 11:12:16.450166 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041373ee-1533-4bc6-abd2-80d16bfa5f23" path="/var/lib/kubelet/pods/041373ee-1533-4bc6-abd2-80d16bfa5f23/volumes" Oct 11 11:12:24.872458 master-1 kubenswrapper[4771]: I1011 11:12:24.872342 4771 scope.go:117] "RemoveContainer" containerID="736c15aefe67b305735f91c4e8c2109ad242f954cfb3635af77a747443410e30" Oct 11 11:12:27.060538 master-1 
kubenswrapper[4771]: I1011 11:12:27.060421 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-e4bf-account-create-m2n6j"] Oct 11 11:12:27.074415 master-1 kubenswrapper[4771]: I1011 11:12:27.074311 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-e4bf-account-create-m2n6j"] Oct 11 11:12:28.461230 master-1 kubenswrapper[4771]: I1011 11:12:28.461140 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba52ba8-24c1-4b0c-83cb-6837e2353fa8" path="/var/lib/kubelet/pods/9ba52ba8-24c1-4b0c-83cb-6837e2353fa8/volumes" Oct 11 11:12:33.052240 master-1 kubenswrapper[4771]: I1011 11:12:33.052162 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-8rff9"] Oct 11 11:12:33.064936 master-1 kubenswrapper[4771]: I1011 11:12:33.064828 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-8rff9"] Oct 11 11:12:34.458674 master-1 kubenswrapper[4771]: I1011 11:12:34.458573 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91fc642-f994-42aa-9bb1-589b5bda7c22" path="/var/lib/kubelet/pods/f91fc642-f994-42aa-9bb1-589b5bda7c22/volumes" Oct 11 11:12:45.068298 master-1 kubenswrapper[4771]: I1011 11:12:45.068179 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-5f0c-account-create-9njpc"] Oct 11 11:12:45.074451 master-1 kubenswrapper[4771]: I1011 11:12:45.073412 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-5f0c-account-create-9njpc"] Oct 11 11:12:46.450474 master-1 kubenswrapper[4771]: I1011 11:12:46.450391 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb36428-2086-42ca-8ebf-9864a0917971" path="/var/lib/kubelet/pods/3eb36428-2086-42ca-8ebf-9864a0917971/volumes" Oct 11 11:12:57.372863 master-2 kubenswrapper[4776]: I1011 11:12:57.372775 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerID="c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417" exitCode=0 Oct 11 11:12:57.372863 master-2 kubenswrapper[4776]: I1011 11:12:57.372848 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerDied","Data":"c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417"} Oct 11 11:12:58.879440 master-2 kubenswrapper[4776]: I1011 11:12:58.879394 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.976489 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.976617 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.977781 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.983978 master-2 kubenswrapper[4776]: I1011 11:12:58.983933 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4" 
(OuterVolumeSpecName: "kube-api-access-jhwz4") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "kube-api-access-jhwz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:12:59.002685 master-2 kubenswrapper[4776]: I1011 11:12:59.002613 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory" (OuterVolumeSpecName: "inventory") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:12:59.007272 master-2 kubenswrapper[4776]: I1011 11:12:59.007219 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081453 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081505 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081519 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394421 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerDied","Data":"4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e"} Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394479 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e" Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394499 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:12:59.537402 master-1 kubenswrapper[4771]: I1011 11:12:59.537295 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-dataplane-edpm-t4d7h"] Oct 11 11:12:59.540025 master-1 kubenswrapper[4771]: I1011 11:12:59.539973 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.544274 master-1 kubenswrapper[4771]: I1011 11:12:59.544221 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:12:59.553207 master-1 kubenswrapper[4771]: I1011 11:12:59.553128 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-dataplane-edpm-t4d7h"] Oct 11 11:12:59.633688 master-1 kubenswrapper[4771]: I1011 11:12:59.633610 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.633938 master-1 kubenswrapper[4771]: I1011 11:12:59.633698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcl9\" (UniqueName: \"kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.633984 master-1 kubenswrapper[4771]: I1011 11:12:59.633919 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.735807 master-1 kubenswrapper[4771]: I1011 11:12:59.735732 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key\") pod 
\"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.735807 master-1 kubenswrapper[4771]: I1011 11:12:59.735818 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmcl9\" (UniqueName: \"kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.736268 master-1 kubenswrapper[4771]: I1011 11:12:59.735939 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.740426 master-1 kubenswrapper[4771]: I1011 11:12:59.740332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.742753 master-1 kubenswrapper[4771]: I1011 11:12:59.742641 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.761724 master-1 kubenswrapper[4771]: I1011 11:12:59.761652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmcl9\" (UniqueName: 
\"kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9\") pod \"configure-os-dataplane-edpm-t4d7h\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:12:59.868238 master-1 kubenswrapper[4771]: I1011 11:12:59.868133 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:13:00.484626 master-1 kubenswrapper[4771]: I1011 11:13:00.484100 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-dataplane-edpm-t4d7h"] Oct 11 11:13:00.996113 master-1 kubenswrapper[4771]: I1011 11:13:00.996038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-t4d7h" event={"ID":"89bef19a-554b-423c-b350-11b000bff73d","Type":"ContainerStarted","Data":"10b24d028221fc67acf0e5a32b8e9a6276196774b4fb61c0a673ef5c09860f57"} Oct 11 11:13:02.016771 master-1 kubenswrapper[4771]: I1011 11:13:02.016679 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-t4d7h" event={"ID":"89bef19a-554b-423c-b350-11b000bff73d","Type":"ContainerStarted","Data":"70de448794c96c4f984e2d3d72ad94feed3c008eba01f0c4522911ade0c71763"} Oct 11 11:13:02.044604 master-1 kubenswrapper[4771]: I1011 11:13:02.044471 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-dataplane-edpm-t4d7h" podStartSLOduration=2.561380873 podStartE2EDuration="3.044441006s" podCreationTimestamp="2025-10-11 11:12:59 +0000 UTC" firstStartedPulling="2025-10-11 11:13:00.486159198 +0000 UTC m=+2812.460385649" lastFinishedPulling="2025-10-11 11:13:00.969219351 +0000 UTC m=+2812.943445782" observedRunningTime="2025-10-11 11:13:02.043421447 +0000 UTC m=+2814.017647928" watchObservedRunningTime="2025-10-11 11:13:02.044441006 +0000 UTC m=+2814.018667487" Oct 11 11:13:09.083183 master-1 kubenswrapper[4771]: I1011 11:13:09.083106 4771 
generic.go:334] "Generic (PLEG): container finished" podID="24d0c33f-c29a-4878-a73d-4fdc54ee4754" containerID="8dfae439c8c304c6fb6f9f04b4870358e01be13642192b2cd7e8e013d0312a0b" exitCode=0 Oct 11 11:13:09.083183 master-1 kubenswrapper[4771]: I1011 11:13:09.083189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-networker-deploy-networkers-6txzc" event={"ID":"24d0c33f-c29a-4878-a73d-4fdc54ee4754","Type":"ContainerDied","Data":"8dfae439c8c304c6fb6f9f04b4870358e01be13642192b2cd7e8e013d0312a0b"} Oct 11 11:13:10.805789 master-1 kubenswrapper[4771]: I1011 11:13:10.805711 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:13:10.900587 master-1 kubenswrapper[4771]: I1011 11:13:10.900452 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flrrm\" (UniqueName: \"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm\") pod \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " Oct 11 11:13:10.900922 master-1 kubenswrapper[4771]: I1011 11:13:10.900634 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory\") pod \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " Oct 11 11:13:10.900922 master-1 kubenswrapper[4771]: I1011 11:13:10.900769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key\") pod \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\" (UID: \"24d0c33f-c29a-4878-a73d-4fdc54ee4754\") " Oct 11 11:13:10.906148 master-1 kubenswrapper[4771]: I1011 11:13:10.906036 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm" (OuterVolumeSpecName: "kube-api-access-flrrm") pod "24d0c33f-c29a-4878-a73d-4fdc54ee4754" (UID: "24d0c33f-c29a-4878-a73d-4fdc54ee4754"). InnerVolumeSpecName "kube-api-access-flrrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:13:10.939432 master-1 kubenswrapper[4771]: I1011 11:13:10.939322 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24d0c33f-c29a-4878-a73d-4fdc54ee4754" (UID: "24d0c33f-c29a-4878-a73d-4fdc54ee4754"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:13:10.950020 master-1 kubenswrapper[4771]: I1011 11:13:10.948642 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory" (OuterVolumeSpecName: "inventory") pod "24d0c33f-c29a-4878-a73d-4fdc54ee4754" (UID: "24d0c33f-c29a-4878-a73d-4fdc54ee4754"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:13:11.003972 master-1 kubenswrapper[4771]: I1011 11:13:11.003885 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flrrm\" (UniqueName: \"kubernetes.io/projected/24d0c33f-c29a-4878-a73d-4fdc54ee4754-kube-api-access-flrrm\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:11.003972 master-1 kubenswrapper[4771]: I1011 11:13:11.003941 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:11.003972 master-1 kubenswrapper[4771]: I1011 11:13:11.003952 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d0c33f-c29a-4878-a73d-4fdc54ee4754-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:11.109845 master-1 kubenswrapper[4771]: I1011 11:13:11.109749 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-networker-deploy-networkers-6txzc" event={"ID":"24d0c33f-c29a-4878-a73d-4fdc54ee4754","Type":"ContainerDied","Data":"358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0"} Oct 11 11:13:11.109845 master-1 kubenswrapper[4771]: I1011 11:13:11.109811 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358ed73fdc20984b748856c249dfddc8dcd94200b65b4cf2e53a249cbb6512a0" Oct 11 11:13:11.110275 master-1 kubenswrapper[4771]: I1011 11:13:11.109906 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-networker-deploy-networkers-6txzc" Oct 11 11:13:11.231785 master-2 kubenswrapper[4776]: I1011 11:13:11.231683 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: E1011 11:13:11.232064 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: I1011 11:13:11.232081 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: I1011 11:13:11.232285 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232991 master-2 kubenswrapper[4776]: I1011 11:13:11.232971 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.235848 master-2 kubenswrapper[4776]: I1011 11:13:11.235808 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:13:11.236046 master-2 kubenswrapper[4776]: I1011 11:13:11.236018 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:13:11.236186 master-2 kubenswrapper[4776]: I1011 11:13:11.236164 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:13:11.250131 master-2 kubenswrapper[4776]: I1011 11:13:11.250074 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:11.318439 master-2 kubenswrapper[4776]: I1011 11:13:11.318372 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.318645 master-2 kubenswrapper[4776]: I1011 11:13:11.318533 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.318735 master-2 kubenswrapper[4776]: I1011 11:13:11.318669 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.420993 master-2 kubenswrapper[4776]: I1011 11:13:11.420919 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.420993 master-2 kubenswrapper[4776]: I1011 11:13:11.420996 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.421245 master-2 kubenswrapper[4776]: I1011 11:13:11.421071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.424081 master-2 kubenswrapper[4776]: I1011 11:13:11.424045 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.425408 master-2 
kubenswrapper[4776]: I1011 11:13:11.425374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.443897 master-2 kubenswrapper[4776]: I1011 11:13:11.443841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.549419 master-2 kubenswrapper[4776]: I1011 11:13:11.549285 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:12.074151 master-2 kubenswrapper[4776]: W1011 11:13:12.074101 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c47213_ac0b_4317_8d04_c62782b350ca.slice/crio-d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3 WatchSource:0}: Error finding container d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3: Status 404 returned error can't find the container with id d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3 Oct 11 11:13:12.074869 master-2 kubenswrapper[4776]: I1011 11:13:12.074832 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:12.568072 master-2 kubenswrapper[4776]: I1011 11:13:12.567999 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" 
event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerStarted","Data":"d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3"} Oct 11 11:13:13.575578 master-2 kubenswrapper[4776]: I1011 11:13:13.575504 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerStarted","Data":"10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c"} Oct 11 11:13:13.601058 master-2 kubenswrapper[4776]: I1011 11:13:13.600974 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-networker-deploy-networkers-7chrf" podStartSLOduration=2.212631386 podStartE2EDuration="2.600954073s" podCreationTimestamp="2025-10-11 11:13:11 +0000 UTC" firstStartedPulling="2025-10-11 11:13:12.076376419 +0000 UTC m=+2826.860803128" lastFinishedPulling="2025-10-11 11:13:12.464699106 +0000 UTC m=+2827.249125815" observedRunningTime="2025-10-11 11:13:13.600295606 +0000 UTC m=+2828.384722315" watchObservedRunningTime="2025-10-11 11:13:13.600954073 +0000 UTC m=+2828.385380782" Oct 11 11:13:21.073582 master-1 kubenswrapper[4771]: I1011 11:13:21.073475 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-dsphh"] Oct 11 11:13:21.085317 master-1 kubenswrapper[4771]: I1011 11:13:21.085212 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-dsphh"] Oct 11 11:13:22.451706 master-1 kubenswrapper[4771]: I1011 11:13:22.451624 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2681ddac-5e31-449b-bf71-fb54e8ba389c" path="/var/lib/kubelet/pods/2681ddac-5e31-449b-bf71-fb54e8ba389c/volumes" Oct 11 11:13:24.953989 master-1 kubenswrapper[4771]: I1011 11:13:24.953874 4771 scope.go:117] "RemoveContainer" containerID="649f9407421e9587f48e63d8c49281e9025d233acd0bf83983ed52b03d671758" Oct 11 11:13:24.988450 master-1 kubenswrapper[4771]: 
I1011 11:13:24.988342 4771 scope.go:117] "RemoveContainer" containerID="9343b079fc4a104c0cb1564885c29bbb8cf2e14829a4f1096f11a9696ef57edf" Oct 11 11:13:25.073109 master-1 kubenswrapper[4771]: I1011 11:13:25.072953 4771 scope.go:117] "RemoveContainer" containerID="24ea528e7f6dc70693d8dee3aad4fc9efa2cfed93954344bd9c5720391f051ef" Oct 11 11:13:25.139564 master-1 kubenswrapper[4771]: I1011 11:13:25.139488 4771 scope.go:117] "RemoveContainer" containerID="00d99e2ba51100c415ae5d1a3b19ce4ab68cd0b4655796bec9fd8f7ec75f10f8" Oct 11 11:13:25.198574 master-1 kubenswrapper[4771]: I1011 11:13:25.198507 4771 scope.go:117] "RemoveContainer" containerID="2433d4bbed13285c1fb5cb4b22ca8e93fb7e88d52d9a41f34c5f718dbcf8c96b" Oct 11 11:13:55.634389 master-1 kubenswrapper[4771]: I1011 11:13:55.634264 4771 generic.go:334] "Generic (PLEG): container finished" podID="89bef19a-554b-423c-b350-11b000bff73d" containerID="70de448794c96c4f984e2d3d72ad94feed3c008eba01f0c4522911ade0c71763" exitCode=2 Oct 11 11:13:55.635428 master-1 kubenswrapper[4771]: I1011 11:13:55.634381 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-t4d7h" event={"ID":"89bef19a-554b-423c-b350-11b000bff73d","Type":"ContainerDied","Data":"70de448794c96c4f984e2d3d72ad94feed3c008eba01f0c4522911ade0c71763"} Oct 11 11:13:57.285195 master-1 kubenswrapper[4771]: I1011 11:13:57.285106 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:13:57.483736 master-1 kubenswrapper[4771]: I1011 11:13:57.483582 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory\") pod \"89bef19a-554b-423c-b350-11b000bff73d\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " Oct 11 11:13:57.484090 master-1 kubenswrapper[4771]: I1011 11:13:57.483788 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmcl9\" (UniqueName: \"kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9\") pod \"89bef19a-554b-423c-b350-11b000bff73d\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " Oct 11 11:13:57.484090 master-1 kubenswrapper[4771]: I1011 11:13:57.483871 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key\") pod \"89bef19a-554b-423c-b350-11b000bff73d\" (UID: \"89bef19a-554b-423c-b350-11b000bff73d\") " Oct 11 11:13:57.489440 master-1 kubenswrapper[4771]: I1011 11:13:57.489332 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9" (OuterVolumeSpecName: "kube-api-access-hmcl9") pod "89bef19a-554b-423c-b350-11b000bff73d" (UID: "89bef19a-554b-423c-b350-11b000bff73d"). InnerVolumeSpecName "kube-api-access-hmcl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:13:57.513722 master-1 kubenswrapper[4771]: I1011 11:13:57.513621 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory" (OuterVolumeSpecName: "inventory") pod "89bef19a-554b-423c-b350-11b000bff73d" (UID: "89bef19a-554b-423c-b350-11b000bff73d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:13:57.514582 master-1 kubenswrapper[4771]: I1011 11:13:57.514510 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89bef19a-554b-423c-b350-11b000bff73d" (UID: "89bef19a-554b-423c-b350-11b000bff73d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:13:57.586640 master-1 kubenswrapper[4771]: I1011 11:13:57.586529 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:57.586640 master-1 kubenswrapper[4771]: I1011 11:13:57.586577 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmcl9\" (UniqueName: \"kubernetes.io/projected/89bef19a-554b-423c-b350-11b000bff73d-kube-api-access-hmcl9\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:57.586640 master-1 kubenswrapper[4771]: I1011 11:13:57.586604 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89bef19a-554b-423c-b350-11b000bff73d-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:13:57.656859 master-1 kubenswrapper[4771]: I1011 11:13:57.656737 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-t4d7h" event={"ID":"89bef19a-554b-423c-b350-11b000bff73d","Type":"ContainerDied","Data":"10b24d028221fc67acf0e5a32b8e9a6276196774b4fb61c0a673ef5c09860f57"} Oct 11 11:13:57.656859 master-1 kubenswrapper[4771]: I1011 11:13:57.656815 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b24d028221fc67acf0e5a32b8e9a6276196774b4fb61c0a673ef5c09860f57" Oct 11 11:13:57.657475 master-1 kubenswrapper[4771]: I1011 11:13:57.656944 4771 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-dataplane-edpm-t4d7h" Oct 11 11:14:05.475117 master-1 kubenswrapper[4771]: I1011 11:14:05.474986 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-dataplane-edpm-kndnx"] Oct 11 11:14:05.476274 master-1 kubenswrapper[4771]: E1011 11:14:05.475626 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d0c33f-c29a-4878-a73d-4fdc54ee4754" containerName="install-os-networker-deploy-networkers" Oct 11 11:14:05.476274 master-1 kubenswrapper[4771]: I1011 11:14:05.475715 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d0c33f-c29a-4878-a73d-4fdc54ee4754" containerName="install-os-networker-deploy-networkers" Oct 11 11:14:05.476274 master-1 kubenswrapper[4771]: E1011 11:14:05.475792 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bef19a-554b-423c-b350-11b000bff73d" containerName="configure-os-dataplane-edpm" Oct 11 11:14:05.476274 master-1 kubenswrapper[4771]: I1011 11:14:05.475801 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bef19a-554b-423c-b350-11b000bff73d" containerName="configure-os-dataplane-edpm" Oct 11 11:14:05.476642 master-1 kubenswrapper[4771]: I1011 11:14:05.476304 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d0c33f-c29a-4878-a73d-4fdc54ee4754" containerName="install-os-networker-deploy-networkers" Oct 11 11:14:05.476642 master-1 kubenswrapper[4771]: I1011 11:14:05.476325 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bef19a-554b-423c-b350-11b000bff73d" containerName="configure-os-dataplane-edpm" Oct 11 11:14:05.477518 master-1 kubenswrapper[4771]: I1011 11:14:05.477469 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.480684 master-1 kubenswrapper[4771]: I1011 11:14:05.480605 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:14:05.481124 master-1 kubenswrapper[4771]: I1011 11:14:05.481056 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:14:05.481221 master-1 kubenswrapper[4771]: I1011 11:14:05.481073 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:14:05.489572 master-1 kubenswrapper[4771]: I1011 11:14:05.489487 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-dataplane-edpm-kndnx"] Oct 11 11:14:05.492397 master-1 kubenswrapper[4771]: I1011 11:14:05.492301 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.492671 master-1 kubenswrapper[4771]: I1011 11:14:05.492626 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.492763 master-1 kubenswrapper[4771]: I1011 11:14:05.492671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dv8\" (UniqueName: \"kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: 
\"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.595875 master-1 kubenswrapper[4771]: I1011 11:14:05.595735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.595875 master-1 kubenswrapper[4771]: I1011 11:14:05.595850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dv8\" (UniqueName: \"kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.596320 master-1 kubenswrapper[4771]: I1011 11:14:05.595984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.602458 master-1 kubenswrapper[4771]: I1011 11:14:05.602399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.602634 master-1 kubenswrapper[4771]: I1011 11:14:05.602588 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: 
\"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.617172 master-1 kubenswrapper[4771]: I1011 11:14:05.617101 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dv8\" (UniqueName: \"kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8\") pod \"configure-os-dataplane-edpm-kndnx\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:05.817893 master-1 kubenswrapper[4771]: I1011 11:14:05.817792 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:06.496931 master-1 kubenswrapper[4771]: I1011 11:14:06.496850 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-dataplane-edpm-kndnx"] Oct 11 11:14:06.818907 master-1 kubenswrapper[4771]: I1011 11:14:06.818811 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-kndnx" event={"ID":"453fb91b-f029-45ef-abab-ee2e81fe09d4","Type":"ContainerStarted","Data":"9b21e0d2161f7c72225ba0d8f1f34b86ffb2212388c14f76d64766036b46fcbe"} Oct 11 11:14:07.038220 master-2 kubenswrapper[4776]: I1011 11:14:07.038139 4776 generic.go:334] "Generic (PLEG): container finished" podID="76c47213-ac0b-4317-8d04-c62782b350ca" containerID="10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c" exitCode=2 Oct 11 11:14:07.038220 master-2 kubenswrapper[4776]: I1011 11:14:07.038194 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerDied","Data":"10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c"} Oct 11 11:14:07.829289 master-1 kubenswrapper[4771]: I1011 11:14:07.829175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-dataplane-edpm-kndnx" event={"ID":"453fb91b-f029-45ef-abab-ee2e81fe09d4","Type":"ContainerStarted","Data":"e3721d28b852247005699682d334c78c4d7fde3a9e258d447e6fee94baa45c00"} Oct 11 11:14:07.860897 master-1 kubenswrapper[4771]: I1011 11:14:07.860782 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-dataplane-edpm-kndnx" podStartSLOduration=2.183146566 podStartE2EDuration="2.86073012s" podCreationTimestamp="2025-10-11 11:14:05 +0000 UTC" firstStartedPulling="2025-10-11 11:14:06.531541904 +0000 UTC m=+2878.505768355" lastFinishedPulling="2025-10-11 11:14:07.209125428 +0000 UTC m=+2879.183351909" observedRunningTime="2025-10-11 11:14:07.855318646 +0000 UTC m=+2879.829545087" watchObservedRunningTime="2025-10-11 11:14:07.86073012 +0000 UTC m=+2879.834956551" Oct 11 11:14:08.571408 master-2 kubenswrapper[4776]: I1011 11:14:08.571303 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:14:08.721486 master-2 kubenswrapper[4776]: I1011 11:14:08.721418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.721764 master-2 kubenswrapper[4776]: I1011 11:14:08.721734 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.721888 master-2 kubenswrapper[4776]: I1011 11:14:08.721843 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j4js\" (UniqueName: 
\"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.724919 master-2 kubenswrapper[4776]: I1011 11:14:08.724863 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js" (OuterVolumeSpecName: "kube-api-access-5j4js") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "kube-api-access-5j4js". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:14:08.745774 master-2 kubenswrapper[4776]: I1011 11:14:08.742861 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:08.752505 master-2 kubenswrapper[4776]: I1011 11:14:08.749645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory" (OuterVolumeSpecName: "inventory") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823895 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823934 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823943 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:09.064508 master-2 kubenswrapper[4776]: I1011 11:14:09.064387 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerDied","Data":"d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3"} Oct 11 11:14:09.064508 master-2 kubenswrapper[4776]: I1011 11:14:09.064429 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3" Oct 11 11:14:09.064871 master-2 kubenswrapper[4776]: I1011 11:14:09.064663 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:14:16.036363 master-2 kubenswrapper[4776]: I1011 11:14:16.036283 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: E1011 11:14:16.036779 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: I1011 11:14:16.036797 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: I1011 11:14:16.037030 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037872 master-2 kubenswrapper[4776]: I1011 11:14:16.037838 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.043099 master-2 kubenswrapper[4776]: I1011 11:14:16.043036 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:14:16.043176 master-2 kubenswrapper[4776]: I1011 11:14:16.043145 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:14:16.045218 master-2 kubenswrapper[4776]: I1011 11:14:16.045188 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:14:16.076276 master-2 kubenswrapper[4776]: I1011 11:14:16.076218 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:16.162383 master-2 kubenswrapper[4776]: I1011 11:14:16.162293 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.162628 master-2 kubenswrapper[4776]: I1011 11:14:16.162394 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.162628 master-2 kubenswrapper[4776]: I1011 11:14:16.162445 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: 
\"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265012 master-2 kubenswrapper[4776]: I1011 11:14:16.264948 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265228 master-2 kubenswrapper[4776]: I1011 11:14:16.265022 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265228 master-2 kubenswrapper[4776]: I1011 11:14:16.265060 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.268461 master-2 kubenswrapper[4776]: I1011 11:14:16.268424 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.270027 master-2 
kubenswrapper[4776]: I1011 11:14:16.269982 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.297580 master-2 kubenswrapper[4776]: I1011 11:14:16.297461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.360660 master-2 kubenswrapper[4776]: I1011 11:14:16.360551 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.972873 master-2 kubenswrapper[4776]: I1011 11:14:16.972804 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:17.127393 master-2 kubenswrapper[4776]: I1011 11:14:17.127224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerStarted","Data":"76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393"} Oct 11 11:14:18.136240 master-2 kubenswrapper[4776]: I1011 11:14:18.136163 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerStarted","Data":"08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f"} Oct 11 11:14:18.165298 master-2 kubenswrapper[4776]: I1011 11:14:18.165206 
4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-networker-deploy-networkers-8p46m" podStartSLOduration=1.743691896 podStartE2EDuration="2.16518972s" podCreationTimestamp="2025-10-11 11:14:16 +0000 UTC" firstStartedPulling="2025-10-11 11:14:16.972628032 +0000 UTC m=+2891.757054741" lastFinishedPulling="2025-10-11 11:14:17.394125846 +0000 UTC m=+2892.178552565" observedRunningTime="2025-10-11 11:14:18.162387484 +0000 UTC m=+2892.946814193" watchObservedRunningTime="2025-10-11 11:14:18.16518972 +0000 UTC m=+2892.949616429" Oct 11 11:14:55.333038 master-1 kubenswrapper[4771]: I1011 11:14:55.332955 4771 generic.go:334] "Generic (PLEG): container finished" podID="453fb91b-f029-45ef-abab-ee2e81fe09d4" containerID="e3721d28b852247005699682d334c78c4d7fde3a9e258d447e6fee94baa45c00" exitCode=0 Oct 11 11:14:55.333038 master-1 kubenswrapper[4771]: I1011 11:14:55.333028 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-kndnx" event={"ID":"453fb91b-f029-45ef-abab-ee2e81fe09d4","Type":"ContainerDied","Data":"e3721d28b852247005699682d334c78c4d7fde3a9e258d447e6fee94baa45c00"} Oct 11 11:14:56.981044 master-1 kubenswrapper[4771]: I1011 11:14:56.980962 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:57.090810 master-1 kubenswrapper[4771]: I1011 11:14:57.090642 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dv8\" (UniqueName: \"kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8\") pod \"453fb91b-f029-45ef-abab-ee2e81fe09d4\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " Oct 11 11:14:57.091224 master-1 kubenswrapper[4771]: I1011 11:14:57.090933 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key\") pod \"453fb91b-f029-45ef-abab-ee2e81fe09d4\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " Oct 11 11:14:57.091224 master-1 kubenswrapper[4771]: I1011 11:14:57.091143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory\") pod \"453fb91b-f029-45ef-abab-ee2e81fe09d4\" (UID: \"453fb91b-f029-45ef-abab-ee2e81fe09d4\") " Oct 11 11:14:57.097492 master-1 kubenswrapper[4771]: I1011 11:14:57.094383 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8" (OuterVolumeSpecName: "kube-api-access-j5dv8") pod "453fb91b-f029-45ef-abab-ee2e81fe09d4" (UID: "453fb91b-f029-45ef-abab-ee2e81fe09d4"). InnerVolumeSpecName "kube-api-access-j5dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:14:57.113658 master-1 kubenswrapper[4771]: I1011 11:14:57.113488 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory" (OuterVolumeSpecName: "inventory") pod "453fb91b-f029-45ef-abab-ee2e81fe09d4" (UID: "453fb91b-f029-45ef-abab-ee2e81fe09d4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:57.122094 master-1 kubenswrapper[4771]: I1011 11:14:57.122040 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "453fb91b-f029-45ef-abab-ee2e81fe09d4" (UID: "453fb91b-f029-45ef-abab-ee2e81fe09d4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:57.194606 master-1 kubenswrapper[4771]: I1011 11:14:57.194535 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:14:57.194606 master-1 kubenswrapper[4771]: I1011 11:14:57.194590 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dv8\" (UniqueName: \"kubernetes.io/projected/453fb91b-f029-45ef-abab-ee2e81fe09d4-kube-api-access-j5dv8\") on node \"master-1\" DevicePath \"\"" Oct 11 11:14:57.194606 master-1 kubenswrapper[4771]: I1011 11:14:57.194606 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/453fb91b-f029-45ef-abab-ee2e81fe09d4-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:14:57.353733 master-1 kubenswrapper[4771]: I1011 11:14:57.353641 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-dataplane-edpm-kndnx" event={"ID":"453fb91b-f029-45ef-abab-ee2e81fe09d4","Type":"ContainerDied","Data":"9b21e0d2161f7c72225ba0d8f1f34b86ffb2212388c14f76d64766036b46fcbe"} Oct 11 11:14:57.353733 master-1 kubenswrapper[4771]: I1011 11:14:57.353710 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b21e0d2161f7c72225ba0d8f1f34b86ffb2212388c14f76d64766036b46fcbe" Oct 11 11:14:57.353733 master-1 kubenswrapper[4771]: I1011 11:14:57.353721 4771 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-dataplane-edpm-kndnx" Oct 11 11:14:57.505542 master-1 kubenswrapper[4771]: I1011 11:14:57.505462 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-dataplane-wv265"] Oct 11 11:14:57.505913 master-1 kubenswrapper[4771]: E1011 11:14:57.505898 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453fb91b-f029-45ef-abab-ee2e81fe09d4" containerName="configure-os-dataplane-edpm" Oct 11 11:14:57.505994 master-1 kubenswrapper[4771]: I1011 11:14:57.505914 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="453fb91b-f029-45ef-abab-ee2e81fe09d4" containerName="configure-os-dataplane-edpm" Oct 11 11:14:57.506771 master-1 kubenswrapper[4771]: I1011 11:14:57.506739 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="453fb91b-f029-45ef-abab-ee2e81fe09d4" containerName="configure-os-dataplane-edpm" Oct 11 11:14:57.508240 master-1 kubenswrapper[4771]: I1011 11:14:57.508203 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.512679 master-1 kubenswrapper[4771]: I1011 11:14:57.512630 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:14:57.512989 master-1 kubenswrapper[4771]: I1011 11:14:57.512648 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:14:57.513163 master-1 kubenswrapper[4771]: I1011 11:14:57.513100 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:14:57.519811 master-1 kubenswrapper[4771]: I1011 11:14:57.519740 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-dataplane-wv265"] Oct 11 11:14:57.606496 master-1 kubenswrapper[4771]: I1011 11:14:57.606422 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.606496 master-1 kubenswrapper[4771]: I1011 11:14:57.606487 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59cj\" (UniqueName: \"kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.606942 master-1 kubenswrapper[4771]: I1011 11:14:57.606852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-edpm\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: 
\"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.710145 master-1 kubenswrapper[4771]: I1011 11:14:57.709972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-edpm\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.710473 master-1 kubenswrapper[4771]: I1011 11:14:57.710195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.710473 master-1 kubenswrapper[4771]: I1011 11:14:57.710237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59cj\" (UniqueName: \"kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.714599 master-1 kubenswrapper[4771]: I1011 11:14:57.714529 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.714675 master-1 kubenswrapper[4771]: I1011 11:14:57.714624 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-edpm\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm\") pod \"ssh-known-hosts-dataplane-wv265\" 
(UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.734524 master-1 kubenswrapper[4771]: I1011 11:14:57.734465 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59cj\" (UniqueName: \"kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj\") pod \"ssh-known-hosts-dataplane-wv265\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:57.836852 master-1 kubenswrapper[4771]: I1011 11:14:57.836757 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:14:58.417536 master-1 kubenswrapper[4771]: I1011 11:14:58.417452 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-dataplane-wv265"] Oct 11 11:14:59.380460 master-1 kubenswrapper[4771]: I1011 11:14:59.380327 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-dataplane-wv265" event={"ID":"e0206b4d-5dcd-4fef-9265-f952c8f0023c","Type":"ContainerStarted","Data":"ad9ad2f14084bffb4b6b64c241efd045693af3e205d809c8184e6e7d059cbc2c"} Oct 11 11:14:59.380460 master-1 kubenswrapper[4771]: I1011 11:14:59.380452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-dataplane-wv265" event={"ID":"e0206b4d-5dcd-4fef-9265-f952c8f0023c","Type":"ContainerStarted","Data":"2da56407dab575ae818c42606fee39da936da62ad1fe6ce7dc8cd66a01e86623"} Oct 11 11:14:59.413574 master-1 kubenswrapper[4771]: I1011 11:14:59.413172 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-dataplane-wv265" podStartSLOduration=1.987718166 podStartE2EDuration="2.413146331s" podCreationTimestamp="2025-10-11 11:14:57 +0000 UTC" firstStartedPulling="2025-10-11 11:14:58.420226509 +0000 UTC m=+2930.394452950" lastFinishedPulling="2025-10-11 11:14:58.845654654 
+0000 UTC m=+2930.819881115" observedRunningTime="2025-10-11 11:14:59.407369236 +0000 UTC m=+2931.381595677" watchObservedRunningTime="2025-10-11 11:14:59.413146331 +0000 UTC m=+2931.387372772" Oct 11 11:15:00.179730 master-1 kubenswrapper[4771]: I1011 11:15:00.179635 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg"] Oct 11 11:15:00.181941 master-1 kubenswrapper[4771]: I1011 11:15:00.181895 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.185781 master-1 kubenswrapper[4771]: I1011 11:15:00.185708 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-hbjq2" Oct 11 11:15:00.186288 master-1 kubenswrapper[4771]: I1011 11:15:00.186198 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 11:15:00.187551 master-1 kubenswrapper[4771]: I1011 11:15:00.187512 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 11:15:00.189296 master-1 kubenswrapper[4771]: I1011 11:15:00.189217 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg"] Oct 11 11:15:00.375160 master-1 kubenswrapper[4771]: I1011 11:15:00.375034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrz6\" (UniqueName: \"kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.376166 master-1 kubenswrapper[4771]: I1011 11:15:00.376071 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.376610 master-1 kubenswrapper[4771]: I1011 11:15:00.376568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.480677 master-1 kubenswrapper[4771]: I1011 11:15:00.480467 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.481016 master-1 kubenswrapper[4771]: I1011 11:15:00.480965 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.481195 master-1 kubenswrapper[4771]: I1011 11:15:00.481147 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrz6\" (UniqueName: \"kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6\") pod 
\"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.482142 master-1 kubenswrapper[4771]: I1011 11:15:00.482084 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.516627 master-1 kubenswrapper[4771]: I1011 11:15:00.516551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.521686 master-1 kubenswrapper[4771]: I1011 11:15:00.520119 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrz6\" (UniqueName: \"kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6\") pod \"collect-profiles-29336355-mmqkg\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:00.816268 master-1 kubenswrapper[4771]: I1011 11:15:00.816152 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:01.320490 master-1 kubenswrapper[4771]: I1011 11:15:01.316690 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg"] Oct 11 11:15:01.405501 master-1 kubenswrapper[4771]: I1011 11:15:01.405421 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" event={"ID":"9e6bbddb-86b5-4be5-b585-b318e8254f1e","Type":"ContainerStarted","Data":"f6867e711943ee7453d11ebd357f7902a3afb229518b252be5c7f06ed5a4e6ab"} Oct 11 11:15:02.420269 master-1 kubenswrapper[4771]: I1011 11:15:02.420183 4771 generic.go:334] "Generic (PLEG): container finished" podID="9e6bbddb-86b5-4be5-b585-b318e8254f1e" containerID="8ab819b3c7d07574afc9eec1d49bf5d6d0fcbbd18337a802948a48143d1a402e" exitCode=0 Oct 11 11:15:02.420269 master-1 kubenswrapper[4771]: I1011 11:15:02.420237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" event={"ID":"9e6bbddb-86b5-4be5-b585-b318e8254f1e","Type":"ContainerDied","Data":"8ab819b3c7d07574afc9eec1d49bf5d6d0fcbbd18337a802948a48143d1a402e"} Oct 11 11:15:03.962828 master-1 kubenswrapper[4771]: I1011 11:15:03.962736 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:04.084856 master-1 kubenswrapper[4771]: I1011 11:15:04.084786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume\") pod \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " Oct 11 11:15:04.085080 master-1 kubenswrapper[4771]: I1011 11:15:04.085042 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmrz6\" (UniqueName: \"kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6\") pod \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " Oct 11 11:15:04.086029 master-1 kubenswrapper[4771]: I1011 11:15:04.085985 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e6bbddb-86b5-4be5-b585-b318e8254f1e" (UID: "9e6bbddb-86b5-4be5-b585-b318e8254f1e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:15:04.086103 master-1 kubenswrapper[4771]: I1011 11:15:04.085351 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume\") pod \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\" (UID: \"9e6bbddb-86b5-4be5-b585-b318e8254f1e\") " Oct 11 11:15:04.086875 master-1 kubenswrapper[4771]: I1011 11:15:04.086828 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6bbddb-86b5-4be5-b585-b318e8254f1e-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:04.090599 master-1 kubenswrapper[4771]: I1011 11:15:04.090555 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e6bbddb-86b5-4be5-b585-b318e8254f1e" (UID: "9e6bbddb-86b5-4be5-b585-b318e8254f1e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:04.094316 master-1 kubenswrapper[4771]: I1011 11:15:04.094245 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6" (OuterVolumeSpecName: "kube-api-access-rmrz6") pod "9e6bbddb-86b5-4be5-b585-b318e8254f1e" (UID: "9e6bbddb-86b5-4be5-b585-b318e8254f1e"). InnerVolumeSpecName "kube-api-access-rmrz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:15:04.189908 master-1 kubenswrapper[4771]: I1011 11:15:04.189838 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmrz6\" (UniqueName: \"kubernetes.io/projected/9e6bbddb-86b5-4be5-b585-b318e8254f1e-kube-api-access-rmrz6\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:04.189908 master-1 kubenswrapper[4771]: I1011 11:15:04.189897 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6bbddb-86b5-4be5-b585-b318e8254f1e-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:04.445932 master-1 kubenswrapper[4771]: I1011 11:15:04.445765 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" Oct 11 11:15:04.454521 master-1 kubenswrapper[4771]: I1011 11:15:04.454432 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg" event={"ID":"9e6bbddb-86b5-4be5-b585-b318e8254f1e","Type":"ContainerDied","Data":"f6867e711943ee7453d11ebd357f7902a3afb229518b252be5c7f06ed5a4e6ab"} Oct 11 11:15:04.454521 master-1 kubenswrapper[4771]: I1011 11:15:04.454504 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6867e711943ee7453d11ebd357f7902a3afb229518b252be5c7f06ed5a4e6ab" Oct 11 11:15:05.094373 master-1 kubenswrapper[4771]: I1011 11:15:05.094281 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"] Oct 11 11:15:05.102391 master-1 kubenswrapper[4771]: I1011 11:15:05.102306 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v"] Oct 11 11:15:06.454153 master-1 kubenswrapper[4771]: I1011 11:15:06.454050 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f4904d67-3c44-40d9-8ea8-026d727e9486" path="/var/lib/kubelet/pods/f4904d67-3c44-40d9-8ea8-026d727e9486/volumes" Oct 11 11:15:06.470327 master-1 kubenswrapper[4771]: I1011 11:15:06.470053 4771 generic.go:334] "Generic (PLEG): container finished" podID="e0206b4d-5dcd-4fef-9265-f952c8f0023c" containerID="ad9ad2f14084bffb4b6b64c241efd045693af3e205d809c8184e6e7d059cbc2c" exitCode=0 Oct 11 11:15:06.470327 master-1 kubenswrapper[4771]: I1011 11:15:06.470132 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-dataplane-wv265" event={"ID":"e0206b4d-5dcd-4fef-9265-f952c8f0023c","Type":"ContainerDied","Data":"ad9ad2f14084bffb4b6b64c241efd045693af3e205d809c8184e6e7d059cbc2c"} Oct 11 11:15:08.144561 master-1 kubenswrapper[4771]: I1011 11:15:08.144513 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-dataplane-wv265" Oct 11 11:15:08.189910 master-1 kubenswrapper[4771]: I1011 11:15:08.189829 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0\") pod \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " Oct 11 11:15:08.190150 master-1 kubenswrapper[4771]: I1011 11:15:08.190048 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p59cj\" (UniqueName: \"kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj\") pod \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\" (UID: \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") " Oct 11 11:15:08.190443 master-1 kubenswrapper[4771]: I1011 11:15:08.190408 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-edpm\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm\") pod \"e0206b4d-5dcd-4fef-9265-f952c8f0023c\" (UID: 
\"e0206b4d-5dcd-4fef-9265-f952c8f0023c\") "
Oct 11 11:15:08.196934 master-1 kubenswrapper[4771]: I1011 11:15:08.196853 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj" (OuterVolumeSpecName: "kube-api-access-p59cj") pod "e0206b4d-5dcd-4fef-9265-f952c8f0023c" (UID: "e0206b4d-5dcd-4fef-9265-f952c8f0023c"). InnerVolumeSpecName "kube-api-access-p59cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:15:08.213570 master-1 kubenswrapper[4771]: I1011 11:15:08.213477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e0206b4d-5dcd-4fef-9265-f952c8f0023c" (UID: "e0206b4d-5dcd-4fef-9265-f952c8f0023c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:08.216928 master-1 kubenswrapper[4771]: I1011 11:15:08.216850 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm" (OuterVolumeSpecName: "ssh-key-edpm") pod "e0206b4d-5dcd-4fef-9265-f952c8f0023c" (UID: "e0206b4d-5dcd-4fef-9265-f952c8f0023c"). InnerVolumeSpecName "ssh-key-edpm". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:08.293722 master-1 kubenswrapper[4771]: I1011 11:15:08.293628 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-edpm\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-ssh-key-edpm\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:08.293722 master-1 kubenswrapper[4771]: I1011 11:15:08.293686 4771 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0206b4d-5dcd-4fef-9265-f952c8f0023c-inventory-0\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:08.293722 master-1 kubenswrapper[4771]: I1011 11:15:08.293699 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p59cj\" (UniqueName: \"kubernetes.io/projected/e0206b4d-5dcd-4fef-9265-f952c8f0023c-kube-api-access-p59cj\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:08.491289 master-1 kubenswrapper[4771]: I1011 11:15:08.491228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-dataplane-wv265" event={"ID":"e0206b4d-5dcd-4fef-9265-f952c8f0023c","Type":"ContainerDied","Data":"2da56407dab575ae818c42606fee39da936da62ad1fe6ce7dc8cd66a01e86623"}
Oct 11 11:15:08.491464 master-1 kubenswrapper[4771]: I1011 11:15:08.491292 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da56407dab575ae818c42606fee39da936da62ad1fe6ce7dc8cd66a01e86623"
Oct 11 11:15:08.491464 master-1 kubenswrapper[4771]: I1011 11:15:08.491389 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-dataplane-wv265"
Oct 11 11:15:08.580080 master-2 kubenswrapper[4776]: I1011 11:15:08.580008 4776 generic.go:334] "Generic (PLEG): container finished" podID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerID="08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f" exitCode=0
Oct 11 11:15:08.580663 master-2 kubenswrapper[4776]: I1011 11:15:08.580087 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerDied","Data":"08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f"}
Oct 11 11:15:08.788331 master-1 kubenswrapper[4771]: I1011 11:15:08.788069 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-dataplane-edpm-2b9k2"]
Oct 11 11:15:08.789016 master-1 kubenswrapper[4771]: E1011 11:15:08.788955 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0206b4d-5dcd-4fef-9265-f952c8f0023c" containerName="ssh-known-hosts-dataplane"
Oct 11 11:15:08.789016 master-1 kubenswrapper[4771]: I1011 11:15:08.788997 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0206b4d-5dcd-4fef-9265-f952c8f0023c" containerName="ssh-known-hosts-dataplane"
Oct 11 11:15:08.789211 master-1 kubenswrapper[4771]: E1011 11:15:08.789031 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6bbddb-86b5-4be5-b585-b318e8254f1e" containerName="collect-profiles"
Oct 11 11:15:08.789211 master-1 kubenswrapper[4771]: I1011 11:15:08.789046 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6bbddb-86b5-4be5-b585-b318e8254f1e" containerName="collect-profiles"
Oct 11 11:15:08.789428 master-1 kubenswrapper[4771]: I1011 11:15:08.789350 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0206b4d-5dcd-4fef-9265-f952c8f0023c" containerName="ssh-known-hosts-dataplane"
Oct 11 11:15:08.789596 master-1 kubenswrapper[4771]: I1011 11:15:08.789564 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6bbddb-86b5-4be5-b585-b318e8254f1e" containerName="collect-profiles"
Oct 11 11:15:08.790881 master-1 kubenswrapper[4771]: I1011 11:15:08.790828 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.795472 master-1 kubenswrapper[4771]: I1011 11:15:08.795334 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm"
Oct 11 11:15:08.795957 master-1 kubenswrapper[4771]: I1011 11:15:08.795876 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 11:15:08.795957 master-1 kubenswrapper[4771]: I1011 11:15:08.795887 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 11:15:08.804393 master-1 kubenswrapper[4771]: I1011 11:15:08.802556 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-dataplane-edpm-2b9k2"]
Oct 11 11:15:08.817423 master-1 kubenswrapper[4771]: I1011 11:15:08.817289 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.817865 master-1 kubenswrapper[4771]: I1011 11:15:08.817616 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.817865 master-1 kubenswrapper[4771]: I1011 11:15:08.817681 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqqv\" (UniqueName: \"kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.920498 master-1 kubenswrapper[4771]: I1011 11:15:08.920413 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.920867 master-1 kubenswrapper[4771]: I1011 11:15:08.920505 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqqv\" (UniqueName: \"kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.920867 master-1 kubenswrapper[4771]: I1011 11:15:08.920725 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.926177 master-1 kubenswrapper[4771]: I1011 11:15:08.926112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.926267 master-1 kubenswrapper[4771]: I1011 11:15:08.926191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:08.941006 master-1 kubenswrapper[4771]: I1011 11:15:08.940942 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqqv\" (UniqueName: \"kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv\") pod \"run-os-dataplane-edpm-2b9k2\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") " pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:09.126605 master-1 kubenswrapper[4771]: I1011 11:15:09.126432 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:09.713994 master-1 kubenswrapper[4771]: I1011 11:15:09.713890 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-dataplane-edpm-2b9k2"]
Oct 11 11:15:09.727752 master-1 kubenswrapper[4771]: I1011 11:15:09.727689 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 11:15:10.513780 master-1 kubenswrapper[4771]: I1011 11:15:10.513643 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-dataplane-edpm-2b9k2" event={"ID":"c7fb916b-469a-46ca-806b-bd61c8b674fd","Type":"ContainerStarted","Data":"15ef9683cf31a8e289e74a845c8cbc6879c65abafe501299d63d536a993eae47"}
Oct 11 11:15:10.513780 master-1 kubenswrapper[4771]: I1011 11:15:10.513726 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-dataplane-edpm-2b9k2" event={"ID":"c7fb916b-469a-46ca-806b-bd61c8b674fd","Type":"ContainerStarted","Data":"f5d26dd462306ae00b18c5ee42db79300aa348307309be5e3086dc5621c421a3"}
Oct 11 11:15:10.574873 master-2 kubenswrapper[4776]: I1011 11:15:10.574835 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m"
Oct 11 11:15:10.599789 master-2 kubenswrapper[4776]: I1011 11:15:10.599741 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerDied","Data":"76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393"}
Oct 11 11:15:10.599789 master-2 kubenswrapper[4776]: I1011 11:15:10.599794 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393"
Oct 11 11:15:10.600038 master-2 kubenswrapper[4776]: I1011 11:15:10.599801 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m"
Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664371 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") "
Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664464 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") "
Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") "
Oct 11 11:15:10.668253 master-2 kubenswrapper[4776]: I1011 11:15:10.668198 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj" (OuterVolumeSpecName: "kube-api-access-hmdcj") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "kube-api-access-hmdcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:15:10.684632 master-2 kubenswrapper[4776]: I1011 11:15:10.684592 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:10.687168 master-2 kubenswrapper[4776]: I1011 11:15:10.687125 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory" (OuterVolumeSpecName: "inventory") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766898 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766965 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766993 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:10.861214 master-2 kubenswrapper[4776]: I1011 11:15:10.861155 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"]
Oct 11 11:15:10.861515 master-2 kubenswrapper[4776]: E1011 11:15:10.861496 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers"
Oct 11 11:15:10.861515 master-2 kubenswrapper[4776]: I1011 11:15:10.861512 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers"
Oct 11 11:15:10.861768 master-2 kubenswrapper[4776]: I1011 11:15:10.861748 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers"
Oct 11 11:15:10.862526 master-2 kubenswrapper[4776]: I1011 11:15:10.862504 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:10.926840 master-2 kubenswrapper[4776]: I1011 11:15:10.926787 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"]
Oct 11 11:15:10.970171 master-2 kubenswrapper[4776]: I1011 11:15:10.970124 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:10.970510 master-2 kubenswrapper[4776]: I1011 11:15:10.970494 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:10.970597 master-2 kubenswrapper[4776]: I1011 11:15:10.970585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.072357 master-2 kubenswrapper[4776]: I1011 11:15:11.072216 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.072357 master-2 kubenswrapper[4776]: I1011 11:15:11.072280 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.072738 master-2 kubenswrapper[4776]: I1011 11:15:11.072395 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.075886 master-2 kubenswrapper[4776]: I1011 11:15:11.075838 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.078059 master-2 kubenswrapper[4776]: I1011 11:15:11.078025 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.093337 master-2 kubenswrapper[4776]: I1011 11:15:11.093290 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:11.175826 master-2 kubenswrapper[4776]: I1011 11:15:11.175766 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:12.171404 master-2 kubenswrapper[4776]: W1011 11:15:12.171350 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d499e5e_9473_4609_a496_3d6005471c60.slice/crio-741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf WatchSource:0}: Error finding container 741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf: Status 404 returned error can't find the container with id 741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf
Oct 11 11:15:12.172313 master-2 kubenswrapper[4776]: I1011 11:15:12.172259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"]
Oct 11 11:15:12.614682 master-2 kubenswrapper[4776]: I1011 11:15:12.614528 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerStarted","Data":"741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf"}
Oct 11 11:15:13.622704 master-2 kubenswrapper[4776]: I1011 11:15:13.621931 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerStarted","Data":"76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa"}
Oct 11 11:15:13.658156 master-2 kubenswrapper[4776]: I1011 11:15:13.658054 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-networker-deploy-br44d" podStartSLOduration=3.187304554 podStartE2EDuration="3.658033632s" podCreationTimestamp="2025-10-11 11:15:10 +0000 UTC" firstStartedPulling="2025-10-11 11:15:12.173389876 +0000 UTC m=+2946.957816585" lastFinishedPulling="2025-10-11 11:15:12.644118954 +0000 UTC m=+2947.428545663" observedRunningTime="2025-10-11 11:15:13.646329957 +0000 UTC m=+2948.430756676" watchObservedRunningTime="2025-10-11 11:15:13.658033632 +0000 UTC m=+2948.442460341"
Oct 11 11:15:19.619587 master-1 kubenswrapper[4771]: I1011 11:15:19.619428 4771 generic.go:334] "Generic (PLEG): container finished" podID="c7fb916b-469a-46ca-806b-bd61c8b674fd" containerID="15ef9683cf31a8e289e74a845c8cbc6879c65abafe501299d63d536a993eae47" exitCode=0
Oct 11 11:15:19.619587 master-1 kubenswrapper[4771]: I1011 11:15:19.619542 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-dataplane-edpm-2b9k2" event={"ID":"c7fb916b-469a-46ca-806b-bd61c8b674fd","Type":"ContainerDied","Data":"15ef9683cf31a8e289e74a845c8cbc6879c65abafe501299d63d536a993eae47"}
Oct 11 11:15:20.682881 master-2 kubenswrapper[4776]: I1011 11:15:20.682800 4776 generic.go:334] "Generic (PLEG): container finished" podID="8d499e5e-9473-4609-a496-3d6005471c60" containerID="76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa" exitCode=0
Oct 11 11:15:20.682881 master-2 kubenswrapper[4776]: I1011 11:15:20.682875 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerDied","Data":"76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa"}
Oct 11 11:15:21.286982 master-1 kubenswrapper[4771]: I1011 11:15:21.286909 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:21.457763 master-1 kubenswrapper[4771]: I1011 11:15:21.457669 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key\") pod \"c7fb916b-469a-46ca-806b-bd61c8b674fd\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") "
Oct 11 11:15:21.457763 master-1 kubenswrapper[4771]: I1011 11:15:21.457753 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqqv\" (UniqueName: \"kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv\") pod \"c7fb916b-469a-46ca-806b-bd61c8b674fd\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") "
Oct 11 11:15:21.458118 master-1 kubenswrapper[4771]: I1011 11:15:21.457814 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory\") pod \"c7fb916b-469a-46ca-806b-bd61c8b674fd\" (UID: \"c7fb916b-469a-46ca-806b-bd61c8b674fd\") "
Oct 11 11:15:21.471910 master-1 kubenswrapper[4771]: I1011 11:15:21.471758 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv" (OuterVolumeSpecName: "kube-api-access-lkqqv") pod "c7fb916b-469a-46ca-806b-bd61c8b674fd" (UID: "c7fb916b-469a-46ca-806b-bd61c8b674fd"). InnerVolumeSpecName "kube-api-access-lkqqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:15:21.501734 master-1 kubenswrapper[4771]: I1011 11:15:21.501649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory" (OuterVolumeSpecName: "inventory") pod "c7fb916b-469a-46ca-806b-bd61c8b674fd" (UID: "c7fb916b-469a-46ca-806b-bd61c8b674fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:21.514622 master-1 kubenswrapper[4771]: I1011 11:15:21.514532 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7fb916b-469a-46ca-806b-bd61c8b674fd" (UID: "c7fb916b-469a-46ca-806b-bd61c8b674fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:21.560979 master-1 kubenswrapper[4771]: I1011 11:15:21.560896 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-ssh-key\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:21.560979 master-1 kubenswrapper[4771]: I1011 11:15:21.560959 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqqv\" (UniqueName: \"kubernetes.io/projected/c7fb916b-469a-46ca-806b-bd61c8b674fd-kube-api-access-lkqqv\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:21.560979 master-1 kubenswrapper[4771]: I1011 11:15:21.560983 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7fb916b-469a-46ca-806b-bd61c8b674fd-inventory\") on node \"master-1\" DevicePath \"\""
Oct 11 11:15:21.660470 master-1 kubenswrapper[4771]: I1011 11:15:21.658260 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-dataplane-edpm-2b9k2" event={"ID":"c7fb916b-469a-46ca-806b-bd61c8b674fd","Type":"ContainerDied","Data":"f5d26dd462306ae00b18c5ee42db79300aa348307309be5e3086dc5621c421a3"}
Oct 11 11:15:21.660470 master-1 kubenswrapper[4771]: I1011 11:15:21.658331 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d26dd462306ae00b18c5ee42db79300aa348307309be5e3086dc5621c421a3"
Oct 11 11:15:21.660470 master-1 kubenswrapper[4771]: I1011 11:15:21.658413 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-dataplane-edpm-2b9k2"
Oct 11 11:15:21.768793 master-1 kubenswrapper[4771]: I1011 11:15:21.768599 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-dataplane-edpm-nvx25"]
Oct 11 11:15:21.769127 master-1 kubenswrapper[4771]: E1011 11:15:21.769070 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fb916b-469a-46ca-806b-bd61c8b674fd" containerName="run-os-dataplane-edpm"
Oct 11 11:15:21.769127 master-1 kubenswrapper[4771]: I1011 11:15:21.769091 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fb916b-469a-46ca-806b-bd61c8b674fd" containerName="run-os-dataplane-edpm"
Oct 11 11:15:21.769427 master-1 kubenswrapper[4771]: I1011 11:15:21.769306 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fb916b-469a-46ca-806b-bd61c8b674fd" containerName="run-os-dataplane-edpm"
Oct 11 11:15:21.770304 master-1 kubenswrapper[4771]: I1011 11:15:21.770238 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:21.773607 master-1 kubenswrapper[4771]: I1011 11:15:21.773532 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm"
Oct 11 11:15:21.774076 master-1 kubenswrapper[4771]: I1011 11:15:21.774001 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 11:15:21.774735 master-1 kubenswrapper[4771]: I1011 11:15:21.774675 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 11:15:21.787441 master-1 kubenswrapper[4771]: I1011 11:15:21.787273 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-dataplane-edpm-nvx25"]
Oct 11 11:15:21.971709 master-1 kubenswrapper[4771]: I1011 11:15:21.971588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6qz\" (UniqueName: \"kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:21.972054 master-1 kubenswrapper[4771]: I1011 11:15:21.971827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:21.972054 master-1 kubenswrapper[4771]: I1011 11:15:21.971893 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.073341 master-1 kubenswrapper[4771]: I1011 11:15:22.073242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6qz\" (UniqueName: \"kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.073672 master-1 kubenswrapper[4771]: I1011 11:15:22.073423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.073672 master-1 kubenswrapper[4771]: I1011 11:15:22.073482 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.077697 master-1 kubenswrapper[4771]: I1011 11:15:22.077633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.080014 master-1 kubenswrapper[4771]: I1011 11:15:22.079930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.107326 master-1 kubenswrapper[4771]: I1011 11:15:22.107206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6qz\" (UniqueName: \"kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz\") pod \"reboot-os-dataplane-edpm-nvx25\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") " pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.159718 master-1 kubenswrapper[4771]: I1011 11:15:22.159651 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:15:22.212122 master-2 kubenswrapper[4776]: I1011 11:15:22.212071 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:22.315771 master-2 kubenswrapper[4776]: I1011 11:15:22.315690 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") "
Oct 11 11:15:22.316037 master-2 kubenswrapper[4776]: I1011 11:15:22.315888 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") "
Oct 11 11:15:22.316037 master-2 kubenswrapper[4776]: I1011 11:15:22.315968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") "
Oct 11 11:15:22.318727 master-2 kubenswrapper[4776]: I1011 11:15:22.318645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647" (OuterVolumeSpecName: "kube-api-access-4k647") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "kube-api-access-4k647". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:15:22.338383 master-2 kubenswrapper[4776]: I1011 11:15:22.338319 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:22.344097 master-2 kubenswrapper[4776]: I1011 11:15:22.344061 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers" (OuterVolumeSpecName: "ssh-key-networkers") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "ssh-key-networkers". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.417975 4776 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.418028 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.418044 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") on node \"master-2\" DevicePath \"\""
Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706102 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerDied","Data":"741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf"}
Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706143 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf"
Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706154 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d"
Oct 11 11:15:22.852210 master-1 kubenswrapper[4771]: I1011 11:15:22.851833 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-networker-deploy-networkers-8hnn8"]
Oct 11 11:15:22.857929 master-1 kubenswrapper[4771]: I1011 11:15:22.857430 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-networker-deploy-networkers-8hnn8"
Oct 11 11:15:22.862561 master-1 kubenswrapper[4771]: I1011 11:15:22.861907 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers"
Oct 11 11:15:22.871727 master-1 kubenswrapper[4771]: I1011 11:15:22.871658 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-dataplane-edpm-nvx25"]
Oct 11 11:15:22.882108 master-1 kubenswrapper[4771]: I1011 11:15:22.881955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-networker-deploy-networkers-8hnn8"]
Oct 11 11:15:23.006082 master-1 kubenswrapper[4771]: I1011 11:15:23.005982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8"
Oct 11 11:15:23.006373 master-1 kubenswrapper[4771]: I1011 11:15:23.006104 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-255x2\" (UniqueName: \"kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8"
Oct 11 11:15:23.006373 master-1 kubenswrapper[4771]: I1011 11:15:23.006176 4771
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.110518 master-1 kubenswrapper[4771]: I1011 11:15:23.110278 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.110518 master-1 kubenswrapper[4771]: I1011 11:15:23.110456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-255x2\" (UniqueName: \"kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.110980 master-1 kubenswrapper[4771]: I1011 11:15:23.110541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.115093 master-1 kubenswrapper[4771]: I1011 11:15:23.115021 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " 
pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.116295 master-1 kubenswrapper[4771]: I1011 11:15:23.116235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.129937 master-1 kubenswrapper[4771]: I1011 11:15:23.129850 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-255x2\" (UniqueName: \"kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2\") pod \"run-os-networker-deploy-networkers-8hnn8\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.223605 master-1 kubenswrapper[4771]: I1011 11:15:23.223500 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:23.698510 master-1 kubenswrapper[4771]: I1011 11:15:23.698394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-dataplane-edpm-nvx25" event={"ID":"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff","Type":"ContainerStarted","Data":"7dad6181f616c54767b9aa28947089dd8d2181c28cfd68675c9cdbe33f1d675b"} Oct 11 11:15:23.698820 master-1 kubenswrapper[4771]: I1011 11:15:23.698518 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-dataplane-edpm-nvx25" event={"ID":"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff","Type":"ContainerStarted","Data":"38d65c631695208ab4af38ed5f867d266e1a6cf55f11c5f792930a86a2ba576d"} Oct 11 11:15:23.725612 master-1 kubenswrapper[4771]: I1011 11:15:23.725468 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-dataplane-edpm-nvx25" podStartSLOduration=2.196868979 podStartE2EDuration="2.725440853s" podCreationTimestamp="2025-10-11 11:15:21 +0000 UTC" firstStartedPulling="2025-10-11 11:15:22.856959959 +0000 UTC m=+2954.831186410" lastFinishedPulling="2025-10-11 11:15:23.385531843 +0000 UTC m=+2955.359758284" observedRunningTime="2025-10-11 11:15:23.724479075 +0000 UTC m=+2955.698705526" watchObservedRunningTime="2025-10-11 11:15:23.725440853 +0000 UTC m=+2955.699667294" Oct 11 11:15:23.845951 master-1 kubenswrapper[4771]: W1011 11:15:23.844123 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod399d2deb_21cc_4976_ae9e_170fa7d754b2.slice/crio-203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb WatchSource:0}: Error finding container 203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb: Status 404 returned error can't find the container with id 203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb Oct 11 11:15:23.845951 master-1 kubenswrapper[4771]: I1011 
11:15:23.844157 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-networker-deploy-networkers-8hnn8"] Oct 11 11:15:24.716024 master-1 kubenswrapper[4771]: I1011 11:15:24.715900 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-networker-deploy-networkers-8hnn8" event={"ID":"399d2deb-21cc-4976-ae9e-170fa7d754b2","Type":"ContainerStarted","Data":"d09b8107b6076b69407115dd79fa8a732a65779488ad44540ab90f20b2f25240"} Oct 11 11:15:24.717223 master-1 kubenswrapper[4771]: I1011 11:15:24.716029 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-networker-deploy-networkers-8hnn8" event={"ID":"399d2deb-21cc-4976-ae9e-170fa7d754b2","Type":"ContainerStarted","Data":"203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb"} Oct 11 11:15:24.746959 master-1 kubenswrapper[4771]: I1011 11:15:24.746616 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-networker-deploy-networkers-8hnn8" podStartSLOduration=2.313171919 podStartE2EDuration="2.746589092s" podCreationTimestamp="2025-10-11 11:15:22 +0000 UTC" firstStartedPulling="2025-10-11 11:15:23.847297177 +0000 UTC m=+2955.821523618" lastFinishedPulling="2025-10-11 11:15:24.28071435 +0000 UTC m=+2956.254940791" observedRunningTime="2025-10-11 11:15:24.74442157 +0000 UTC m=+2956.718648031" watchObservedRunningTime="2025-10-11 11:15:24.746589092 +0000 UTC m=+2956.720815573" Oct 11 11:15:25.323135 master-1 kubenswrapper[4771]: I1011 11:15:25.323047 4771 scope.go:117] "RemoveContainer" containerID="ea50bb78d4de53e43e9be3f2830fede428957c124838ed0305c9a99b641c0252" Oct 11 11:15:34.829097 master-1 kubenswrapper[4771]: I1011 11:15:34.829002 4771 generic.go:334] "Generic (PLEG): container finished" podID="399d2deb-21cc-4976-ae9e-170fa7d754b2" containerID="d09b8107b6076b69407115dd79fa8a732a65779488ad44540ab90f20b2f25240" exitCode=0 Oct 11 11:15:34.829097 master-1 kubenswrapper[4771]: I1011 11:15:34.829078 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-networker-deploy-networkers-8hnn8" event={"ID":"399d2deb-21cc-4976-ae9e-170fa7d754b2","Type":"ContainerDied","Data":"d09b8107b6076b69407115dd79fa8a732a65779488ad44540ab90f20b2f25240"} Oct 11 11:15:36.447501 master-1 kubenswrapper[4771]: I1011 11:15:36.447436 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:36.570344 master-1 kubenswrapper[4771]: I1011 11:15:36.570103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-255x2\" (UniqueName: \"kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2\") pod \"399d2deb-21cc-4976-ae9e-170fa7d754b2\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " Oct 11 11:15:36.570344 master-1 kubenswrapper[4771]: I1011 11:15:36.570236 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory\") pod \"399d2deb-21cc-4976-ae9e-170fa7d754b2\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " Oct 11 11:15:36.571490 master-1 kubenswrapper[4771]: I1011 11:15:36.571113 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key\") pod \"399d2deb-21cc-4976-ae9e-170fa7d754b2\" (UID: \"399d2deb-21cc-4976-ae9e-170fa7d754b2\") " Oct 11 11:15:36.573578 master-1 kubenswrapper[4771]: I1011 11:15:36.573522 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2" (OuterVolumeSpecName: "kube-api-access-255x2") pod "399d2deb-21cc-4976-ae9e-170fa7d754b2" (UID: "399d2deb-21cc-4976-ae9e-170fa7d754b2"). InnerVolumeSpecName "kube-api-access-255x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:15:36.594062 master-1 kubenswrapper[4771]: I1011 11:15:36.593760 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "399d2deb-21cc-4976-ae9e-170fa7d754b2" (UID: "399d2deb-21cc-4976-ae9e-170fa7d754b2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:36.594456 master-1 kubenswrapper[4771]: I1011 11:15:36.594383 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory" (OuterVolumeSpecName: "inventory") pod "399d2deb-21cc-4976-ae9e-170fa7d754b2" (UID: "399d2deb-21cc-4976-ae9e-170fa7d754b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:36.673409 master-1 kubenswrapper[4771]: I1011 11:15:36.673330 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-255x2\" (UniqueName: \"kubernetes.io/projected/399d2deb-21cc-4976-ae9e-170fa7d754b2-kube-api-access-255x2\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:36.673409 master-1 kubenswrapper[4771]: I1011 11:15:36.673394 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:36.673409 master-1 kubenswrapper[4771]: I1011 11:15:36.673406 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/399d2deb-21cc-4976-ae9e-170fa7d754b2-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:15:36.860332 master-1 kubenswrapper[4771]: I1011 11:15:36.860114 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-networker-deploy-networkers-8hnn8" 
event={"ID":"399d2deb-21cc-4976-ae9e-170fa7d754b2","Type":"ContainerDied","Data":"203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb"} Oct 11 11:15:36.860332 master-1 kubenswrapper[4771]: I1011 11:15:36.860173 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203f88f268696cc6105fcf08ae1d25636aca0628e0290a26f22d7381ec854ebb" Oct 11 11:15:36.860332 master-1 kubenswrapper[4771]: I1011 11:15:36.860245 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-networker-deploy-networkers-8hnn8" Oct 11 11:15:36.993427 master-1 kubenswrapper[4771]: I1011 11:15:36.992618 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-networker-deploy-networkers-fgsg5"] Oct 11 11:15:36.993427 master-1 kubenswrapper[4771]: E1011 11:15:36.993088 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399d2deb-21cc-4976-ae9e-170fa7d754b2" containerName="run-os-networker-deploy-networkers" Oct 11 11:15:36.993427 master-1 kubenswrapper[4771]: I1011 11:15:36.993111 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="399d2deb-21cc-4976-ae9e-170fa7d754b2" containerName="run-os-networker-deploy-networkers" Oct 11 11:15:36.993427 master-1 kubenswrapper[4771]: I1011 11:15:36.993395 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="399d2deb-21cc-4976-ae9e-170fa7d754b2" containerName="run-os-networker-deploy-networkers" Oct 11 11:15:36.995518 master-1 kubenswrapper[4771]: I1011 11:15:36.994633 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:36.997595 master-1 kubenswrapper[4771]: I1011 11:15:36.997558 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:15:37.040056 master-1 kubenswrapper[4771]: I1011 11:15:37.039937 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-networker-deploy-networkers-fgsg5"] Oct 11 11:15:37.083842 master-1 kubenswrapper[4771]: E1011 11:15:37.083751 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod399d2deb_21cc_4976_ae9e_170fa7d754b2.slice\": RecentStats: unable to find data in memory cache]" Oct 11 11:15:37.182079 master-1 kubenswrapper[4771]: I1011 11:15:37.181842 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.182079 master-1 kubenswrapper[4771]: I1011 11:15:37.181948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.182079 master-1 kubenswrapper[4771]: I1011 11:15:37.182036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdnn\" (UniqueName: \"kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn\") pod 
\"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.284529 master-1 kubenswrapper[4771]: I1011 11:15:37.284444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdnn\" (UniqueName: \"kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.284959 master-1 kubenswrapper[4771]: I1011 11:15:37.284740 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.284959 master-1 kubenswrapper[4771]: I1011 11:15:37.284791 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.292845 master-1 kubenswrapper[4771]: I1011 11:15:37.292804 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.293083 master-1 kubenswrapper[4771]: I1011 11:15:37.293046 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.320541 master-1 kubenswrapper[4771]: I1011 11:15:37.320469 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdnn\" (UniqueName: \"kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn\") pod \"reboot-os-networker-deploy-networkers-fgsg5\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:37.618933 master-1 kubenswrapper[4771]: I1011 11:15:37.618850 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:15:38.186240 master-1 kubenswrapper[4771]: I1011 11:15:38.186036 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-networker-deploy-networkers-fgsg5"] Oct 11 11:15:38.192285 master-1 kubenswrapper[4771]: W1011 11:15:38.192209 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa11837b_6428_471c_b8d4_ffafd00d954b.slice/crio-7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971 WatchSource:0}: Error finding container 7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971: Status 404 returned error can't find the container with id 7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971 Oct 11 11:15:38.882404 master-1 kubenswrapper[4771]: I1011 11:15:38.882192 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" event={"ID":"fa11837b-6428-471c-b8d4-ffafd00d954b","Type":"ContainerStarted","Data":"7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971"} Oct 
11 11:15:39.895220 master-1 kubenswrapper[4771]: I1011 11:15:39.895118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" event={"ID":"fa11837b-6428-471c-b8d4-ffafd00d954b","Type":"ContainerStarted","Data":"89933431cf76a9a43f09580949f86bd5d879200bed420eae4edc172dd83c04be"} Oct 11 11:15:39.927400 master-1 kubenswrapper[4771]: I1011 11:15:39.927240 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" podStartSLOduration=3.49739094 podStartE2EDuration="3.927216s" podCreationTimestamp="2025-10-11 11:15:36 +0000 UTC" firstStartedPulling="2025-10-11 11:15:38.194962368 +0000 UTC m=+2970.169188809" lastFinishedPulling="2025-10-11 11:15:38.624787428 +0000 UTC m=+2970.599013869" observedRunningTime="2025-10-11 11:15:39.926573572 +0000 UTC m=+2971.900800023" watchObservedRunningTime="2025-10-11 11:15:39.927216 +0000 UTC m=+2971.901442441" Oct 11 11:16:18.351344 master-1 kubenswrapper[4771]: I1011 11:16:18.351211 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dzcrl"] Oct 11 11:16:18.354569 master-1 kubenswrapper[4771]: I1011 11:16:18.354536 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.403272 master-1 kubenswrapper[4771]: I1011 11:16:18.403170 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzcrl"] Oct 11 11:16:18.451567 master-1 kubenswrapper[4771]: I1011 11:16:18.451474 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4smm\" (UniqueName: \"kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.451567 master-1 kubenswrapper[4771]: I1011 11:16:18.451547 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.452124 master-1 kubenswrapper[4771]: I1011 11:16:18.451952 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.556025 master-1 kubenswrapper[4771]: I1011 11:16:18.555872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4smm\" (UniqueName: \"kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.556025 
master-1 kubenswrapper[4771]: I1011 11:16:18.555936 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.556025 master-1 kubenswrapper[4771]: I1011 11:16:18.556038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.556599 master-1 kubenswrapper[4771]: I1011 11:16:18.556573 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.557135 master-1 kubenswrapper[4771]: I1011 11:16:18.557082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:18.584813 master-1 kubenswrapper[4771]: I1011 11:16:18.584709 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4smm\" (UniqueName: \"kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm\") pod \"community-operators-dzcrl\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") " pod="openshift-marketplace/community-operators-dzcrl" Oct 11 
11:16:18.724671 master-1 kubenswrapper[4771]: I1011 11:16:18.724462 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzcrl" Oct 11 11:16:19.230397 master-1 kubenswrapper[4771]: W1011 11:16:19.229812 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b4de24_eb85_4793_9435_44bc5c82aab9.slice/crio-40d10a341484c58e4a87284c38624488c83e1dec77bc2337e7b4e3223c2bf0ce WatchSource:0}: Error finding container 40d10a341484c58e4a87284c38624488c83e1dec77bc2337e7b4e3223c2bf0ce: Status 404 returned error can't find the container with id 40d10a341484c58e4a87284c38624488c83e1dec77bc2337e7b4e3223c2bf0ce Oct 11 11:16:19.233436 master-1 kubenswrapper[4771]: I1011 11:16:19.233340 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzcrl"] Oct 11 11:16:19.330146 master-1 kubenswrapper[4771]: I1011 11:16:19.330048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerStarted","Data":"40d10a341484c58e4a87284c38624488c83e1dec77bc2337e7b4e3223c2bf0ce"} Oct 11 11:16:20.342807 master-1 kubenswrapper[4771]: I1011 11:16:20.342725 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerID="4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846" exitCode=0 Oct 11 11:16:20.342807 master-1 kubenswrapper[4771]: I1011 11:16:20.342807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerDied","Data":"4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846"} Oct 11 11:16:21.354443 master-1 kubenswrapper[4771]: I1011 11:16:21.354332 4771 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerStarted","Data":"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"}
Oct 11 11:16:22.367499 master-1 kubenswrapper[4771]: I1011 11:16:22.367410 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerID="7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb" exitCode=0
Oct 11 11:16:22.368536 master-1 kubenswrapper[4771]: I1011 11:16:22.367508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerDied","Data":"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"}
Oct 11 11:16:23.379150 master-1 kubenswrapper[4771]: I1011 11:16:23.379016 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerStarted","Data":"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"}
Oct 11 11:16:23.413277 master-1 kubenswrapper[4771]: I1011 11:16:23.413181 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dzcrl" podStartSLOduration=2.969984584 podStartE2EDuration="5.413155465s" podCreationTimestamp="2025-10-11 11:16:18 +0000 UTC" firstStartedPulling="2025-10-11 11:16:20.345516488 +0000 UTC m=+3012.319742939" lastFinishedPulling="2025-10-11 11:16:22.788687349 +0000 UTC m=+3014.762913820" observedRunningTime="2025-10-11 11:16:23.406986038 +0000 UTC m=+3015.381212489" watchObservedRunningTime="2025-10-11 11:16:23.413155465 +0000 UTC m=+3015.387381926"
Oct 11 11:16:28.725795 master-1 kubenswrapper[4771]: I1011 11:16:28.725691 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:28.725795 master-1 kubenswrapper[4771]: I1011 11:16:28.725808 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:28.805118 master-1 kubenswrapper[4771]: I1011 11:16:28.804990 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:29.502764 master-1 kubenswrapper[4771]: I1011 11:16:29.502696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:29.591189 master-1 kubenswrapper[4771]: I1011 11:16:29.591111 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzcrl"]
Oct 11 11:16:31.467723 master-1 kubenswrapper[4771]: I1011 11:16:31.467628 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dzcrl" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="registry-server" containerID="cri-o://7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad" gracePeriod=2
Oct 11 11:16:32.105050 master-1 kubenswrapper[4771]: I1011 11:16:32.104981 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:32.195037 master-1 kubenswrapper[4771]: I1011 11:16:32.194956 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities\") pod \"34b4de24-eb85-4793-9435-44bc5c82aab9\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") "
Oct 11 11:16:32.195341 master-1 kubenswrapper[4771]: I1011 11:16:32.195132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content\") pod \"34b4de24-eb85-4793-9435-44bc5c82aab9\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") "
Oct 11 11:16:32.195341 master-1 kubenswrapper[4771]: I1011 11:16:32.195203 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4smm\" (UniqueName: \"kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm\") pod \"34b4de24-eb85-4793-9435-44bc5c82aab9\" (UID: \"34b4de24-eb85-4793-9435-44bc5c82aab9\") "
Oct 11 11:16:32.197374 master-1 kubenswrapper[4771]: I1011 11:16:32.197312 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities" (OuterVolumeSpecName: "utilities") pod "34b4de24-eb85-4793-9435-44bc5c82aab9" (UID: "34b4de24-eb85-4793-9435-44bc5c82aab9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:16:32.199814 master-1 kubenswrapper[4771]: I1011 11:16:32.199720 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm" (OuterVolumeSpecName: "kube-api-access-k4smm") pod "34b4de24-eb85-4793-9435-44bc5c82aab9" (UID: "34b4de24-eb85-4793-9435-44bc5c82aab9"). InnerVolumeSpecName "kube-api-access-k4smm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:16:32.300520 master-1 kubenswrapper[4771]: I1011 11:16:32.300335 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4smm\" (UniqueName: \"kubernetes.io/projected/34b4de24-eb85-4793-9435-44bc5c82aab9-kube-api-access-k4smm\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:32.300520 master-1 kubenswrapper[4771]: I1011 11:16:32.300419 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-utilities\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:32.320967 master-1 kubenswrapper[4771]: I1011 11:16:32.320885 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34b4de24-eb85-4793-9435-44bc5c82aab9" (UID: "34b4de24-eb85-4793-9435-44bc5c82aab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:16:32.402582 master-1 kubenswrapper[4771]: I1011 11:16:32.402482 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b4de24-eb85-4793-9435-44bc5c82aab9-catalog-content\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:32.482567 master-1 kubenswrapper[4771]: I1011 11:16:32.482333 4771 generic.go:334] "Generic (PLEG): container finished" podID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerID="7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad" exitCode=0
Oct 11 11:16:32.482567 master-1 kubenswrapper[4771]: I1011 11:16:32.482401 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerDied","Data":"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"}
Oct 11 11:16:32.482567 master-1 kubenswrapper[4771]: I1011 11:16:32.482455 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzcrl" event={"ID":"34b4de24-eb85-4793-9435-44bc5c82aab9","Type":"ContainerDied","Data":"40d10a341484c58e4a87284c38624488c83e1dec77bc2337e7b4e3223c2bf0ce"}
Oct 11 11:16:32.482567 master-1 kubenswrapper[4771]: I1011 11:16:32.482488 4771 scope.go:117] "RemoveContainer" containerID="7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"
Oct 11 11:16:32.483225 master-1 kubenswrapper[4771]: I1011 11:16:32.482728 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzcrl"
Oct 11 11:16:32.513397 master-1 kubenswrapper[4771]: I1011 11:16:32.513284 4771 scope.go:117] "RemoveContainer" containerID="7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"
Oct 11 11:16:32.555433 master-1 kubenswrapper[4771]: I1011 11:16:32.555301 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzcrl"]
Oct 11 11:16:32.562056 master-1 kubenswrapper[4771]: I1011 11:16:32.562000 4771 scope.go:117] "RemoveContainer" containerID="4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846"
Oct 11 11:16:32.567575 master-1 kubenswrapper[4771]: I1011 11:16:32.567521 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dzcrl"]
Oct 11 11:16:32.606961 master-1 kubenswrapper[4771]: I1011 11:16:32.606667 4771 scope.go:117] "RemoveContainer" containerID="7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"
Oct 11 11:16:32.607317 master-1 kubenswrapper[4771]: E1011 11:16:32.607274 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad\": container with ID starting with 7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad not found: ID does not exist" containerID="7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"
Oct 11 11:16:32.607460 master-1 kubenswrapper[4771]: I1011 11:16:32.607326 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad"} err="failed to get container status \"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad\": rpc error: code = NotFound desc = could not find container \"7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad\": container with ID starting with 7554b4551b7e1efbf0288693c2eee8e8636ee8f4c5e8b6b631bbdca7cc40f3ad not found: ID does not exist"
Oct 11 11:16:32.607460 master-1 kubenswrapper[4771]: I1011 11:16:32.607370 4771 scope.go:117] "RemoveContainer" containerID="7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"
Oct 11 11:16:32.608145 master-1 kubenswrapper[4771]: E1011 11:16:32.607894 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb\": container with ID starting with 7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb not found: ID does not exist" containerID="7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"
Oct 11 11:16:32.608145 master-1 kubenswrapper[4771]: I1011 11:16:32.607954 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb"} err="failed to get container status \"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb\": rpc error: code = NotFound desc = could not find container \"7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb\": container with ID starting with 7d791a7487548b9eff666906ee0b8402629e31ea95f7d2a171992e5a10d919fb not found: ID does not exist"
Oct 11 11:16:32.608145 master-1 kubenswrapper[4771]: I1011 11:16:32.607999 4771 scope.go:117] "RemoveContainer" containerID="4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846"
Oct 11 11:16:32.608636 master-1 kubenswrapper[4771]: E1011 11:16:32.608575 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846\": container with ID starting with 4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846 not found: ID does not exist" containerID="4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846"
Oct 11 11:16:32.608721 master-1 kubenswrapper[4771]: I1011 11:16:32.608641 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846"} err="failed to get container status \"4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846\": rpc error: code = NotFound desc = could not find container \"4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846\": container with ID starting with 4569c17a93983fc7c9806039d762a1c0353f579ec1e297c01f1597f9f52f1846 not found: ID does not exist"
Oct 11 11:16:34.455913 master-1 kubenswrapper[4771]: I1011 11:16:34.455795 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" path="/var/lib/kubelet/pods/34b4de24-eb85-4793-9435-44bc5c82aab9/volumes"
Oct 11 11:16:38.571196 master-1 kubenswrapper[4771]: I1011 11:16:38.571098 4771 generic.go:334] "Generic (PLEG): container finished" podID="c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" containerID="7dad6181f616c54767b9aa28947089dd8d2181c28cfd68675c9cdbe33f1d675b" exitCode=0
Oct 11 11:16:38.571196 master-1 kubenswrapper[4771]: I1011 11:16:38.571163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-dataplane-edpm-nvx25" event={"ID":"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff","Type":"ContainerDied","Data":"7dad6181f616c54767b9aa28947089dd8d2181c28cfd68675c9cdbe33f1d675b"}
Oct 11 11:16:40.300306 master-1 kubenswrapper[4771]: I1011 11:16:40.300225 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:16:40.406598 master-1 kubenswrapper[4771]: I1011 11:16:40.406342 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6qz\" (UniqueName: \"kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz\") pod \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") "
Oct 11 11:16:40.407017 master-1 kubenswrapper[4771]: I1011 11:16:40.406761 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory\") pod \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") "
Oct 11 11:16:40.407017 master-1 kubenswrapper[4771]: I1011 11:16:40.406824 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key\") pod \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\" (UID: \"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff\") "
Oct 11 11:16:40.412726 master-1 kubenswrapper[4771]: I1011 11:16:40.412625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz" (OuterVolumeSpecName: "kube-api-access-tg6qz") pod "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" (UID: "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff"). InnerVolumeSpecName "kube-api-access-tg6qz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:16:40.430670 master-1 kubenswrapper[4771]: I1011 11:16:40.430604 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" (UID: "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:16:40.455698 master-1 kubenswrapper[4771]: I1011 11:16:40.455483 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory" (OuterVolumeSpecName: "inventory") pod "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" (UID: "c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:16:40.511148 master-1 kubenswrapper[4771]: I1011 11:16:40.511047 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6qz\" (UniqueName: \"kubernetes.io/projected/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-kube-api-access-tg6qz\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:40.511148 master-1 kubenswrapper[4771]: I1011 11:16:40.511119 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-inventory\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:40.511148 master-1 kubenswrapper[4771]: I1011 11:16:40.511134 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff-ssh-key\") on node \"master-1\" DevicePath \"\""
Oct 11 11:16:40.603399 master-1 kubenswrapper[4771]: I1011 11:16:40.603284 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-dataplane-edpm-nvx25" event={"ID":"c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff","Type":"ContainerDied","Data":"38d65c631695208ab4af38ed5f867d266e1a6cf55f11c5f792930a86a2ba576d"}
Oct 11 11:16:40.603399 master-1 kubenswrapper[4771]: I1011 11:16:40.603382 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-dataplane-edpm-nvx25"
Oct 11 11:16:40.603399 master-1 kubenswrapper[4771]: I1011 11:16:40.603415 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d65c631695208ab4af38ed5f867d266e1a6cf55f11c5f792930a86a2ba576d"
Oct 11 11:16:40.740561 master-2 kubenswrapper[4776]: I1011 11:16:40.740436 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"]
Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: E1011 11:16:40.740908 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy"
Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: I1011 11:16:40.740932 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy"
Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: I1011 11:16:40.741168 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy"
Oct 11 11:16:40.741997 master-2 kubenswrapper[4776]: I1011 11:16:40.741965 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.745730 master-2 kubenswrapper[4776]: I1011 11:16:40.745661 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-neutron-metadata-default-certs-0"
Oct 11 11:16:40.746044 master-2 kubenswrapper[4776]: I1011 11:16:40.746013 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 11:16:40.746377 master-2 kubenswrapper[4776]: I1011 11:16:40.746326 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-ovn-default-certs-0"
Oct 11 11:16:40.746437 master-2 kubenswrapper[4776]: I1011 11:16:40.746402 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-telemetry-default-certs-0"
Oct 11 11:16:40.746497 master-2 kubenswrapper[4776]: I1011 11:16:40.746347 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-libvirt-default-certs-0"
Oct 11 11:16:40.746611 master-2 kubenswrapper[4776]: I1011 11:16:40.746585 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm"
Oct 11 11:16:40.746685 master-2 kubenswrapper[4776]: I1011 11:16:40.746658 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 11:16:40.781318 master-2 kubenswrapper[4776]: I1011 11:16:40.761748 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"]
Oct 11 11:16:40.825261 master-2 kubenswrapper[4776]: I1011 11:16:40.825204 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825304 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825411 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825435 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825464 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825701 master-2 kubenswrapper[4776]: I1011 11:16:40.825595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825748 master-2 kubenswrapper[4776]: I1011 11:16:40.825723 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825782 master-2 kubenswrapper[4776]: I1011 11:16:40.825764 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825818 master-2 kubenswrapper[4776]: I1011 11:16:40.825801 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825933 master-2 kubenswrapper[4776]: I1011 11:16:40.825898 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.825982 master-2 kubenswrapper[4776]: I1011 11:16:40.825954 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928140 master-2 kubenswrapper[4776]: I1011 11:16:40.928071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928184 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928219 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928293 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928364 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928409 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928462 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928550 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928585 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928905 master-2 kubenswrapper[4776]: I1011 11:16:40.928728 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.928905 master-2 kubenswrapper[4776]: I1011 11:16:40.928770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.931694 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.931955 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.932874 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.934019 master-2 kubenswrapper[4776]: I1011 11:16:40.933667 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.934721 master-2 kubenswrapper[4776]: I1011 11:16:40.934610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.934917 master-2 kubenswrapper[4776]: I1011 11:16:40.934771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.936284 master-2 kubenswrapper[4776]: I1011 11:16:40.936252 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.936541 master-2 kubenswrapper[4776]: I1011 11:16:40.936511 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.936592 master-2 kubenswrapper[4776]: I1011 11:16:40.936563 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.937151 master-2 kubenswrapper[4776]: I1011 11:16:40.937109 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.937421 master-2 kubenswrapper[4776]: I1011 11:16:40.937388 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.938969 master-2 kubenswrapper[4776]: I1011 11:16:40.938931 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:40.949937 master-2 kubenswrapper[4776]: I1011 11:16:40.949900 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:41.101787 master-2 kubenswrapper[4776]: I1011 11:16:41.101582 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x"
Oct 11 11:16:41.660745 master-2 kubenswrapper[4776]: I1011 11:16:41.660631 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"]
Oct 11 11:16:41.663983 master-2 kubenswrapper[4776]: W1011 11:16:41.663882 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80228f17_5924_456b_8353_45c055831ed5.slice/crio-47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe WatchSource:0}: Error finding container 47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe: Status 404 returned error can't find the container with id 47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe
Oct 11 11:16:41.665952 master-2 kubenswrapper[4776]: I1011 11:16:41.665915 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 11:16:42.495581 master-2 kubenswrapper[4776]: I1011 11:16:42.495487 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerStarted","Data":"ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995"}
Oct 11 11:16:42.495581 master-2 kubenswrapper[4776]: I1011 11:16:42.495565 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerStarted","Data":"47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe"}
Oct 11 11:16:42.528901 master-2 kubenswrapper[4776]: I1011 11:16:42.527873 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-dataplane-edpm-wqd9x" podStartSLOduration=2.064192651 podStartE2EDuration="2.527851358s" podCreationTimestamp="2025-10-11 11:16:40 +0000 UTC" firstStartedPulling="2025-10-11 11:16:41.665815083 +0000 UTC m=+3036.450241792" lastFinishedPulling="2025-10-11 11:16:42.12947379 +0000 UTC m=+3036.913900499" observedRunningTime="2025-10-11 11:16:42.524346893 +0000 UTC m=+3037.308773602" watchObservedRunningTime="2025-10-11 11:16:42.527851358 +0000 UTC m=+3037.312278077"
Oct 11 11:16:55.767324 master-1 kubenswrapper[4771]: I1011 11:16:55.767102 4771 generic.go:334] "Generic (PLEG): container finished" podID="fa11837b-6428-471c-b8d4-ffafd00d954b" containerID="89933431cf76a9a43f09580949f86bd5d879200bed420eae4edc172dd83c04be" exitCode=0
Oct 11 11:16:55.767324 master-1 kubenswrapper[4771]: I1011 11:16:55.767212 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" event={"ID":"fa11837b-6428-471c-b8d4-ffafd00d954b","Type":"ContainerDied","Data":"89933431cf76a9a43f09580949f86bd5d879200bed420eae4edc172dd83c04be"}
Oct 11 11:16:57.465770 master-1 kubenswrapper[4771]: I1011 11:16:57.465701 4771 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:16:57.523046 master-1 kubenswrapper[4771]: I1011 11:16:57.522949 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdnn\" (UniqueName: \"kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn\") pod \"fa11837b-6428-471c-b8d4-ffafd00d954b\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " Oct 11 11:16:57.523522 master-1 kubenswrapper[4771]: I1011 11:16:57.523493 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key\") pod \"fa11837b-6428-471c-b8d4-ffafd00d954b\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " Oct 11 11:16:57.523775 master-1 kubenswrapper[4771]: I1011 11:16:57.523749 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory\") pod \"fa11837b-6428-471c-b8d4-ffafd00d954b\" (UID: \"fa11837b-6428-471c-b8d4-ffafd00d954b\") " Oct 11 11:16:57.527868 master-1 kubenswrapper[4771]: I1011 11:16:57.527799 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn" (OuterVolumeSpecName: "kube-api-access-5vdnn") pod "fa11837b-6428-471c-b8d4-ffafd00d954b" (UID: "fa11837b-6428-471c-b8d4-ffafd00d954b"). InnerVolumeSpecName "kube-api-access-5vdnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:16:57.550948 master-1 kubenswrapper[4771]: I1011 11:16:57.550874 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory" (OuterVolumeSpecName: "inventory") pod "fa11837b-6428-471c-b8d4-ffafd00d954b" (UID: "fa11837b-6428-471c-b8d4-ffafd00d954b"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:16:57.551298 master-1 kubenswrapper[4771]: I1011 11:16:57.551227 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa11837b-6428-471c-b8d4-ffafd00d954b" (UID: "fa11837b-6428-471c-b8d4-ffafd00d954b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:16:57.626230 master-1 kubenswrapper[4771]: I1011 11:16:57.626154 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:16:57.626230 master-1 kubenswrapper[4771]: I1011 11:16:57.626202 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vdnn\" (UniqueName: \"kubernetes.io/projected/fa11837b-6428-471c-b8d4-ffafd00d954b-kube-api-access-5vdnn\") on node \"master-1\" DevicePath \"\"" Oct 11 11:16:57.626230 master-1 kubenswrapper[4771]: I1011 11:16:57.626213 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa11837b-6428-471c-b8d4-ffafd00d954b-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:16:57.790256 master-1 kubenswrapper[4771]: I1011 11:16:57.790178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" event={"ID":"fa11837b-6428-471c-b8d4-ffafd00d954b","Type":"ContainerDied","Data":"7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971"} Oct 11 11:16:57.790256 master-1 kubenswrapper[4771]: I1011 11:16:57.790232 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7710262e3855b3c019a49b8d470f18483fe3fa4ea05080e4da182e7e68559971" Oct 11 11:16:57.790719 master-1 kubenswrapper[4771]: I1011 11:16:57.790277 
4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-networker-deploy-networkers-fgsg5" Oct 11 11:16:57.940635 master-1 kubenswrapper[4771]: I1011 11:16:57.940462 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-networker-deploy-networkers-nv56q"] Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: E1011 11:16:57.940919 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="extract-content" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: I1011 11:16:57.940948 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="extract-content" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: E1011 11:16:57.940972 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="registry-server" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: I1011 11:16:57.940984 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="registry-server" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: E1011 11:16:57.941004 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="extract-utilities" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: I1011 11:16:57.941016 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="extract-utilities" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: E1011 11:16:57.941037 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" containerName="reboot-os-dataplane-edpm" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: I1011 11:16:57.941046 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" 
containerName="reboot-os-dataplane-edpm" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: E1011 11:16:57.941071 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa11837b-6428-471c-b8d4-ffafd00d954b" containerName="reboot-os-networker-deploy-networkers" Oct 11 11:16:57.941146 master-1 kubenswrapper[4771]: I1011 11:16:57.941081 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa11837b-6428-471c-b8d4-ffafd00d954b" containerName="reboot-os-networker-deploy-networkers" Oct 11 11:16:57.941494 master-1 kubenswrapper[4771]: I1011 11:16:57.941319 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4de24-eb85-4793-9435-44bc5c82aab9" containerName="registry-server" Oct 11 11:16:57.941494 master-1 kubenswrapper[4771]: I1011 11:16:57.941350 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa11837b-6428-471c-b8d4-ffafd00d954b" containerName="reboot-os-networker-deploy-networkers" Oct 11 11:16:57.941494 master-1 kubenswrapper[4771]: I1011 11:16:57.941400 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e880ca-1acb-4f3a-8d7c-98fa3c84e2ff" containerName="reboot-os-dataplane-edpm" Oct 11 11:16:57.942994 master-1 kubenswrapper[4771]: I1011 11:16:57.942950 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:57.948688 master-1 kubenswrapper[4771]: I1011 11:16:57.948600 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"networkers-neutron-metadata-default-certs-0" Oct 11 11:16:57.949448 master-1 kubenswrapper[4771]: I1011 11:16:57.949408 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"networkers-ovn-default-certs-0" Oct 11 11:16:57.949546 master-1 kubenswrapper[4771]: I1011 11:16:57.949491 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:16:57.949589 master-1 kubenswrapper[4771]: I1011 11:16:57.949535 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:16:57.949691 master-1 kubenswrapper[4771]: I1011 11:16:57.949653 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:16:57.968940 master-1 kubenswrapper[4771]: I1011 11:16:57.968882 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-networker-deploy-networkers-nv56q"] Oct 11 11:16:58.034812 master-1 kubenswrapper[4771]: I1011 11:16:58.034723 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.035132 master-1 kubenswrapper[4771]: I1011 11:16:58.035042 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.035275 master-1 kubenswrapper[4771]: I1011 11:16:58.035236 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zgn\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.035319 master-1 kubenswrapper[4771]: I1011 11:16:58.035297 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.035577 master-1 kubenswrapper[4771]: I1011 11:16:58.035543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.035722 master-1 kubenswrapper[4771]: I1011 11:16:58.035690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.036215 master-1 kubenswrapper[4771]: I1011 11:16:58.036185 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.036279 master-1 kubenswrapper[4771]: I1011 11:16:58.036234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.138552 master-1 kubenswrapper[4771]: I1011 11:16:58.138446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.138818 master-1 kubenswrapper[4771]: I1011 11:16:58.138762 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0\") pod 
\"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.138926 master-1 kubenswrapper[4771]: I1011 11:16:58.138861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9zgn\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.138985 master-1 kubenswrapper[4771]: I1011 11:16:58.138951 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.139809 master-1 kubenswrapper[4771]: I1011 11:16:58.139740 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.139990 master-1 kubenswrapper[4771]: I1011 11:16:58.139944 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " 
pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.140157 master-1 kubenswrapper[4771]: I1011 11:16:58.140114 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.140250 master-1 kubenswrapper[4771]: I1011 11:16:58.140216 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.145267 master-1 kubenswrapper[4771]: I1011 11:16:58.145206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.145536 master-1 kubenswrapper[4771]: I1011 11:16:58.145474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.146205 master-1 kubenswrapper[4771]: I1011 11:16:58.146145 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.146278 master-1 kubenswrapper[4771]: I1011 11:16:58.146219 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.147274 master-1 kubenswrapper[4771]: I1011 11:16:58.147212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.149126 master-1 kubenswrapper[4771]: I1011 11:16:58.149053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.149720 master-1 kubenswrapper[4771]: I1011 11:16:58.149676 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key\") pod 
\"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.165150 master-1 kubenswrapper[4771]: I1011 11:16:58.165068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9zgn\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn\") pod \"install-certs-networker-deploy-networkers-nv56q\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.277586 master-1 kubenswrapper[4771]: I1011 11:16:58.277343 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:16:58.870056 master-1 kubenswrapper[4771]: I1011 11:16:58.869955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-networker-deploy-networkers-nv56q"] Oct 11 11:16:58.877213 master-1 kubenswrapper[4771]: W1011 11:16:58.876890 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e295eb4_9a6f_493c_a3a9_85508f5d6f3c.slice/crio-2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52 WatchSource:0}: Error finding container 2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52: Status 404 returned error can't find the container with id 2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52 Oct 11 11:16:59.814225 master-1 kubenswrapper[4771]: I1011 11:16:59.814159 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-networker-deploy-networkers-nv56q" event={"ID":"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c","Type":"ContainerStarted","Data":"bb403e0a7b754caf72ba310593e321557413f8e6fc2973bf2fa8b211d6a1c105"} Oct 11 11:16:59.814480 master-1 kubenswrapper[4771]: I1011 11:16:59.814240 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-networker-deploy-networkers-nv56q" event={"ID":"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c","Type":"ContainerStarted","Data":"2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52"} Oct 11 11:16:59.850030 master-1 kubenswrapper[4771]: I1011 11:16:59.849930 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-networker-deploy-networkers-nv56q" podStartSLOduration=2.424244444 podStartE2EDuration="2.849908014s" podCreationTimestamp="2025-10-11 11:16:57 +0000 UTC" firstStartedPulling="2025-10-11 11:16:58.880835918 +0000 UTC m=+3050.855062379" lastFinishedPulling="2025-10-11 11:16:59.306499468 +0000 UTC m=+3051.280725949" observedRunningTime="2025-10-11 11:16:59.844646073 +0000 UTC m=+3051.818872524" watchObservedRunningTime="2025-10-11 11:16:59.849908014 +0000 UTC m=+3051.824134465" Oct 11 11:17:10.267273 master-1 kubenswrapper[4771]: I1011 11:17:10.267176 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:10.269495 master-1 kubenswrapper[4771]: I1011 11:17:10.269440 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.288436 master-1 kubenswrapper[4771]: I1011 11:17:10.287534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9sj\" (UniqueName: \"kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.288436 master-1 kubenswrapper[4771]: I1011 11:17:10.287771 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.288436 master-1 kubenswrapper[4771]: I1011 11:17:10.287856 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.306592 master-1 kubenswrapper[4771]: I1011 11:17:10.305137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:10.391118 master-1 kubenswrapper[4771]: I1011 11:17:10.391043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.391411 master-1 kubenswrapper[4771]: I1011 11:17:10.391168 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.391411 master-1 kubenswrapper[4771]: I1011 11:17:10.391384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9sj\" (UniqueName: \"kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.392213 master-1 kubenswrapper[4771]: I1011 11:17:10.392137 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.392269 master-1 kubenswrapper[4771]: I1011 11:17:10.392225 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.414565 master-1 kubenswrapper[4771]: I1011 11:17:10.413912 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9sj\" (UniqueName: \"kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj\") pod \"redhat-operators-kdf87\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:10.603345 master-1 kubenswrapper[4771]: I1011 11:17:10.603271 
4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:11.102936 master-1 kubenswrapper[4771]: W1011 11:17:11.102863 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d441233_bd29_43df_9709_b6619d4ee7cb.slice/crio-0709253ecf6b2429f21c40dc085b220b079f71dc73b4ce3af67c0718902f13d2 WatchSource:0}: Error finding container 0709253ecf6b2429f21c40dc085b220b079f71dc73b4ce3af67c0718902f13d2: Status 404 returned error can't find the container with id 0709253ecf6b2429f21c40dc085b220b079f71dc73b4ce3af67c0718902f13d2 Oct 11 11:17:11.104948 master-1 kubenswrapper[4771]: I1011 11:17:11.104904 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:11.936382 master-1 kubenswrapper[4771]: I1011 11:17:11.934482 4771 generic.go:334] "Generic (PLEG): container finished" podID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerID="3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d" exitCode=0 Oct 11 11:17:11.936382 master-1 kubenswrapper[4771]: I1011 11:17:11.934573 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerDied","Data":"3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d"} Oct 11 11:17:11.936382 master-1 kubenswrapper[4771]: I1011 11:17:11.934665 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerStarted","Data":"0709253ecf6b2429f21c40dc085b220b079f71dc73b4ce3af67c0718902f13d2"} Oct 11 11:17:12.944159 master-1 kubenswrapper[4771]: I1011 11:17:12.944004 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" 
event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerStarted","Data":"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252"} Oct 11 11:17:13.955005 master-1 kubenswrapper[4771]: I1011 11:17:13.954924 4771 generic.go:334] "Generic (PLEG): container finished" podID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerID="4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252" exitCode=0 Oct 11 11:17:13.955005 master-1 kubenswrapper[4771]: I1011 11:17:13.954989 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerDied","Data":"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252"} Oct 11 11:17:14.969191 master-1 kubenswrapper[4771]: I1011 11:17:14.969064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerStarted","Data":"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86"} Oct 11 11:17:14.999940 master-1 kubenswrapper[4771]: I1011 11:17:14.999844 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kdf87" podStartSLOduration=2.544242682 podStartE2EDuration="4.999825715s" podCreationTimestamp="2025-10-11 11:17:10 +0000 UTC" firstStartedPulling="2025-10-11 11:17:11.936966523 +0000 UTC m=+3063.911192984" lastFinishedPulling="2025-10-11 11:17:14.392549536 +0000 UTC m=+3066.366776017" observedRunningTime="2025-10-11 11:17:14.994659706 +0000 UTC m=+3066.968886157" watchObservedRunningTime="2025-10-11 11:17:14.999825715 +0000 UTC m=+3066.974052156" Oct 11 11:17:15.762015 master-2 kubenswrapper[4776]: I1011 11:17:15.761952 4776 generic.go:334] "Generic (PLEG): container finished" podID="80228f17-5924-456b-8353-45c055831ed5" containerID="ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995" exitCode=0 Oct 
11 11:17:15.762015 master-2 kubenswrapper[4776]: I1011 11:17:15.762014 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerDied","Data":"ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995"} Oct 11 11:17:17.255691 master-2 kubenswrapper[4776]: I1011 11:17:17.255639 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429346 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429440 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429506 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429552 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429655 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429730 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.429828 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.429968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: 
\"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430040 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430080 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430117 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430166 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.433493 master-2 kubenswrapper[4776]: I1011 11:17:17.433292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0" (OuterVolumeSpecName: "edpm-telemetry-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). 
InnerVolumeSpecName "edpm-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.433594 master-2 kubenswrapper[4776]: I1011 11:17:17.433513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0" (OuterVolumeSpecName: "edpm-ovn-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.433648 master-2 kubenswrapper[4776]: I1011 11:17:17.433608 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "edpm-neutron-metadata-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.434228 master-2 kubenswrapper[4776]: I1011 11:17:17.434172 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.434501 master-2 kubenswrapper[4776]: I1011 11:17:17.434377 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435028 master-2 kubenswrapper[4776]: I1011 11:17:17.434980 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435178 master-2 kubenswrapper[4776]: I1011 11:17:17.435125 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435600 master-2 kubenswrapper[4776]: I1011 11:17:17.435486 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.436118 master-2 kubenswrapper[4776]: I1011 11:17:17.436071 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.437236 master-2 kubenswrapper[4776]: I1011 11:17:17.437204 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0" (OuterVolumeSpecName: "edpm-libvirt-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.439922 master-2 kubenswrapper[4776]: I1011 11:17:17.439873 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g" (OuterVolumeSpecName: "kube-api-access-vv76g") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "kube-api-access-vv76g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.457398 master-2 kubenswrapper[4776]: I1011 11:17:17.457315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory" (OuterVolumeSpecName: "inventory") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.470391 master-2 kubenswrapper[4776]: I1011 11:17:17.470325 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532124 4776 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532162 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532174 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532184 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532194 4776 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532203 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 
kubenswrapper[4776]: I1011 11:17:17.532213 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532222 4776 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532230 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532238 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532246 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532254 4776 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532263 4776 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") on node 
\"master-2\" DevicePath \"\"" Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.778989 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerDied","Data":"47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe"} Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.779058 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe" Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.779068 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:17:17.930652 master-2 kubenswrapper[4776]: I1011 11:17:17.930573 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:17.935053 master-2 kubenswrapper[4776]: E1011 11:17:17.935009 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935053 master-2 kubenswrapper[4776]: I1011 11:17:17.935046 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935301 master-2 kubenswrapper[4776]: I1011 11:17:17.935237 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935924 master-2 kubenswrapper[4776]: I1011 11:17:17.935901 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941436 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941485 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941444 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 11 11:17:17.945092 master-2 kubenswrapper[4776]: I1011 11:17:17.943923 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:17:17.957699 master-2 kubenswrapper[4776]: I1011 11:17:17.956795 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:18.039856 master-2 kubenswrapper[4776]: I1011 11:17:18.039705 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.039856 master-2 kubenswrapper[4776]: I1011 11:17:18.039774 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040443 master-2 kubenswrapper[4776]: I1011 11:17:18.040391 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040520 master-2 kubenswrapper[4776]: I1011 11:17:18.040493 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040560 master-2 kubenswrapper[4776]: I1011 11:17:18.040528 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142206 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142265 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142298 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142607 master-2 kubenswrapper[4776]: I1011 11:17:18.142321 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142607 master-2 kubenswrapper[4776]: I1011 11:17:18.142340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.143988 master-2 kubenswrapper[4776]: I1011 11:17:18.143946 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.145684 master-2 kubenswrapper[4776]: I1011 11:17:18.145621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.146514 master-2 kubenswrapper[4776]: I1011 11:17:18.146478 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.146657 master-2 kubenswrapper[4776]: I1011 11:17:18.146593 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.171856 master-2 kubenswrapper[4776]: I1011 11:17:18.171806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.303319 master-2 kubenswrapper[4776]: I1011 11:17:18.303197 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.899579 master-2 kubenswrapper[4776]: I1011 11:17:18.899511 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:18.903005 master-2 kubenswrapper[4776]: W1011 11:17:18.902960 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae3c981_e8c9_488a_94fb_91368f17324a.slice/crio-21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db WatchSource:0}: Error finding container 21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db: Status 404 returned error can't find the container with id 21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db Oct 11 11:17:19.799112 master-2 kubenswrapper[4776]: I1011 11:17:19.799028 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerStarted","Data":"5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f"} Oct 11 11:17:19.799112 master-2 kubenswrapper[4776]: I1011 11:17:19.799088 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerStarted","Data":"21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db"} Oct 11 11:17:19.834483 master-2 kubenswrapper[4776]: I1011 11:17:19.834332 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-dataplane-edpm-rgmsv" podStartSLOduration=2.37965694 podStartE2EDuration="2.834300455s" podCreationTimestamp="2025-10-11 11:17:17 +0000 UTC" firstStartedPulling="2025-10-11 11:17:18.905786368 +0000 UTC m=+3073.690213077" lastFinishedPulling="2025-10-11 11:17:19.360429883 +0000 UTC m=+3074.144856592" observedRunningTime="2025-10-11 11:17:19.820539694 +0000 UTC m=+3074.604966403" 
watchObservedRunningTime="2025-10-11 11:17:19.834300455 +0000 UTC m=+3074.618727204" Oct 11 11:17:20.604458 master-1 kubenswrapper[4771]: I1011 11:17:20.604317 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:20.605659 master-1 kubenswrapper[4771]: I1011 11:17:20.604479 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:20.683849 master-1 kubenswrapper[4771]: I1011 11:17:20.683791 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:21.129915 master-1 kubenswrapper[4771]: I1011 11:17:21.129839 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:21.221509 master-1 kubenswrapper[4771]: I1011 11:17:21.221397 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:23.079729 master-1 kubenswrapper[4771]: I1011 11:17:23.079599 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kdf87" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="registry-server" containerID="cri-o://b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86" gracePeriod=2 Oct 11 11:17:23.629428 master-1 kubenswrapper[4771]: I1011 11:17:23.628782 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:23.725037 master-1 kubenswrapper[4771]: I1011 11:17:23.724943 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities\") pod \"7d441233-bd29-43df-9709-b6619d4ee7cb\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " Oct 11 11:17:23.725391 master-1 kubenswrapper[4771]: I1011 11:17:23.725070 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content\") pod \"7d441233-bd29-43df-9709-b6619d4ee7cb\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " Oct 11 11:17:23.725484 master-1 kubenswrapper[4771]: I1011 11:17:23.725434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg9sj\" (UniqueName: \"kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj\") pod \"7d441233-bd29-43df-9709-b6619d4ee7cb\" (UID: \"7d441233-bd29-43df-9709-b6619d4ee7cb\") " Oct 11 11:17:23.726078 master-1 kubenswrapper[4771]: I1011 11:17:23.725997 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities" (OuterVolumeSpecName: "utilities") pod "7d441233-bd29-43df-9709-b6619d4ee7cb" (UID: "7d441233-bd29-43df-9709-b6619d4ee7cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:17:23.726175 master-1 kubenswrapper[4771]: I1011 11:17:23.726161 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:23.730886 master-1 kubenswrapper[4771]: I1011 11:17:23.730824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj" (OuterVolumeSpecName: "kube-api-access-qg9sj") pod "7d441233-bd29-43df-9709-b6619d4ee7cb" (UID: "7d441233-bd29-43df-9709-b6619d4ee7cb"). InnerVolumeSpecName "kube-api-access-qg9sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:23.829524 master-1 kubenswrapper[4771]: I1011 11:17:23.829453 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg9sj\" (UniqueName: \"kubernetes.io/projected/7d441233-bd29-43df-9709-b6619d4ee7cb-kube-api-access-qg9sj\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:23.852083 master-1 kubenswrapper[4771]: I1011 11:17:23.851986 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d441233-bd29-43df-9709-b6619d4ee7cb" (UID: "7d441233-bd29-43df-9709-b6619d4ee7cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:17:23.933535 master-1 kubenswrapper[4771]: I1011 11:17:23.933306 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d441233-bd29-43df-9709-b6619d4ee7cb-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:24.097119 master-1 kubenswrapper[4771]: I1011 11:17:24.096988 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-networker-deploy-networkers-nv56q" event={"ID":"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c","Type":"ContainerDied","Data":"bb403e0a7b754caf72ba310593e321557413f8e6fc2973bf2fa8b211d6a1c105"} Oct 11 11:17:24.098128 master-1 kubenswrapper[4771]: I1011 11:17:24.097000 4771 generic.go:334] "Generic (PLEG): container finished" podID="5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" containerID="bb403e0a7b754caf72ba310593e321557413f8e6fc2973bf2fa8b211d6a1c105" exitCode=0 Oct 11 11:17:24.101413 master-1 kubenswrapper[4771]: I1011 11:17:24.101296 4771 generic.go:334] "Generic (PLEG): container finished" podID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerID="b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86" exitCode=0 Oct 11 11:17:24.101615 master-1 kubenswrapper[4771]: I1011 11:17:24.101406 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerDied","Data":"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86"} Oct 11 11:17:24.101615 master-1 kubenswrapper[4771]: I1011 11:17:24.101481 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdf87" Oct 11 11:17:24.101615 master-1 kubenswrapper[4771]: I1011 11:17:24.101519 4771 scope.go:117] "RemoveContainer" containerID="b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86" Oct 11 11:17:24.101940 master-1 kubenswrapper[4771]: I1011 11:17:24.101494 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdf87" event={"ID":"7d441233-bd29-43df-9709-b6619d4ee7cb","Type":"ContainerDied","Data":"0709253ecf6b2429f21c40dc085b220b079f71dc73b4ce3af67c0718902f13d2"} Oct 11 11:17:24.137659 master-1 kubenswrapper[4771]: I1011 11:17:24.137584 4771 scope.go:117] "RemoveContainer" containerID="4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252" Oct 11 11:17:24.186436 master-1 kubenswrapper[4771]: I1011 11:17:24.186180 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:24.190665 master-1 kubenswrapper[4771]: I1011 11:17:24.190584 4771 scope.go:117] "RemoveContainer" containerID="3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d" Oct 11 11:17:24.196020 master-1 kubenswrapper[4771]: I1011 11:17:24.195958 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kdf87"] Oct 11 11:17:24.230344 master-1 kubenswrapper[4771]: I1011 11:17:24.230268 4771 scope.go:117] "RemoveContainer" containerID="b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86" Oct 11 11:17:24.231253 master-1 kubenswrapper[4771]: E1011 11:17:24.231161 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86\": container with ID starting with b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86 not found: ID does not exist" 
containerID="b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86" Oct 11 11:17:24.231527 master-1 kubenswrapper[4771]: I1011 11:17:24.231252 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86"} err="failed to get container status \"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86\": rpc error: code = NotFound desc = could not find container \"b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86\": container with ID starting with b4cac195b13f3409e289432f51c828555dddc2f13e084d003d4348b6ab307f86 not found: ID does not exist" Oct 11 11:17:24.231527 master-1 kubenswrapper[4771]: I1011 11:17:24.231304 4771 scope.go:117] "RemoveContainer" containerID="4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252" Oct 11 11:17:24.232193 master-1 kubenswrapper[4771]: E1011 11:17:24.232125 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252\": container with ID starting with 4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252 not found: ID does not exist" containerID="4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252" Oct 11 11:17:24.232311 master-1 kubenswrapper[4771]: I1011 11:17:24.232203 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252"} err="failed to get container status \"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252\": rpc error: code = NotFound desc = could not find container \"4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252\": container with ID starting with 4c90b12391971b398abffe3daf33d2e40d9a40f331ec28b65522dbd2c0e41252 not found: ID does not exist" Oct 11 11:17:24.232311 master-1 
kubenswrapper[4771]: I1011 11:17:24.232252 4771 scope.go:117] "RemoveContainer" containerID="3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d" Oct 11 11:17:24.232861 master-1 kubenswrapper[4771]: E1011 11:17:24.232808 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d\": container with ID starting with 3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d not found: ID does not exist" containerID="3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d" Oct 11 11:17:24.232971 master-1 kubenswrapper[4771]: I1011 11:17:24.232862 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d"} err="failed to get container status \"3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d\": rpc error: code = NotFound desc = could not find container \"3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d\": container with ID starting with 3137974f7b7302e607a1ee856871307db4a8f7114a1270eb04dee32b9ddba29d not found: ID does not exist" Oct 11 11:17:24.448467 master-1 kubenswrapper[4771]: I1011 11:17:24.448275 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" path="/var/lib/kubelet/pods/7d441233-bd29-43df-9709-b6619d4ee7cb/volumes" Oct 11 11:17:25.721866 master-1 kubenswrapper[4771]: I1011 11:17:25.721779 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:17:25.882196 master-1 kubenswrapper[4771]: I1011 11:17:25.882066 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.882617 master-1 kubenswrapper[4771]: I1011 11:17:25.882223 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9zgn\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.882617 master-1 kubenswrapper[4771]: I1011 11:17:25.882541 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.882905 master-1 kubenswrapper[4771]: I1011 11:17:25.882848 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.883016 master-1 kubenswrapper[4771]: I1011 11:17:25.882971 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: 
\"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.883263 master-1 kubenswrapper[4771]: I1011 11:17:25.883205 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.883463 master-1 kubenswrapper[4771]: I1011 11:17:25.883393 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.883790 master-1 kubenswrapper[4771]: I1011 11:17:25.883463 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle\") pod \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\" (UID: \"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c\") " Oct 11 11:17:25.888059 master-1 kubenswrapper[4771]: I1011 11:17:25.887966 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:25.888208 master-1 kubenswrapper[4771]: I1011 11:17:25.887990 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:25.889041 master-1 kubenswrapper[4771]: I1011 11:17:25.888971 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0" (OuterVolumeSpecName: "networkers-ovn-default-certs-0") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "networkers-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:25.890547 master-1 kubenswrapper[4771]: I1011 11:17:25.890484 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:25.890682 master-1 kubenswrapper[4771]: I1011 11:17:25.890615 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn" (OuterVolumeSpecName: "kube-api-access-d9zgn") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "kube-api-access-d9zgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:25.890997 master-1 kubenswrapper[4771]: I1011 11:17:25.890921 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "networkers-neutron-metadata-default-certs-0") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "networkers-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:25.911231 master-1 kubenswrapper[4771]: I1011 11:17:25.911164 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory" (OuterVolumeSpecName: "inventory") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:25.925058 master-1 kubenswrapper[4771]: I1011 11:17:25.924949 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" (UID: "5e295eb4-9a6f-493c-a3a9-85508f5d6f3c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:25.989201 master-1 kubenswrapper[4771]: I1011 11:17:25.989080 4771 reconciler_common.go:293] "Volume detached for volume \"networkers-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-ovn-default-certs-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989201 master-1 kubenswrapper[4771]: I1011 11:17:25.989191 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989214 4771 reconciler_common.go:293] "Volume detached for volume \"networkers-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-networkers-neutron-metadata-default-certs-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989237 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989255 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-bootstrap-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989272 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-ovn-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989291 4771 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d9zgn\" (UniqueName: \"kubernetes.io/projected/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-kube-api-access-d9zgn\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:25.989543 master-1 kubenswrapper[4771]: I1011 11:17:25.989310 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e295eb4-9a6f-493c-a3a9-85508f5d6f3c-neutron-metadata-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:17:26.135866 master-1 kubenswrapper[4771]: I1011 11:17:26.135610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-networker-deploy-networkers-nv56q" event={"ID":"5e295eb4-9a6f-493c-a3a9-85508f5d6f3c","Type":"ContainerDied","Data":"2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52"} Oct 11 11:17:26.135866 master-1 kubenswrapper[4771]: I1011 11:17:26.135689 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-networker-deploy-networkers-nv56q" Oct 11 11:17:26.135866 master-1 kubenswrapper[4771]: I1011 11:17:26.135693 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2037c8e340f6cf5a5c0f8d8031e67b6348b08e864cc1806517bcb81191359f52" Oct 11 11:17:26.278596 master-1 kubenswrapper[4771]: I1011 11:17:26.278482 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-networker-deploy-networkers-t2hq7"] Oct 11 11:17:26.279134 master-1 kubenswrapper[4771]: E1011 11:17:26.279088 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="extract-utilities" Oct 11 11:17:26.279134 master-1 kubenswrapper[4771]: I1011 11:17:26.279124 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="extract-utilities" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: E1011 11:17:26.279198 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" containerName="install-certs-networker-deploy-networkers" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: I1011 11:17:26.279216 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" containerName="install-certs-networker-deploy-networkers" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: E1011 11:17:26.279246 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="extract-content" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: I1011 11:17:26.279259 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="extract-content" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: E1011 11:17:26.279286 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="registry-server" Oct 11 11:17:26.279313 master-1 kubenswrapper[4771]: I1011 11:17:26.279299 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="registry-server" Oct 11 11:17:26.279918 master-1 kubenswrapper[4771]: I1011 11:17:26.279827 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e295eb4-9a6f-493c-a3a9-85508f5d6f3c" containerName="install-certs-networker-deploy-networkers" Oct 11 11:17:26.279918 master-1 kubenswrapper[4771]: I1011 11:17:26.279887 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d441233-bd29-43df-9709-b6619d4ee7cb" containerName="registry-server" Oct 11 11:17:26.281345 master-1 kubenswrapper[4771]: I1011 11:17:26.281290 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.285239 master-1 kubenswrapper[4771]: I1011 11:17:26.285144 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:17:26.285875 master-1 kubenswrapper[4771]: I1011 11:17:26.285822 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 11 11:17:26.287635 master-1 kubenswrapper[4771]: I1011 11:17:26.286276 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:17:26.287635 master-1 kubenswrapper[4771]: I1011 11:17:26.286318 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:17:26.295664 master-1 kubenswrapper[4771]: I1011 11:17:26.295593 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-networker-deploy-networkers-t2hq7"] Oct 11 11:17:26.400556 master-1 kubenswrapper[4771]: I1011 11:17:26.400346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7bwg\" (UniqueName: \"kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.400556 master-1 kubenswrapper[4771]: I1011 11:17:26.400505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.401126 master-1 kubenswrapper[4771]: I1011 11:17:26.400983 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.402164 master-1 kubenswrapper[4771]: I1011 11:17:26.402092 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.402459 master-1 kubenswrapper[4771]: I1011 11:17:26.402392 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.506944 master-1 kubenswrapper[4771]: I1011 11:17:26.506793 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.507286 master-1 kubenswrapper[4771]: I1011 11:17:26.507177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " 
pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.507286 master-1 kubenswrapper[4771]: I1011 11:17:26.507268 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.507480 master-1 kubenswrapper[4771]: I1011 11:17:26.507434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7bwg\" (UniqueName: \"kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.507575 master-1 kubenswrapper[4771]: I1011 11:17:26.507509 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.510016 master-1 kubenswrapper[4771]: I1011 11:17:26.509926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.514719 master-1 kubenswrapper[4771]: I1011 11:17:26.514638 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.517419 master-1 kubenswrapper[4771]: I1011 11:17:26.517307 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.527506 master-1 kubenswrapper[4771]: I1011 11:17:26.525694 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.548317 master-1 kubenswrapper[4771]: I1011 11:17:26.548210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7bwg\" (UniqueName: \"kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg\") pod \"ovn-networker-deploy-networkers-t2hq7\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:26.610535 master-1 kubenswrapper[4771]: I1011 11:17:26.610446 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:17:27.256507 master-1 kubenswrapper[4771]: I1011 11:17:27.256427 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-networker-deploy-networkers-t2hq7"] Oct 11 11:17:27.270331 master-1 kubenswrapper[4771]: W1011 11:17:27.270251 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8ccb67e_b425_48ea_a221_5991d470f77e.slice/crio-c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f WatchSource:0}: Error finding container c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f: Status 404 returned error can't find the container with id c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f Oct 11 11:17:28.159844 master-1 kubenswrapper[4771]: I1011 11:17:28.159727 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-networker-deploy-networkers-t2hq7" event={"ID":"e8ccb67e-b425-48ea-a221-5991d470f77e","Type":"ContainerStarted","Data":"ec469c05a1e6c2dba56551c551b8625e8765aef989a9c4f27620156c2420a755"} Oct 11 11:17:28.159844 master-1 kubenswrapper[4771]: I1011 11:17:28.159837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-networker-deploy-networkers-t2hq7" event={"ID":"e8ccb67e-b425-48ea-a221-5991d470f77e","Type":"ContainerStarted","Data":"c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f"} Oct 11 11:17:28.191781 master-1 kubenswrapper[4771]: I1011 11:17:28.191677 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-networker-deploy-networkers-t2hq7" podStartSLOduration=1.758368256 podStartE2EDuration="2.191653792s" podCreationTimestamp="2025-10-11 11:17:26 +0000 UTC" firstStartedPulling="2025-10-11 11:17:27.27391078 +0000 UTC m=+3079.248137231" lastFinishedPulling="2025-10-11 11:17:27.707196326 +0000 UTC m=+3079.681422767" 
observedRunningTime="2025-10-11 11:17:28.187890614 +0000 UTC m=+3080.162117095" watchObservedRunningTime="2025-10-11 11:17:28.191653792 +0000 UTC m=+3080.165880233" Oct 11 11:18:29.464256 master-2 kubenswrapper[4776]: I1011 11:18:29.464185 4776 generic.go:334] "Generic (PLEG): container finished" podID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerID="5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f" exitCode=0 Oct 11 11:18:29.464256 master-2 kubenswrapper[4776]: I1011 11:18:29.464246 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerDied","Data":"5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f"} Oct 11 11:18:31.002771 master-2 kubenswrapper[4776]: I1011 11:18:31.002708 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:18:31.038194 master-2 kubenswrapper[4776]: I1011 11:18:31.038120 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038393 master-2 kubenswrapper[4776]: I1011 11:18:31.038250 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038438 master-2 kubenswrapper[4776]: I1011 11:18:31.038406 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod 
\"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038479 master-2 kubenswrapper[4776]: I1011 11:18:31.038444 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038517 master-2 kubenswrapper[4776]: I1011 11:18:31.038502 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.042839 master-2 kubenswrapper[4776]: I1011 11:18:31.042777 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.043783 master-2 kubenswrapper[4776]: I1011 11:18:31.043733 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w" (OuterVolumeSpecName: "kube-api-access-dp57w") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "kube-api-access-dp57w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:18:31.066487 master-2 kubenswrapper[4776]: I1011 11:18:31.066429 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory" (OuterVolumeSpecName: "inventory") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.067009 master-2 kubenswrapper[4776]: I1011 11:18:31.066962 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.069055 master-2 kubenswrapper[4776]: I1011 11:18:31.068999 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140776 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140821 4776 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140832 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140842 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140850 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480711 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerDied","Data":"21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db"} Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480760 4776 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db" Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480739 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:18:31.650568 master-1 kubenswrapper[4771]: I1011 11:18:31.650478 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-dataplane-edpm-btpl5"] Oct 11 11:18:31.652407 master-1 kubenswrapper[4771]: I1011 11:18:31.652318 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.655129 master-1 kubenswrapper[4771]: I1011 11:18:31.655066 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 11 11:18:31.655299 master-1 kubenswrapper[4771]: I1011 11:18:31.655225 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 11 11:18:31.657249 master-1 kubenswrapper[4771]: I1011 11:18:31.657033 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:18:31.684646 master-1 kubenswrapper[4771]: I1011 11:18:31.682718 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-dataplane-edpm-btpl5"] Oct 11 11:18:31.722454 master-1 kubenswrapper[4771]: I1011 11:18:31.722385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.722649 master-1 kubenswrapper[4771]: I1011 11:18:31.722545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.722649 master-1 kubenswrapper[4771]: I1011 11:18:31.722584 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.722649 master-1 kubenswrapper[4771]: I1011 11:18:31.722640 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.722757 master-1 kubenswrapper[4771]: I1011 11:18:31.722695 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcx4\" (UniqueName: \"kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.722824 master-1 kubenswrapper[4771]: I1011 11:18:31.722781 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824722 master-1 kubenswrapper[4771]: I1011 11:18:31.824669 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824982 master-1 kubenswrapper[4771]: I1011 11:18:31.824754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824982 master-1 kubenswrapper[4771]: I1011 11:18:31.824782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824982 master-1 kubenswrapper[4771]: I1011 11:18:31.824816 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824982 master-1 kubenswrapper[4771]: I1011 11:18:31.824861 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bzcx4\" (UniqueName: \"kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.824982 master-1 kubenswrapper[4771]: I1011 11:18:31.824926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.829103 master-1 kubenswrapper[4771]: I1011 11:18:31.829051 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.830523 master-1 kubenswrapper[4771]: I1011 11:18:31.830452 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.830747 master-1 kubenswrapper[4771]: I1011 11:18:31.830581 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " 
pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.832186 master-1 kubenswrapper[4771]: I1011 11:18:31.832150 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.835143 master-1 kubenswrapper[4771]: I1011 11:18:31.835112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.851380 master-1 kubenswrapper[4771]: I1011 11:18:31.851313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcx4\" (UniqueName: \"kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4\") pod \"neutron-metadata-dataplane-edpm-btpl5\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") " pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:31.982788 master-1 kubenswrapper[4771]: I1011 11:18:31.982584 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" Oct 11 11:18:32.617628 master-1 kubenswrapper[4771]: I1011 11:18:32.617568 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-dataplane-edpm-btpl5"] Oct 11 11:18:32.624034 master-1 kubenswrapper[4771]: W1011 11:18:32.623871 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2abfc2a_4d79_4b42_ab00_c7ae196304f0.slice/crio-387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4 WatchSource:0}: Error finding container 387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4: Status 404 returned error can't find the container with id 387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4 Oct 11 11:18:32.900185 master-1 kubenswrapper[4771]: I1011 11:18:32.900118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" event={"ID":"e2abfc2a-4d79-4b42-ab00-c7ae196304f0","Type":"ContainerStarted","Data":"387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4"} Oct 11 11:18:33.915428 master-1 kubenswrapper[4771]: I1011 11:18:33.915195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" event={"ID":"e2abfc2a-4d79-4b42-ab00-c7ae196304f0","Type":"ContainerStarted","Data":"eb46d2fb69688ed3bc12562768989eccbd773169ee43961ad0896310902fea8e"} Oct 11 11:18:33.952815 master-1 kubenswrapper[4771]: I1011 11:18:33.952669 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" podStartSLOduration=2.433834096 podStartE2EDuration="2.952642327s" podCreationTimestamp="2025-10-11 11:18:31 +0000 UTC" firstStartedPulling="2025-10-11 11:18:32.629270189 +0000 UTC m=+3144.603496630" lastFinishedPulling="2025-10-11 11:18:33.14807838 +0000 UTC m=+3145.122304861" 
observedRunningTime="2025-10-11 11:18:33.942309071 +0000 UTC m=+3145.916535592" watchObservedRunningTime="2025-10-11 11:18:33.952642327 +0000 UTC m=+3145.926868798" Oct 11 11:18:55.149192 master-1 kubenswrapper[4771]: I1011 11:18:55.149117 4771 generic.go:334] "Generic (PLEG): container finished" podID="e8ccb67e-b425-48ea-a221-5991d470f77e" containerID="ec469c05a1e6c2dba56551c551b8625e8765aef989a9c4f27620156c2420a755" exitCode=0 Oct 11 11:18:55.149846 master-1 kubenswrapper[4771]: I1011 11:18:55.149175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-networker-deploy-networkers-t2hq7" event={"ID":"e8ccb67e-b425-48ea-a221-5991d470f77e","Type":"ContainerDied","Data":"ec469c05a1e6c2dba56551c551b8625e8765aef989a9c4f27620156c2420a755"} Oct 11 11:18:56.349239 master-2 kubenswrapper[4776]: I1011 11:18:56.349068 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: E1011 11:18:56.349469 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: I1011 11:18:56.349484 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: I1011 11:18:56.349697 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.351035 master-2 kubenswrapper[4776]: I1011 11:18:56.350985 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.378752 master-2 kubenswrapper[4776]: I1011 11:18:56.373904 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:56.496633 master-2 kubenswrapper[4776]: I1011 11:18:56.496561 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.496988 master-2 kubenswrapper[4776]: I1011 11:18:56.496743 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.496988 master-2 kubenswrapper[4776]: I1011 11:18:56.496817 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598411 master-2 kubenswrapper[4776]: I1011 11:18:56.598331 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598696 master-2 
kubenswrapper[4776]: I1011 11:18:56.598463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598696 master-2 kubenswrapper[4776]: I1011 11:18:56.598531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.599008 master-2 kubenswrapper[4776]: I1011 11:18:56.598972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.599059 master-2 kubenswrapper[4776]: I1011 11:18:56.599012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.624881 master-2 kubenswrapper[4776]: I1011 11:18:56.624781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 
11:18:56.699896 master-2 kubenswrapper[4776]: I1011 11:18:56.699835 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.748012 master-1 kubenswrapper[4771]: I1011 11:18:56.747775 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:18:56.852912 master-1 kubenswrapper[4771]: I1011 11:18:56.852845 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle\") pod \"e8ccb67e-b425-48ea-a221-5991d470f77e\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " Oct 11 11:18:56.853144 master-1 kubenswrapper[4771]: I1011 11:18:56.853022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7bwg\" (UniqueName: \"kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg\") pod \"e8ccb67e-b425-48ea-a221-5991d470f77e\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " Oct 11 11:18:56.853208 master-1 kubenswrapper[4771]: I1011 11:18:56.853190 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key\") pod \"e8ccb67e-b425-48ea-a221-5991d470f77e\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " Oct 11 11:18:56.853411 master-1 kubenswrapper[4771]: I1011 11:18:56.853361 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory\") pod \"e8ccb67e-b425-48ea-a221-5991d470f77e\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " Oct 11 11:18:56.853479 master-1 kubenswrapper[4771]: I1011 11:18:56.853461 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0\") pod \"e8ccb67e-b425-48ea-a221-5991d470f77e\" (UID: \"e8ccb67e-b425-48ea-a221-5991d470f77e\") " Oct 11 11:18:56.857187 master-1 kubenswrapper[4771]: I1011 11:18:56.857094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg" (OuterVolumeSpecName: "kube-api-access-h7bwg") pod "e8ccb67e-b425-48ea-a221-5991d470f77e" (UID: "e8ccb67e-b425-48ea-a221-5991d470f77e"). InnerVolumeSpecName "kube-api-access-h7bwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:18:56.857392 master-1 kubenswrapper[4771]: I1011 11:18:56.857324 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e8ccb67e-b425-48ea-a221-5991d470f77e" (UID: "e8ccb67e-b425-48ea-a221-5991d470f77e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:56.884528 master-1 kubenswrapper[4771]: I1011 11:18:56.884443 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e8ccb67e-b425-48ea-a221-5991d470f77e" (UID: "e8ccb67e-b425-48ea-a221-5991d470f77e"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:18:56.887327 master-1 kubenswrapper[4771]: I1011 11:18:56.886159 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory" (OuterVolumeSpecName: "inventory") pod "e8ccb67e-b425-48ea-a221-5991d470f77e" (UID: "e8ccb67e-b425-48ea-a221-5991d470f77e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:56.890602 master-1 kubenswrapper[4771]: I1011 11:18:56.890488 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e8ccb67e-b425-48ea-a221-5991d470f77e" (UID: "e8ccb67e-b425-48ea-a221-5991d470f77e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:56.956939 master-1 kubenswrapper[4771]: I1011 11:18:56.956869 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:18:56.956939 master-1 kubenswrapper[4771]: I1011 11:18:56.956923 4771 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8ccb67e-b425-48ea-a221-5991d470f77e-ovncontroller-config-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:18:56.956939 master-1 kubenswrapper[4771]: I1011 11:18:56.956936 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ovn-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:18:56.956939 master-1 kubenswrapper[4771]: I1011 11:18:56.956946 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7bwg\" (UniqueName: 
\"kubernetes.io/projected/e8ccb67e-b425-48ea-a221-5991d470f77e-kube-api-access-h7bwg\") on node \"master-1\" DevicePath \"\"" Oct 11 11:18:56.956939 master-1 kubenswrapper[4771]: I1011 11:18:56.956960 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8ccb67e-b425-48ea-a221-5991d470f77e-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:18:57.182341 master-1 kubenswrapper[4771]: I1011 11:18:57.181316 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-networker-deploy-networkers-t2hq7" event={"ID":"e8ccb67e-b425-48ea-a221-5991d470f77e","Type":"ContainerDied","Data":"c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f"} Oct 11 11:18:57.182341 master-1 kubenswrapper[4771]: I1011 11:18:57.181394 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c868682262a8cb25efda7afaad016dfa9d45f465eeb3b92c6f246cbcf4775a5f" Oct 11 11:18:57.182341 master-1 kubenswrapper[4771]: I1011 11:18:57.181370 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-networker-deploy-networkers-t2hq7" Oct 11 11:18:57.187277 master-2 kubenswrapper[4776]: I1011 11:18:57.187206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:57.191450 master-2 kubenswrapper[4776]: W1011 11:18:57.191359 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5435850_e29f_489e_8534_a73e291e2ae7.slice/crio-cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7 WatchSource:0}: Error finding container cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7: Status 404 returned error can't find the container with id cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7 Oct 11 11:18:57.311975 master-1 kubenswrapper[4771]: I1011 11:18:57.311896 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-networker-deploy-networkers-p85s9"] Oct 11 11:18:57.312332 master-1 kubenswrapper[4771]: E1011 11:18:57.312298 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ccb67e-b425-48ea-a221-5991d470f77e" containerName="ovn-networker-deploy-networkers" Oct 11 11:18:57.312332 master-1 kubenswrapper[4771]: I1011 11:18:57.312323 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ccb67e-b425-48ea-a221-5991d470f77e" containerName="ovn-networker-deploy-networkers" Oct 11 11:18:57.312999 master-1 kubenswrapper[4771]: I1011 11:18:57.312543 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ccb67e-b425-48ea-a221-5991d470f77e" containerName="ovn-networker-deploy-networkers" Oct 11 11:18:57.313350 master-1 kubenswrapper[4771]: I1011 11:18:57.313317 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.317628 master-1 kubenswrapper[4771]: I1011 11:18:57.317597 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers"
Oct 11 11:18:57.330470 master-1 kubenswrapper[4771]: I1011 11:18:57.330414 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-networker-deploy-networkers-p85s9"]
Oct 11 11:18:57.477450 master-1 kubenswrapper[4771]: I1011 11:18:57.477241 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4sxx\" (UniqueName: \"kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.477450 master-1 kubenswrapper[4771]: I1011 11:18:57.477405 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.477768 master-1 kubenswrapper[4771]: I1011 11:18:57.477458 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.477768 master-1 kubenswrapper[4771]: I1011 11:18:57.477676 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.477868 master-1 kubenswrapper[4771]: I1011 11:18:57.477787 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.477919 master-1 kubenswrapper[4771]: I1011 11:18:57.477867 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.580417 master-1 kubenswrapper[4771]: I1011 11:18:57.580235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4sxx\" (UniqueName: \"kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.580980 master-1 kubenswrapper[4771]: I1011 11:18:57.580933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.580980 master-1 kubenswrapper[4771]: I1011 11:18:57.580989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.581603 master-1 kubenswrapper[4771]: I1011 11:18:57.581568 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.581749 master-1 kubenswrapper[4771]: I1011 11:18:57.581683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.581749 master-1 kubenswrapper[4771]: I1011 11:18:57.581714 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.585514 master-1 kubenswrapper[4771]: I1011 11:18:57.585453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.585514 master-1 kubenswrapper[4771]: I1011 11:18:57.585472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.586201 master-1 kubenswrapper[4771]: I1011 11:18:57.586134 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.586307 master-1 kubenswrapper[4771]: I1011 11:18:57.586283 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.588458 master-1 kubenswrapper[4771]: I1011 11:18:57.587642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.601365 master-1 kubenswrapper[4771]: I1011 11:18:57.601316 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4sxx\" (UniqueName: \"kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx\") pod \"neutron-metadata-networker-deploy-networkers-p85s9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.630910 master-1 kubenswrapper[4771]: I1011 11:18:57.630833 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9"
Oct 11 11:18:57.729497 master-2 kubenswrapper[4776]: I1011 11:18:57.729420 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de" exitCode=0
Oct 11 11:18:57.730069 master-2 kubenswrapper[4776]: I1011 11:18:57.729520 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"}
Oct 11 11:18:57.730069 master-2 kubenswrapper[4776]: I1011 11:18:57.729707 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerStarted","Data":"cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7"}
Oct 11 11:18:58.213259 master-1 kubenswrapper[4771]: I1011 11:18:58.213202 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-networker-deploy-networkers-p85s9"]
Oct 11 11:18:58.739170 master-2 kubenswrapper[4776]: I1011 11:18:58.739102 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9" exitCode=0
Oct 11 11:18:58.739170 master-2 kubenswrapper[4776]: I1011 11:18:58.739156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"}
Oct 11 11:18:59.204114 master-1 kubenswrapper[4771]: I1011 11:18:59.204058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" event={"ID":"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9","Type":"ContainerStarted","Data":"92717e94e2058eedb11e073abfbbd407a95049fc4e25548ae9458ea5df577b0d"}
Oct 11 11:18:59.204114 master-1 kubenswrapper[4771]: I1011 11:18:59.204119 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" event={"ID":"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9","Type":"ContainerStarted","Data":"29bf8aee9b4a890043d1883e13d1e18024c66bd6e92a8b19f57d72fb24e4d8ab"}
Oct 11 11:18:59.231686 master-1 kubenswrapper[4771]: I1011 11:18:59.231576 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" podStartSLOduration=1.672994216 podStartE2EDuration="2.231557037s" podCreationTimestamp="2025-10-11 11:18:57 +0000 UTC" firstStartedPulling="2025-10-11 11:18:58.232921943 +0000 UTC m=+3170.207148424" lastFinishedPulling="2025-10-11 11:18:58.791484784 +0000 UTC m=+3170.765711245" observedRunningTime="2025-10-11 11:18:59.228922401 +0000 UTC m=+3171.203148892" watchObservedRunningTime="2025-10-11 11:18:59.231557037 +0000 UTC m=+3171.205783488"
Oct 11 11:18:59.749694 master-2 kubenswrapper[4776]: I1011 11:18:59.749612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerStarted","Data":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"}
Oct 11 11:18:59.781510 master-2 kubenswrapper[4776]: I1011 11:18:59.781426 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdcmh" podStartSLOduration=2.33312347 podStartE2EDuration="3.781408176s" podCreationTimestamp="2025-10-11 11:18:56 +0000 UTC" firstStartedPulling="2025-10-11 11:18:57.731905585 +0000 UTC m=+3172.516332294" lastFinishedPulling="2025-10-11 11:18:59.180190291 +0000 UTC m=+3173.964617000" observedRunningTime="2025-10-11 11:18:59.773651737 +0000 UTC m=+3174.558078446" watchObservedRunningTime="2025-10-11 11:18:59.781408176 +0000 UTC m=+3174.565834885"
Oct 11 11:19:06.700753 master-2 kubenswrapper[4776]: I1011 11:19:06.700624 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:06.700753 master-2 kubenswrapper[4776]: I1011 11:19:06.700758 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:06.759754 master-2 kubenswrapper[4776]: I1011 11:19:06.759437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:06.869476 master-2 kubenswrapper[4776]: I1011 11:19:06.869385 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:07.028241 master-2 kubenswrapper[4776]: I1011 11:19:07.028062 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"]
Oct 11 11:19:08.830263 master-2 kubenswrapper[4776]: I1011 11:19:08.830187 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qdcmh" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server" containerID="cri-o://3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" gracePeriod=2
Oct 11 11:19:09.414750 master-2 kubenswrapper[4776]: I1011 11:19:09.414653 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:09.489211 master-2 kubenswrapper[4776]: I1011 11:19:09.489148 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") "
Oct 11 11:19:09.489468 master-2 kubenswrapper[4776]: I1011 11:19:09.489246 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") "
Oct 11 11:19:09.489468 master-2 kubenswrapper[4776]: I1011 11:19:09.489305 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") "
Oct 11 11:19:09.490298 master-2 kubenswrapper[4776]: I1011 11:19:09.490250 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities" (OuterVolumeSpecName: "utilities") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:19:09.492650 master-2 kubenswrapper[4776]: I1011 11:19:09.492586 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt" (OuterVolumeSpecName: "kube-api-access-h5zpt") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "kube-api-access-h5zpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:19:09.592844 master-2 kubenswrapper[4776]: I1011 11:19:09.592699 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") on node \"master-2\" DevicePath \"\""
Oct 11 11:19:09.592844 master-2 kubenswrapper[4776]: I1011 11:19:09.592776 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") on node \"master-2\" DevicePath \"\""
Oct 11 11:19:09.699937 master-2 kubenswrapper[4776]: I1011 11:19:09.699863 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:19:09.796133 master-2 kubenswrapper[4776]: I1011 11:19:09.796060 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") on node \"master-2\" DevicePath \"\""
Oct 11 11:19:09.839592 master-2 kubenswrapper[4776]: I1011 11:19:09.839536 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" exitCode=0
Oct 11 11:19:09.839592 master-2 kubenswrapper[4776]: I1011 11:19:09.839593 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"}
Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839611 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh"
Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839641 4776 scope.go:117] "RemoveContainer" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"
Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7"}
Oct 11 11:19:09.859501 master-2 kubenswrapper[4776]: I1011 11:19:09.859408 4776 scope.go:117] "RemoveContainer" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"
Oct 11 11:19:09.881155 master-2 kubenswrapper[4776]: I1011 11:19:09.880863 4776 scope.go:117] "RemoveContainer" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"
Oct 11 11:19:09.886157 master-2 kubenswrapper[4776]: I1011 11:19:09.886098 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"]
Oct 11 11:19:09.894530 master-2 kubenswrapper[4776]: I1011 11:19:09.894476 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"]
Oct 11 11:19:09.918191 master-2 kubenswrapper[4776]: I1011 11:19:09.918061 4776 scope.go:117] "RemoveContainer" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"
Oct 11 11:19:09.918854 master-2 kubenswrapper[4776]: E1011 11:19:09.918805 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": container with ID starting with 3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9 not found: ID does not exist" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"
Oct 11 11:19:09.918980 master-2 kubenswrapper[4776]: I1011 11:19:09.918862 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"} err="failed to get container status \"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": rpc error: code = NotFound desc = could not find container \"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": container with ID starting with 3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9 not found: ID does not exist"
Oct 11 11:19:09.918980 master-2 kubenswrapper[4776]: I1011 11:19:09.918883 4776 scope.go:117] "RemoveContainer" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"
Oct 11 11:19:09.919306 master-2 kubenswrapper[4776]: E1011 11:19:09.919272 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": container with ID starting with 83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9 not found: ID does not exist" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"
Oct 11 11:19:09.919306 master-2 kubenswrapper[4776]: I1011 11:19:09.919298 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"} err="failed to get container status \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": rpc error: code = NotFound desc = could not find container \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": container with ID starting with 83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9 not found: ID does not exist"
Oct 11 11:19:09.919439 master-2 kubenswrapper[4776]: I1011 11:19:09.919310 4776 scope.go:117] "RemoveContainer" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"
Oct 11 11:19:09.919643 master-2 kubenswrapper[4776]: E1011 11:19:09.919618 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": container with ID starting with 4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de not found: ID does not exist" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"
Oct 11 11:19:09.919740 master-2 kubenswrapper[4776]: I1011 11:19:09.919646 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"} err="failed to get container status \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": rpc error: code = NotFound desc = could not find container \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": container with ID starting with 4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de not found: ID does not exist"
Oct 11 11:19:10.070758 master-2 kubenswrapper[4776]: I1011 11:19:10.070694 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" path="/var/lib/kubelet/pods/f5435850-e29f-489e-8534-a73e291e2ae7/volumes"
Oct 11 11:19:35.595755 master-1 kubenswrapper[4771]: I1011 11:19:35.595698 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2abfc2a-4d79-4b42-ab00-c7ae196304f0" containerID="eb46d2fb69688ed3bc12562768989eccbd773169ee43961ad0896310902fea8e" exitCode=0
Oct 11 11:19:35.595755 master-1 kubenswrapper[4771]: I1011 11:19:35.595766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" event={"ID":"e2abfc2a-4d79-4b42-ab00-c7ae196304f0","Type":"ContainerDied","Data":"eb46d2fb69688ed3bc12562768989eccbd773169ee43961ad0896310902fea8e"}
Oct 11 11:19:37.160073 master-1 kubenswrapper[4771]: I1011 11:19:37.159903 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-dataplane-edpm-btpl5"
Oct 11 11:19:37.297085 master-1 kubenswrapper[4771]: I1011 11:19:37.297006 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.297411 master-1 kubenswrapper[4771]: I1011 11:19:37.297349 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.297480 master-1 kubenswrapper[4771]: I1011 11:19:37.297431 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.297569 master-1 kubenswrapper[4771]: I1011 11:19:37.297510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.297648 master-1 kubenswrapper[4771]: I1011 11:19:37.297595 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.297648 master-1 kubenswrapper[4771]: I1011 11:19:37.297640 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzcx4\" (UniqueName: \"kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4\") pod \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\" (UID: \"e2abfc2a-4d79-4b42-ab00-c7ae196304f0\") "
Oct 11 11:19:37.301153 master-1 kubenswrapper[4771]: I1011 11:19:37.301096 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:19:37.302125 master-1 kubenswrapper[4771]: I1011 11:19:37.302051 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4" (OuterVolumeSpecName: "kube-api-access-bzcx4") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "kube-api-access-bzcx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:19:37.323133 master-1 kubenswrapper[4771]: I1011 11:19:37.323072 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:19:37.328680 master-1 kubenswrapper[4771]: I1011 11:19:37.328633 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory" (OuterVolumeSpecName: "inventory") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:19:37.339450 master-1 kubenswrapper[4771]: I1011 11:19:37.339341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:19:37.352269 master-1 kubenswrapper[4771]: I1011 11:19:37.351271 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2abfc2a-4d79-4b42-ab00-c7ae196304f0" (UID: "e2abfc2a-4d79-4b42-ab00-c7ae196304f0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 11:19:37.400549 master-1 kubenswrapper[4771]: I1011 11:19:37.400447 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-metadata-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.400549 master-1 kubenswrapper[4771]: I1011 11:19:37.400534 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.400549 master-1 kubenswrapper[4771]: I1011 11:19:37.400554 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzcx4\" (UniqueName: \"kubernetes.io/projected/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-kube-api-access-bzcx4\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.400917 master-1 kubenswrapper[4771]: I1011 11:19:37.400571 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-inventory\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.400917 master-1 kubenswrapper[4771]: I1011 11:19:37.400584 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-nova-metadata-neutron-config-0\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.400917 master-1 kubenswrapper[4771]: I1011 11:19:37.400598 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2abfc2a-4d79-4b42-ab00-c7ae196304f0-ssh-key\") on node \"master-1\" DevicePath \"\""
Oct 11 11:19:37.617016 master-1 kubenswrapper[4771]: I1011 11:19:37.616945 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-dataplane-edpm-btpl5" event={"ID":"e2abfc2a-4d79-4b42-ab00-c7ae196304f0","Type":"ContainerDied","Data":"387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4"}
Oct 11 11:19:37.617016 master-1 kubenswrapper[4771]: I1011 11:19:37.617015 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387c5dbd0c37c97f3a2383936a353b710430ae82ed847558aff32c3c182df4b4"
Oct 11 11:19:37.617397 master-1 kubenswrapper[4771]: I1011 11:19:37.617144 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-dataplane-edpm-btpl5"
Oct 11 11:19:37.758716 master-1 kubenswrapper[4771]: I1011 11:19:37.758565 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-dataplane-edpm-9sl9q"]
Oct 11 11:19:37.759124 master-1 kubenswrapper[4771]: E1011 11:19:37.759013 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2abfc2a-4d79-4b42-ab00-c7ae196304f0" containerName="neutron-metadata-dataplane-edpm"
Oct 11 11:19:37.759124 master-1 kubenswrapper[4771]: I1011 11:19:37.759028 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2abfc2a-4d79-4b42-ab00-c7ae196304f0" containerName="neutron-metadata-dataplane-edpm"
Oct 11 11:19:37.759289 master-1 kubenswrapper[4771]: I1011 11:19:37.759237 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2abfc2a-4d79-4b42-ab00-c7ae196304f0" containerName="neutron-metadata-dataplane-edpm"
Oct 11 11:19:37.760074 master-1 kubenswrapper[4771]: I1011 11:19:37.760055 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.763460 master-1 kubenswrapper[4771]: I1011 11:19:37.763408 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm"
Oct 11 11:19:37.766563 master-1 kubenswrapper[4771]: I1011 11:19:37.766522 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Oct 11 11:19:37.782119 master-1 kubenswrapper[4771]: I1011 11:19:37.782041 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-dataplane-edpm-9sl9q"]
Oct 11 11:19:37.807112 master-1 kubenswrapper[4771]: I1011 11:19:37.807038 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.807334 master-1 kubenswrapper[4771]: I1011 11:19:37.807128 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j577p\" (UniqueName: \"kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.807334 master-1 kubenswrapper[4771]: I1011 11:19:37.807221 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.807334 master-1 kubenswrapper[4771]: I1011 11:19:37.807264 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.807334 master-1 kubenswrapper[4771]: I1011 11:19:37.807302 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.908830 master-1 kubenswrapper[4771]: I1011 11:19:37.908778 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.909159 master-1 kubenswrapper[4771]: I1011 11:19:37.909143 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.909264 master-1 kubenswrapper[4771]: I1011 11:19:37.909251 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.909448 master-1 kubenswrapper[4771]: I1011 11:19:37.909433 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.909556 master-1 kubenswrapper[4771]: I1011 11:19:37.909543 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j577p\" (UniqueName: \"kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.913760 master-1 kubenswrapper[4771]: I1011 11:19:37.913720 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.914891 master-1 kubenswrapper[4771]: I1011 11:19:37.914847 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.915103 master-1 kubenswrapper[4771]: I1011 11:19:37.915035 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.916298 master-1 kubenswrapper[4771]: I1011 11:19:37.915864 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:37.937406 master-1 kubenswrapper[4771]: I1011 11:19:37.937317 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j577p\" (UniqueName: \"kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p\") pod \"libvirt-dataplane-edpm-9sl9q\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:38.153684 master-1 kubenswrapper[4771]: I1011 11:19:38.153334 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-dataplane-edpm-9sl9q"
Oct 11 11:19:38.720552 master-1 kubenswrapper[4771]: I1011 11:19:38.720463 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-dataplane-edpm-9sl9q"]
Oct 11 11:19:38.726702 master-1 kubenswrapper[4771]: W1011 11:19:38.726663 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85772d7d_920d_478b_88f8_6b7f135c79f4.slice/crio-469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65 WatchSource:0}: Error finding container 469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65: Status 404 returned error can't find the container with id 469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65
Oct 11 11:19:39.643549 master-1 kubenswrapper[4771]: I1011 11:19:39.643460 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-dataplane-edpm-9sl9q" event={"ID":"85772d7d-920d-478b-88f8-6b7f135c79f4","Type":"ContainerStarted","Data":"5be7a0054307e1243100b620013bd83d4151a9dcec02bed8d5e9fdb9ac6d5d35"}
Oct 11 11:19:39.643549 master-1
kubenswrapper[4771]: I1011 11:19:39.643541 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-dataplane-edpm-9sl9q" event={"ID":"85772d7d-920d-478b-88f8-6b7f135c79f4","Type":"ContainerStarted","Data":"469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65"} Oct 11 11:19:39.677136 master-1 kubenswrapper[4771]: I1011 11:19:39.677006 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-dataplane-edpm-9sl9q" podStartSLOduration=2.169731144 podStartE2EDuration="2.676975833s" podCreationTimestamp="2025-10-11 11:19:37 +0000 UTC" firstStartedPulling="2025-10-11 11:19:38.728961182 +0000 UTC m=+3210.703187663" lastFinishedPulling="2025-10-11 11:19:39.236205901 +0000 UTC m=+3211.210432352" observedRunningTime="2025-10-11 11:19:39.670065105 +0000 UTC m=+3211.644291626" watchObservedRunningTime="2025-10-11 11:19:39.676975833 +0000 UTC m=+3211.651202314" Oct 11 11:20:14.033114 master-1 kubenswrapper[4771]: I1011 11:20:14.033039 4771 generic.go:334] "Generic (PLEG): container finished" podID="050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" containerID="92717e94e2058eedb11e073abfbbd407a95049fc4e25548ae9458ea5df577b0d" exitCode=0 Oct 11 11:20:14.033114 master-1 kubenswrapper[4771]: I1011 11:20:14.033070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" event={"ID":"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9","Type":"ContainerDied","Data":"92717e94e2058eedb11e073abfbbd407a95049fc4e25548ae9458ea5df577b0d"} Oct 11 11:20:15.781612 master-1 kubenswrapper[4771]: I1011 11:20:15.781543 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" Oct 11 11:20:15.881071 master-1 kubenswrapper[4771]: I1011 11:20:15.880968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 11:20:15.881427 master-1 kubenswrapper[4771]: I1011 11:20:15.881154 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 11:20:15.881590 master-1 kubenswrapper[4771]: I1011 11:20:15.881534 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 11:20:15.881732 master-1 kubenswrapper[4771]: I1011 11:20:15.881686 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 11:20:15.881820 master-1 kubenswrapper[4771]: I1011 11:20:15.881763 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4sxx\" (UniqueName: \"kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 
11:20:15.881820 master-1 kubenswrapper[4771]: I1011 11:20:15.881807 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle\") pod \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\" (UID: \"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9\") " Oct 11 11:20:15.887927 master-1 kubenswrapper[4771]: I1011 11:20:15.887857 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:20:15.888489 master-1 kubenswrapper[4771]: I1011 11:20:15.888412 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx" (OuterVolumeSpecName: "kube-api-access-n4sxx") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "kube-api-access-n4sxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:20:15.917054 master-1 kubenswrapper[4771]: I1011 11:20:15.916965 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:20:15.933189 master-1 kubenswrapper[4771]: I1011 11:20:15.933124 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:20:15.934695 master-1 kubenswrapper[4771]: I1011 11:20:15.934604 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:20:15.937053 master-1 kubenswrapper[4771]: I1011 11:20:15.936542 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory" (OuterVolumeSpecName: "inventory") pod "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" (UID: "050ae5f1-9a55-4c0b-8b83-f353aff0b3a9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:20:15.984220 master-1 kubenswrapper[4771]: I1011 11:20:15.984136 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:15.984220 master-1 kubenswrapper[4771]: I1011 11:20:15.984194 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4sxx\" (UniqueName: \"kubernetes.io/projected/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-kube-api-access-n4sxx\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:15.984220 master-1 kubenswrapper[4771]: I1011 11:20:15.984211 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-neutron-metadata-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:15.984220 master-1 kubenswrapper[4771]: I1011 11:20:15.984228 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-nova-metadata-neutron-config-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:15.984652 master-1 kubenswrapper[4771]: I1011 11:20:15.984242 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:15.984652 master-1 kubenswrapper[4771]: I1011 11:20:15.984256 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ae5f1-9a55-4c0b-8b83-f353aff0b3a9-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:20:16.061144 master-1 kubenswrapper[4771]: I1011 11:20:16.061066 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" event={"ID":"050ae5f1-9a55-4c0b-8b83-f353aff0b3a9","Type":"ContainerDied","Data":"29bf8aee9b4a890043d1883e13d1e18024c66bd6e92a8b19f57d72fb24e4d8ab"} Oct 11 11:20:16.061611 master-1 kubenswrapper[4771]: I1011 11:20:16.061590 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bf8aee9b4a890043d1883e13d1e18024c66bd6e92a8b19f57d72fb24e4d8ab" Oct 11 11:20:16.062091 master-1 kubenswrapper[4771]: I1011 11:20:16.061188 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-networker-deploy-networkers-p85s9" Oct 11 11:21:34.115548 master-1 kubenswrapper[4771]: I1011 11:21:34.115417 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:34.116576 master-1 kubenswrapper[4771]: E1011 11:21:34.115801 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" containerName="neutron-metadata-networker-deploy-networkers" Oct 11 11:21:34.116576 master-1 kubenswrapper[4771]: I1011 11:21:34.115816 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" containerName="neutron-metadata-networker-deploy-networkers" Oct 11 11:21:34.116576 master-1 kubenswrapper[4771]: I1011 11:21:34.115964 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ae5f1-9a55-4c0b-8b83-f353aff0b3a9" containerName="neutron-metadata-networker-deploy-networkers" Oct 11 11:21:34.117388 master-1 kubenswrapper[4771]: I1011 11:21:34.117335 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.155270 master-1 kubenswrapper[4771]: I1011 11:21:34.155135 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:34.204932 master-1 kubenswrapper[4771]: I1011 11:21:34.204872 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.205338 master-1 kubenswrapper[4771]: I1011 11:21:34.205013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.205338 master-1 kubenswrapper[4771]: I1011 11:21:34.205100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrd5q\" (UniqueName: \"kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.307480 master-1 kubenswrapper[4771]: I1011 11:21:34.307382 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.307744 master-1 kubenswrapper[4771]: 
I1011 11:21:34.307559 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.307744 master-1 kubenswrapper[4771]: I1011 11:21:34.307685 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrd5q\" (UniqueName: \"kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.308236 master-1 kubenswrapper[4771]: I1011 11:21:34.307954 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.308867 master-1 kubenswrapper[4771]: I1011 11:21:34.308395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.336311 master-1 kubenswrapper[4771]: I1011 11:21:34.336240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrd5q\" (UniqueName: \"kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q\") pod \"redhat-marketplace-tvfgn\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.447257 master-1 
kubenswrapper[4771]: I1011 11:21:34.446645 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:34.941748 master-1 kubenswrapper[4771]: I1011 11:21:34.941686 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:35.950029 master-1 kubenswrapper[4771]: I1011 11:21:35.949954 4771 generic.go:334] "Generic (PLEG): container finished" podID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerID="1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142" exitCode=0 Oct 11 11:21:35.950796 master-1 kubenswrapper[4771]: I1011 11:21:35.950085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerDied","Data":"1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142"} Oct 11 11:21:35.950796 master-1 kubenswrapper[4771]: I1011 11:21:35.950152 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerStarted","Data":"bb0de95a85571686983444043ed59014f3eebfb860364be2717ad7ec4716bf05"} Oct 11 11:21:35.953197 master-1 kubenswrapper[4771]: I1011 11:21:35.953042 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:21:37.983507 master-1 kubenswrapper[4771]: I1011 11:21:37.983423 4771 generic.go:334] "Generic (PLEG): container finished" podID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerID="80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09" exitCode=0 Oct 11 11:21:37.984525 master-1 kubenswrapper[4771]: I1011 11:21:37.983514 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" 
event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerDied","Data":"80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09"} Oct 11 11:21:38.998597 master-1 kubenswrapper[4771]: I1011 11:21:38.998486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerStarted","Data":"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e"} Oct 11 11:21:39.040416 master-1 kubenswrapper[4771]: I1011 11:21:39.040227 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tvfgn" podStartSLOduration=2.5833395919999997 podStartE2EDuration="5.040194241s" podCreationTimestamp="2025-10-11 11:21:34 +0000 UTC" firstStartedPulling="2025-10-11 11:21:35.952927383 +0000 UTC m=+3327.927153824" lastFinishedPulling="2025-10-11 11:21:38.409782032 +0000 UTC m=+3330.384008473" observedRunningTime="2025-10-11 11:21:39.034613631 +0000 UTC m=+3331.008840072" watchObservedRunningTime="2025-10-11 11:21:39.040194241 +0000 UTC m=+3331.014420682" Oct 11 11:21:44.450428 master-1 kubenswrapper[4771]: I1011 11:21:44.450131 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:44.450428 master-1 kubenswrapper[4771]: I1011 11:21:44.450287 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:44.503093 master-1 kubenswrapper[4771]: I1011 11:21:44.503025 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:45.114200 master-1 kubenswrapper[4771]: I1011 11:21:45.114130 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:45.194182 master-1 kubenswrapper[4771]: 
I1011 11:21:45.194109 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:47.083758 master-1 kubenswrapper[4771]: I1011 11:21:47.083674 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tvfgn" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="registry-server" containerID="cri-o://db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e" gracePeriod=2 Oct 11 11:21:47.691930 master-1 kubenswrapper[4771]: I1011 11:21:47.691861 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:47.754461 master-1 kubenswrapper[4771]: I1011 11:21:47.754308 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities\") pod \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " Oct 11 11:21:47.754461 master-1 kubenswrapper[4771]: I1011 11:21:47.754466 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content\") pod \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " Oct 11 11:21:47.754831 master-1 kubenswrapper[4771]: I1011 11:21:47.754558 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrd5q\" (UniqueName: \"kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q\") pod \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\" (UID: \"127d4d79-6da3-4be4-9271-fb04d6d3fb06\") " Oct 11 11:21:47.756082 master-1 kubenswrapper[4771]: I1011 11:21:47.756002 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities" (OuterVolumeSpecName: "utilities") pod "127d4d79-6da3-4be4-9271-fb04d6d3fb06" (UID: "127d4d79-6da3-4be4-9271-fb04d6d3fb06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:21:47.757889 master-1 kubenswrapper[4771]: I1011 11:21:47.757829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q" (OuterVolumeSpecName: "kube-api-access-jrd5q") pod "127d4d79-6da3-4be4-9271-fb04d6d3fb06" (UID: "127d4d79-6da3-4be4-9271-fb04d6d3fb06"). InnerVolumeSpecName "kube-api-access-jrd5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:21:47.778803 master-1 kubenswrapper[4771]: I1011 11:21:47.778671 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "127d4d79-6da3-4be4-9271-fb04d6d3fb06" (UID: "127d4d79-6da3-4be4-9271-fb04d6d3fb06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:21:47.858131 master-1 kubenswrapper[4771]: I1011 11:21:47.857982 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 11:21:47.858131 master-1 kubenswrapper[4771]: I1011 11:21:47.858051 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127d4d79-6da3-4be4-9271-fb04d6d3fb06-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 11:21:47.858131 master-1 kubenswrapper[4771]: I1011 11:21:47.858075 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrd5q\" (UniqueName: \"kubernetes.io/projected/127d4d79-6da3-4be4-9271-fb04d6d3fb06-kube-api-access-jrd5q\") on node \"master-1\" DevicePath \"\"" Oct 11 11:21:48.100759 master-1 kubenswrapper[4771]: I1011 11:21:48.100510 4771 generic.go:334] "Generic (PLEG): container finished" podID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerID="db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e" exitCode=0 Oct 11 11:21:48.100759 master-1 kubenswrapper[4771]: I1011 11:21:48.100610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerDied","Data":"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e"} Oct 11 11:21:48.100759 master-1 kubenswrapper[4771]: I1011 11:21:48.100651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvfgn" event={"ID":"127d4d79-6da3-4be4-9271-fb04d6d3fb06","Type":"ContainerDied","Data":"bb0de95a85571686983444043ed59014f3eebfb860364be2717ad7ec4716bf05"} Oct 11 11:21:48.100759 master-1 kubenswrapper[4771]: I1011 11:21:48.100693 4771 scope.go:117] "RemoveContainer" 
containerID="db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e" Oct 11 11:21:48.102107 master-1 kubenswrapper[4771]: I1011 11:21:48.100884 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvfgn" Oct 11 11:21:48.132104 master-1 kubenswrapper[4771]: I1011 11:21:48.131968 4771 scope.go:117] "RemoveContainer" containerID="80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09" Oct 11 11:21:48.166226 master-1 kubenswrapper[4771]: I1011 11:21:48.165401 4771 scope.go:117] "RemoveContainer" containerID="1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142" Oct 11 11:21:48.173916 master-1 kubenswrapper[4771]: I1011 11:21:48.173819 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:48.183082 master-1 kubenswrapper[4771]: I1011 11:21:48.183019 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvfgn"] Oct 11 11:21:48.230943 master-1 kubenswrapper[4771]: I1011 11:21:48.230863 4771 scope.go:117] "RemoveContainer" containerID="db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e" Oct 11 11:21:48.231505 master-1 kubenswrapper[4771]: E1011 11:21:48.231464 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e\": container with ID starting with db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e not found: ID does not exist" containerID="db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e" Oct 11 11:21:48.232241 master-1 kubenswrapper[4771]: I1011 11:21:48.231514 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e"} err="failed to get container status 
\"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e\": rpc error: code = NotFound desc = could not find container \"db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e\": container with ID starting with db17f449ae0fb02b4d0c2bab45419797ba50f59a59b6ac45e42a324754b1a94e not found: ID does not exist" Oct 11 11:21:48.232241 master-1 kubenswrapper[4771]: I1011 11:21:48.231550 4771 scope.go:117] "RemoveContainer" containerID="80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09" Oct 11 11:21:48.232241 master-1 kubenswrapper[4771]: E1011 11:21:48.231995 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09\": container with ID starting with 80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09 not found: ID does not exist" containerID="80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09" Oct 11 11:21:48.232241 master-1 kubenswrapper[4771]: I1011 11:21:48.232084 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09"} err="failed to get container status \"80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09\": rpc error: code = NotFound desc = could not find container \"80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09\": container with ID starting with 80a371a805e99adac53136620f611284a90e75bd54e82d550133c8126203cc09 not found: ID does not exist" Oct 11 11:21:48.232241 master-1 kubenswrapper[4771]: I1011 11:21:48.232172 4771 scope.go:117] "RemoveContainer" containerID="1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142" Oct 11 11:21:48.233058 master-1 kubenswrapper[4771]: E1011 11:21:48.232995 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142\": container with ID starting with 1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142 not found: ID does not exist" containerID="1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142" Oct 11 11:21:48.233239 master-1 kubenswrapper[4771]: I1011 11:21:48.233069 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142"} err="failed to get container status \"1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142\": rpc error: code = NotFound desc = could not find container \"1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142\": container with ID starting with 1e35b38f0ac33b45160c40a25839f70590318147b90ddd186179931705b30142 not found: ID does not exist" Oct 11 11:21:48.452635 master-1 kubenswrapper[4771]: I1011 11:21:48.452421 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" path="/var/lib/kubelet/pods/127d4d79-6da3-4be4-9271-fb04d6d3fb06/volumes" Oct 11 11:24:22.981757 master-1 kubenswrapper[4771]: I1011 11:24:22.981654 4771 generic.go:334] "Generic (PLEG): container finished" podID="85772d7d-920d-478b-88f8-6b7f135c79f4" containerID="5be7a0054307e1243100b620013bd83d4151a9dcec02bed8d5e9fdb9ac6d5d35" exitCode=0 Oct 11 11:24:22.981757 master-1 kubenswrapper[4771]: I1011 11:24:22.981741 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-dataplane-edpm-9sl9q" event={"ID":"85772d7d-920d-478b-88f8-6b7f135c79f4","Type":"ContainerDied","Data":"5be7a0054307e1243100b620013bd83d4151a9dcec02bed8d5e9fdb9ac6d5d35"} Oct 11 11:24:24.623392 master-1 kubenswrapper[4771]: I1011 11:24:24.623249 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-dataplane-edpm-9sl9q" Oct 11 11:24:24.764607 master-1 kubenswrapper[4771]: I1011 11:24:24.763817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle\") pod \"85772d7d-920d-478b-88f8-6b7f135c79f4\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " Oct 11 11:24:24.764607 master-1 kubenswrapper[4771]: I1011 11:24:24.763996 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key\") pod \"85772d7d-920d-478b-88f8-6b7f135c79f4\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " Oct 11 11:24:24.764607 master-1 kubenswrapper[4771]: I1011 11:24:24.764022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0\") pod \"85772d7d-920d-478b-88f8-6b7f135c79f4\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " Oct 11 11:24:24.764607 master-1 kubenswrapper[4771]: I1011 11:24:24.764182 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j577p\" (UniqueName: \"kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p\") pod \"85772d7d-920d-478b-88f8-6b7f135c79f4\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " Oct 11 11:24:24.764607 master-1 kubenswrapper[4771]: I1011 11:24:24.764234 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory\") pod \"85772d7d-920d-478b-88f8-6b7f135c79f4\" (UID: \"85772d7d-920d-478b-88f8-6b7f135c79f4\") " Oct 11 11:24:24.769772 master-1 kubenswrapper[4771]: I1011 11:24:24.769687 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "85772d7d-920d-478b-88f8-6b7f135c79f4" (UID: "85772d7d-920d-478b-88f8-6b7f135c79f4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:24:24.770491 master-1 kubenswrapper[4771]: I1011 11:24:24.770436 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p" (OuterVolumeSpecName: "kube-api-access-j577p") pod "85772d7d-920d-478b-88f8-6b7f135c79f4" (UID: "85772d7d-920d-478b-88f8-6b7f135c79f4"). InnerVolumeSpecName "kube-api-access-j577p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:24:24.791387 master-1 kubenswrapper[4771]: I1011 11:24:24.791233 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory" (OuterVolumeSpecName: "inventory") pod "85772d7d-920d-478b-88f8-6b7f135c79f4" (UID: "85772d7d-920d-478b-88f8-6b7f135c79f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:24:24.796185 master-1 kubenswrapper[4771]: I1011 11:24:24.796090 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85772d7d-920d-478b-88f8-6b7f135c79f4" (UID: "85772d7d-920d-478b-88f8-6b7f135c79f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:24:24.799351 master-1 kubenswrapper[4771]: I1011 11:24:24.799241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "85772d7d-920d-478b-88f8-6b7f135c79f4" (UID: "85772d7d-920d-478b-88f8-6b7f135c79f4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:24:24.867128 master-1 kubenswrapper[4771]: I1011 11:24:24.866988 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:24:24.867128 master-1 kubenswrapper[4771]: I1011 11:24:24.867052 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:24:24.867128 master-1 kubenswrapper[4771]: I1011 11:24:24.867085 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:24:24.867128 master-1 kubenswrapper[4771]: I1011 11:24:24.867104 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85772d7d-920d-478b-88f8-6b7f135c79f4-libvirt-secret-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:24:24.867128 master-1 kubenswrapper[4771]: I1011 11:24:24.867121 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j577p\" (UniqueName: \"kubernetes.io/projected/85772d7d-920d-478b-88f8-6b7f135c79f4-kube-api-access-j577p\") on node \"master-1\" DevicePath \"\"" Oct 11 11:24:25.010591 master-1 kubenswrapper[4771]: I1011 
11:24:25.010466 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-dataplane-edpm-9sl9q" event={"ID":"85772d7d-920d-478b-88f8-6b7f135c79f4","Type":"ContainerDied","Data":"469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65"} Oct 11 11:24:25.010591 master-1 kubenswrapper[4771]: I1011 11:24:25.010540 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469581d1f2ba91444a34fa09ee246cb89f27d8a7c5197db8e53776ec31618f65" Oct 11 11:24:25.010591 master-1 kubenswrapper[4771]: I1011 11:24:25.010580 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-dataplane-edpm-9sl9q" Oct 11 11:24:25.140554 master-1 kubenswrapper[4771]: I1011 11:24:25.140286 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-dataplane-edpm-w7vwd"] Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: E1011 11:24:25.140856 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="registry-server" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: I1011 11:24:25.140878 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="registry-server" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: E1011 11:24:25.140920 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="extract-content" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: I1011 11:24:25.140928 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="extract-content" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: E1011 11:24:25.140952 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="extract-utilities" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: I1011 
11:24:25.140959 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="extract-utilities" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: E1011 11:24:25.140976 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85772d7d-920d-478b-88f8-6b7f135c79f4" containerName="libvirt-dataplane-edpm" Oct 11 11:24:25.140980 master-1 kubenswrapper[4771]: I1011 11:24:25.140985 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="85772d7d-920d-478b-88f8-6b7f135c79f4" containerName="libvirt-dataplane-edpm" Oct 11 11:24:25.141348 master-1 kubenswrapper[4771]: I1011 11:24:25.141203 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="85772d7d-920d-478b-88f8-6b7f135c79f4" containerName="libvirt-dataplane-edpm" Oct 11 11:24:25.141348 master-1 kubenswrapper[4771]: I1011 11:24:25.141246 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="127d4d79-6da3-4be4-9271-fb04d6d3fb06" containerName="registry-server" Oct 11 11:24:25.142266 master-1 kubenswrapper[4771]: I1011 11:24:25.142201 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.145958 master-1 kubenswrapper[4771]: I1011 11:24:25.145890 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 11 11:24:25.146228 master-1 kubenswrapper[4771]: I1011 11:24:25.146197 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 11 11:24:25.146309 master-1 kubenswrapper[4771]: I1011 11:24:25.146219 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:24:25.146441 master-1 kubenswrapper[4771]: I1011 11:24:25.146410 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:24:25.148214 master-1 kubenswrapper[4771]: I1011 11:24:25.148175 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:24:25.172031 master-1 kubenswrapper[4771]: I1011 11:24:25.171849 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-dataplane-edpm-w7vwd"] Oct 11 11:24:25.275455 master-1 kubenswrapper[4771]: I1011 11:24:25.275346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.275712 master-1 kubenswrapper[4771]: I1011 11:24:25.275512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " 
pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.275790 master-1 kubenswrapper[4771]: I1011 11:24:25.275708 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.276014 master-1 kubenswrapper[4771]: I1011 11:24:25.275963 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.276142 master-1 kubenswrapper[4771]: I1011 11:24:25.276108 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9vd\" (UniqueName: \"kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.276281 master-1 kubenswrapper[4771]: I1011 11:24:25.276238 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.276452 master-1 kubenswrapper[4771]: I1011 11:24:25.276399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.276540 master-1 kubenswrapper[4771]: I1011 11:24:25.276464 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378309 master-1 kubenswrapper[4771]: I1011 11:24:25.378192 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378401 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378437 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378802 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.378878 master-1 kubenswrapper[4771]: I1011 11:24:25.378885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.379319 master-1 kubenswrapper[4771]: I1011 11:24:25.378935 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9vd\" (UniqueName: \"kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.384083 master-1 kubenswrapper[4771]: I1011 11:24:25.384015 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.384351 master-1 kubenswrapper[4771]: I1011 11:24:25.384293 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.384558 master-1 kubenswrapper[4771]: I1011 11:24:25.384482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.385080 master-1 kubenswrapper[4771]: I1011 11:24:25.385031 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.387313 master-1 kubenswrapper[4771]: I1011 11:24:25.387226 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.387588 master-1 kubenswrapper[4771]: I1011 11:24:25.387337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.389239 master-1 kubenswrapper[4771]: I1011 11:24:25.389165 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.404932 master-1 kubenswrapper[4771]: I1011 11:24:25.404867 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9vd\" (UniqueName: \"kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd\") pod \"nova-dataplane-edpm-w7vwd\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:25.470388 master-1 kubenswrapper[4771]: I1011 11:24:25.470263 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:24:26.117304 master-1 kubenswrapper[4771]: I1011 11:24:26.117227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-dataplane-edpm-w7vwd"] Oct 11 11:24:26.122285 master-1 kubenswrapper[4771]: W1011 11:24:26.122218 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06076001_16e9_4f5c_91c5_cf4a70441a10.slice/crio-29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f WatchSource:0}: Error finding container 29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f: Status 404 returned error can't find the container with id 29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f Oct 11 11:24:27.035627 master-1 kubenswrapper[4771]: I1011 11:24:27.035548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-dataplane-edpm-w7vwd" event={"ID":"06076001-16e9-4f5c-91c5-cf4a70441a10","Type":"ContainerStarted","Data":"6a4c70e45eeacacda98edd84f348a353424f270b816032a92a2760dc6b017867"} Oct 11 11:24:27.035627 master-1 kubenswrapper[4771]: I1011 11:24:27.035629 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-dataplane-edpm-w7vwd" event={"ID":"06076001-16e9-4f5c-91c5-cf4a70441a10","Type":"ContainerStarted","Data":"29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f"} Oct 11 11:24:27.070739 master-1 kubenswrapper[4771]: I1011 11:24:27.070389 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-dataplane-edpm-w7vwd" podStartSLOduration=1.577823888 podStartE2EDuration="2.070336608s" podCreationTimestamp="2025-10-11 11:24:25 +0000 UTC" firstStartedPulling="2025-10-11 11:24:26.125792726 +0000 UTC m=+3498.100019207" lastFinishedPulling="2025-10-11 11:24:26.618305436 +0000 UTC m=+3498.592531927" observedRunningTime="2025-10-11 11:24:27.057233673 +0000 UTC m=+3499.031460144" 
watchObservedRunningTime="2025-10-11 11:24:27.070336608 +0000 UTC m=+3499.044563079" Oct 11 11:26:18.125219 master-1 kubenswrapper[4771]: I1011 11:26:18.125134 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:18.128246 master-1 kubenswrapper[4771]: I1011 11:26:18.128204 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.163511 master-1 kubenswrapper[4771]: I1011 11:26:18.162993 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:18.282720 master-1 kubenswrapper[4771]: I1011 11:26:18.282641 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.282970 master-1 kubenswrapper[4771]: I1011 11:26:18.282780 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.282970 master-1 kubenswrapper[4771]: I1011 11:26:18.282804 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblgx\" (UniqueName: \"kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.384918 master-1 kubenswrapper[4771]: I1011 
11:26:18.384771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.384918 master-1 kubenswrapper[4771]: I1011 11:26:18.384916 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.385160 master-1 kubenswrapper[4771]: I1011 11:26:18.384942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblgx\" (UniqueName: \"kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.386318 master-1 kubenswrapper[4771]: I1011 11:26:18.385779 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.386318 master-1 kubenswrapper[4771]: I1011 11:26:18.385851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.408379 master-1 
kubenswrapper[4771]: I1011 11:26:18.408319 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblgx\" (UniqueName: \"kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx\") pod \"community-operators-w2wmc\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.447489 master-1 kubenswrapper[4771]: I1011 11:26:18.447429 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:18.945112 master-1 kubenswrapper[4771]: I1011 11:26:18.945066 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:19.300790 master-1 kubenswrapper[4771]: I1011 11:26:19.300570 4771 generic.go:334] "Generic (PLEG): container finished" podID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerID="fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab" exitCode=0 Oct 11 11:26:19.300790 master-1 kubenswrapper[4771]: I1011 11:26:19.300662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerDied","Data":"fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab"} Oct 11 11:26:19.300790 master-1 kubenswrapper[4771]: I1011 11:26:19.300765 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerStarted","Data":"4d4154e2ffc86d619d5ee16fedba8023b4cee821e2a1d036f9f04cd2a1ed8991"} Oct 11 11:26:20.317897 master-1 kubenswrapper[4771]: I1011 11:26:20.317825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" 
event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerStarted","Data":"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9"} Oct 11 11:26:21.334278 master-1 kubenswrapper[4771]: I1011 11:26:21.334159 4771 generic.go:334] "Generic (PLEG): container finished" podID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerID="2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9" exitCode=0 Oct 11 11:26:21.334278 master-1 kubenswrapper[4771]: I1011 11:26:21.334251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerDied","Data":"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9"} Oct 11 11:26:22.351302 master-1 kubenswrapper[4771]: I1011 11:26:22.351228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerStarted","Data":"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d"} Oct 11 11:26:22.388735 master-1 kubenswrapper[4771]: I1011 11:26:22.388617 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2wmc" podStartSLOduration=1.691812393 podStartE2EDuration="4.38858277s" podCreationTimestamp="2025-10-11 11:26:18 +0000 UTC" firstStartedPulling="2025-10-11 11:26:19.303108801 +0000 UTC m=+3611.277335282" lastFinishedPulling="2025-10-11 11:26:21.999879218 +0000 UTC m=+3613.974105659" observedRunningTime="2025-10-11 11:26:22.38612508 +0000 UTC m=+3614.360351581" watchObservedRunningTime="2025-10-11 11:26:22.38858277 +0000 UTC m=+3614.362809251" Oct 11 11:26:28.454687 master-1 kubenswrapper[4771]: I1011 11:26:28.454526 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:28.455662 master-1 kubenswrapper[4771]: I1011 
11:26:28.455634 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:28.508718 master-1 kubenswrapper[4771]: I1011 11:26:28.508664 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:29.490727 master-1 kubenswrapper[4771]: I1011 11:26:29.490629 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:29.587304 master-1 kubenswrapper[4771]: I1011 11:26:29.587204 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:31.452251 master-1 kubenswrapper[4771]: I1011 11:26:31.452137 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2wmc" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="registry-server" containerID="cri-o://b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d" gracePeriod=2 Oct 11 11:26:32.070792 master-1 kubenswrapper[4771]: I1011 11:26:32.070762 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:32.176903 master-1 kubenswrapper[4771]: I1011 11:26:32.176821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content\") pod \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " Oct 11 11:26:32.177141 master-1 kubenswrapper[4771]: I1011 11:26:32.176975 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xblgx\" (UniqueName: \"kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx\") pod \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " Oct 11 11:26:32.177997 master-1 kubenswrapper[4771]: I1011 11:26:32.177899 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities\") pod \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\" (UID: \"8814fe28-55c9-4c68-b4a9-eb28935ac4e6\") " Oct 11 11:26:32.179903 master-1 kubenswrapper[4771]: I1011 11:26:32.179824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities" (OuterVolumeSpecName: "utilities") pod "8814fe28-55c9-4c68-b4a9-eb28935ac4e6" (UID: "8814fe28-55c9-4c68-b4a9-eb28935ac4e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:26:32.181719 master-1 kubenswrapper[4771]: I1011 11:26:32.181662 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx" (OuterVolumeSpecName: "kube-api-access-xblgx") pod "8814fe28-55c9-4c68-b4a9-eb28935ac4e6" (UID: "8814fe28-55c9-4c68-b4a9-eb28935ac4e6"). 
InnerVolumeSpecName "kube-api-access-xblgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:26:32.228398 master-1 kubenswrapper[4771]: I1011 11:26:32.228288 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8814fe28-55c9-4c68-b4a9-eb28935ac4e6" (UID: "8814fe28-55c9-4c68-b4a9-eb28935ac4e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:26:32.281873 master-1 kubenswrapper[4771]: I1011 11:26:32.281808 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-catalog-content\") on node \"master-1\" DevicePath \"\"" Oct 11 11:26:32.281873 master-1 kubenswrapper[4771]: I1011 11:26:32.281850 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xblgx\" (UniqueName: \"kubernetes.io/projected/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-kube-api-access-xblgx\") on node \"master-1\" DevicePath \"\"" Oct 11 11:26:32.281873 master-1 kubenswrapper[4771]: I1011 11:26:32.281867 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8814fe28-55c9-4c68-b4a9-eb28935ac4e6-utilities\") on node \"master-1\" DevicePath \"\"" Oct 11 11:26:32.462879 master-1 kubenswrapper[4771]: I1011 11:26:32.462800 4771 generic.go:334] "Generic (PLEG): container finished" podID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerID="b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d" exitCode=0 Oct 11 11:26:32.462879 master-1 kubenswrapper[4771]: I1011 11:26:32.462879 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" 
event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerDied","Data":"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d"} Oct 11 11:26:32.464505 master-1 kubenswrapper[4771]: I1011 11:26:32.462920 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2wmc" Oct 11 11:26:32.464505 master-1 kubenswrapper[4771]: I1011 11:26:32.462948 4771 scope.go:117] "RemoveContainer" containerID="b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d" Oct 11 11:26:32.464505 master-1 kubenswrapper[4771]: I1011 11:26:32.462926 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2wmc" event={"ID":"8814fe28-55c9-4c68-b4a9-eb28935ac4e6","Type":"ContainerDied","Data":"4d4154e2ffc86d619d5ee16fedba8023b4cee821e2a1d036f9f04cd2a1ed8991"} Oct 11 11:26:32.490310 master-1 kubenswrapper[4771]: I1011 11:26:32.490212 4771 scope.go:117] "RemoveContainer" containerID="2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9" Oct 11 11:26:32.518690 master-1 kubenswrapper[4771]: I1011 11:26:32.518506 4771 scope.go:117] "RemoveContainer" containerID="fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab" Oct 11 11:26:32.521338 master-1 kubenswrapper[4771]: I1011 11:26:32.521228 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:32.534638 master-1 kubenswrapper[4771]: I1011 11:26:32.534562 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2wmc"] Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: I1011 11:26:32.580851 4771 scope.go:117] "RemoveContainer" containerID="b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: E1011 11:26:32.581615 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d\": container with ID starting with b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d not found: ID does not exist" containerID="b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: I1011 11:26:32.581666 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d"} err="failed to get container status \"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d\": rpc error: code = NotFound desc = could not find container \"b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d\": container with ID starting with b45ca70a1032969936a1590655655e760522fa0042fc4efd3da92629d1ae6d2d not found: ID does not exist" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: I1011 11:26:32.581696 4771 scope.go:117] "RemoveContainer" containerID="2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: E1011 11:26:32.582459 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9\": container with ID starting with 2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9 not found: ID does not exist" containerID="2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: I1011 11:26:32.582516 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9"} err="failed to get container status \"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9\": rpc error: code = NotFound desc = could not find container 
\"2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9\": container with ID starting with 2767272dcf855a5de6642a9b5c31317bbeb02d57ce1544951aabe1404080fda9 not found: ID does not exist" Oct 11 11:26:32.582678 master-1 kubenswrapper[4771]: I1011 11:26:32.582556 4771 scope.go:117] "RemoveContainer" containerID="fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab" Oct 11 11:26:32.583167 master-1 kubenswrapper[4771]: E1011 11:26:32.583039 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab\": container with ID starting with fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab not found: ID does not exist" containerID="fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab" Oct 11 11:26:32.583167 master-1 kubenswrapper[4771]: I1011 11:26:32.583066 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab"} err="failed to get container status \"fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab\": rpc error: code = NotFound desc = could not find container \"fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab\": container with ID starting with fde57b07ddd785eb1143c36abc3c8d980d901ef8b84dd027eddebee0a6b62eab not found: ID does not exist" Oct 11 11:26:34.452545 master-1 kubenswrapper[4771]: I1011 11:26:34.452442 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" path="/var/lib/kubelet/pods/8814fe28-55c9-4c68-b4a9-eb28935ac4e6/volumes" Oct 11 11:28:10.665117 master-1 kubenswrapper[4771]: I1011 11:28:10.665025 4771 generic.go:334] "Generic (PLEG): container finished" podID="06076001-16e9-4f5c-91c5-cf4a70441a10" containerID="6a4c70e45eeacacda98edd84f348a353424f270b816032a92a2760dc6b017867" exitCode=0 
Oct 11 11:28:10.665117 master-1 kubenswrapper[4771]: I1011 11:28:10.665090 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-dataplane-edpm-w7vwd" event={"ID":"06076001-16e9-4f5c-91c5-cf4a70441a10","Type":"ContainerDied","Data":"6a4c70e45eeacacda98edd84f348a353424f270b816032a92a2760dc6b017867"} Oct 11 11:28:12.312501 master-1 kubenswrapper[4771]: I1011 11:28:12.312457 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:28:12.399045 master-1 kubenswrapper[4771]: I1011 11:28:12.398985 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399277 master-1 kubenswrapper[4771]: I1011 11:28:12.399133 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399277 master-1 kubenswrapper[4771]: I1011 11:28:12.399270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9vd\" (UniqueName: \"kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399372 master-1 kubenswrapper[4771]: I1011 11:28:12.399341 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1\") pod 
\"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399425 master-1 kubenswrapper[4771]: I1011 11:28:12.399378 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399425 master-1 kubenswrapper[4771]: I1011 11:28:12.399398 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399499 master-1 kubenswrapper[4771]: I1011 11:28:12.399437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.399499 master-1 kubenswrapper[4771]: I1011 11:28:12.399464 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle\") pod \"06076001-16e9-4f5c-91c5-cf4a70441a10\" (UID: \"06076001-16e9-4f5c-91c5-cf4a70441a10\") " Oct 11 11:28:12.403179 master-1 kubenswrapper[4771]: I1011 11:28:12.403074 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd" (OuterVolumeSpecName: "kube-api-access-5c9vd") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "kube-api-access-5c9vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:28:12.405473 master-1 kubenswrapper[4771]: I1011 11:28:12.405435 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.422606 master-1 kubenswrapper[4771]: I1011 11:28:12.422526 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.426011 master-1 kubenswrapper[4771]: I1011 11:28:12.425920 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.429100 master-1 kubenswrapper[4771]: I1011 11:28:12.429047 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.447213 master-1 kubenswrapper[4771]: I1011 11:28:12.447149 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory" (OuterVolumeSpecName: "inventory") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.447996 master-1 kubenswrapper[4771]: I1011 11:28:12.447842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.449075 master-1 kubenswrapper[4771]: I1011 11:28:12.449028 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "06076001-16e9-4f5c-91c5-cf4a70441a10" (UID: "06076001-16e9-4f5c-91c5-cf4a70441a10"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503280 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9vd\" (UniqueName: \"kubernetes.io/projected/06076001-16e9-4f5c-91c5-cf4a70441a10-kube-api-access-5c9vd\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503346 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-1\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503398 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-inventory\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503417 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503436 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-1\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503456 4771 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503482 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-cell1-compute-config-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.504275 master-1 kubenswrapper[4771]: I1011 11:28:12.503500 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/06076001-16e9-4f5c-91c5-cf4a70441a10-nova-migration-ssh-key-0\") on node \"master-1\" DevicePath \"\"" Oct 11 11:28:12.687213 master-1 kubenswrapper[4771]: I1011 11:28:12.687113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-dataplane-edpm-w7vwd" event={"ID":"06076001-16e9-4f5c-91c5-cf4a70441a10","Type":"ContainerDied","Data":"29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f"} Oct 11 11:28:12.687213 master-1 kubenswrapper[4771]: I1011 11:28:12.687166 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ce132ef5120873821167a745fbe8c43a3904410b9c4ead60e6d296b2b0a14f" Oct 11 11:28:12.687656 master-1 kubenswrapper[4771]: I1011 11:28:12.687265 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-dataplane-edpm-w7vwd" Oct 11 11:28:12.852506 master-1 kubenswrapper[4771]: I1011 11:28:12.852449 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-dataplane-edpm-rdjt7"] Oct 11 11:28:12.852911 master-1 kubenswrapper[4771]: E1011 11:28:12.852889 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="registry-server" Oct 11 11:28:12.852973 master-1 kubenswrapper[4771]: I1011 11:28:12.852913 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="registry-server" Oct 11 11:28:12.852973 master-1 kubenswrapper[4771]: E1011 11:28:12.852948 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="extract-content" Oct 11 11:28:12.852973 master-1 kubenswrapper[4771]: I1011 11:28:12.852957 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="extract-content" Oct 11 11:28:12.852973 master-1 kubenswrapper[4771]: E1011 11:28:12.852972 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="extract-utilities" Oct 11 11:28:12.853100 master-1 kubenswrapper[4771]: I1011 11:28:12.852982 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="extract-utilities" Oct 11 11:28:12.853100 master-1 kubenswrapper[4771]: E1011 11:28:12.852992 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06076001-16e9-4f5c-91c5-cf4a70441a10" containerName="nova-dataplane-edpm" Oct 11 11:28:12.853100 master-1 kubenswrapper[4771]: I1011 11:28:12.853001 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="06076001-16e9-4f5c-91c5-cf4a70441a10" containerName="nova-dataplane-edpm" Oct 11 11:28:12.853218 master-1 kubenswrapper[4771]: I1011 
11:28:12.853194 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="06076001-16e9-4f5c-91c5-cf4a70441a10" containerName="nova-dataplane-edpm" Oct 11 11:28:12.853265 master-1 kubenswrapper[4771]: I1011 11:28:12.853223 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8814fe28-55c9-4c68-b4a9-eb28935ac4e6" containerName="registry-server" Oct 11 11:28:12.854334 master-1 kubenswrapper[4771]: I1011 11:28:12.854298 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.857772 master-1 kubenswrapper[4771]: I1011 11:28:12.857679 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:28:12.857772 master-1 kubenswrapper[4771]: I1011 11:28:12.857703 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:28:12.857927 master-1 kubenswrapper[4771]: I1011 11:28:12.857755 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 11 11:28:12.858507 master-1 kubenswrapper[4771]: I1011 11:28:12.858478 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:28:12.860752 master-1 kubenswrapper[4771]: I1011 11:28:12.860706 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-dataplane-edpm-rdjt7"] Oct 11 11:28:12.911716 master-1 kubenswrapper[4771]: I1011 11:28:12.911599 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.911716 master-1 
kubenswrapper[4771]: I1011 11:28:12.911712 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.912031 master-1 kubenswrapper[4771]: I1011 11:28:12.911764 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.912031 master-1 kubenswrapper[4771]: I1011 11:28:12.911803 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.912031 master-1 kubenswrapper[4771]: I1011 11:28:12.911825 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzw24\" (UniqueName: \"kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.912031 master-1 kubenswrapper[4771]: I1011 11:28:12.911874 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:12.912239 master-1 kubenswrapper[4771]: I1011 11:28:12.912103 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.014741 master-1 kubenswrapper[4771]: I1011 11:28:13.014653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.014741 master-1 kubenswrapper[4771]: I1011 11:28:13.014719 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.014741 master-1 kubenswrapper[4771]: I1011 11:28:13.014745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.015215 master-1 kubenswrapper[4771]: I1011 11:28:13.014776 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.015215 master-1 kubenswrapper[4771]: I1011 11:28:13.014814 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzw24\" (UniqueName: \"kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.015215 master-1 kubenswrapper[4771]: I1011 11:28:13.014888 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.015215 master-1 kubenswrapper[4771]: I1011 11:28:13.015048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:28:13.019844 master-1 kubenswrapper[4771]: I1011 11:28:13.019786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " 
pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.020000 master-1 kubenswrapper[4771]: I1011 11:28:13.019868 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.020312 master-1 kubenswrapper[4771]: I1011 11:28:13.020265 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.021306 master-1 kubenswrapper[4771]: I1011 11:28:13.021258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.021616 master-1 kubenswrapper[4771]: I1011 11:28:13.021559 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.022816 master-1 kubenswrapper[4771]: I1011 11:28:13.022758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.050766 master-1 kubenswrapper[4771]: I1011 11:28:13.050655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzw24\" (UniqueName: \"kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24\") pod \"telemetry-dataplane-edpm-rdjt7\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.175013 master-1 kubenswrapper[4771]: I1011 11:28:13.174853 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:28:13.842056 master-1 kubenswrapper[4771]: W1011 11:28:13.841988 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1d9f67_e3e9_4545_9e5d_07f2c213fe55.slice/crio-d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6 WatchSource:0}: Error finding container d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6: Status 404 returned error can't find the container with id d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6
Oct 11 11:28:13.846909 master-1 kubenswrapper[4771]: I1011 11:28:13.846867 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 11:28:13.847120 master-1 kubenswrapper[4771]: I1011 11:28:13.847071 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-dataplane-edpm-rdjt7"]
Oct 11 11:28:14.715069 master-1 kubenswrapper[4771]: I1011 11:28:14.714985 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-dataplane-edpm-rdjt7" event={"ID":"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55","Type":"ContainerStarted","Data":"71b6cbea99b537fb97df8cac56078c84bd8fb6f72db7edf36ac99e71415f9777"}
Oct 11 11:28:14.715069 master-1 kubenswrapper[4771]: I1011 11:28:14.715073 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-dataplane-edpm-rdjt7" event={"ID":"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55","Type":"ContainerStarted","Data":"d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6"}
Oct 11 11:28:14.750576 master-1 kubenswrapper[4771]: I1011 11:28:14.750472 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-dataplane-edpm-rdjt7" podStartSLOduration=2.269454172 podStartE2EDuration="2.750451518s" podCreationTimestamp="2025-10-11 11:28:12 +0000 UTC" firstStartedPulling="2025-10-11 11:28:13.846759166 +0000 UTC m=+3725.820985637" lastFinishedPulling="2025-10-11 11:28:14.327756532 +0000 UTC m=+3726.301982983" observedRunningTime="2025-10-11 11:28:14.744323782 +0000 UTC m=+3726.718550223" watchObservedRunningTime="2025-10-11 11:28:14.750451518 +0000 UTC m=+3726.724677979"
Oct 11 11:28:34.144021 master-1 kubenswrapper[4771]: I1011 11:28:34.142746 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:34.146633 master-1 kubenswrapper[4771]: I1011 11:28:34.146487 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.178677 master-1 kubenswrapper[4771]: I1011 11:28:34.178454 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:34.242147 master-1 kubenswrapper[4771]: I1011 11:28:34.241786 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.242147 master-1 kubenswrapper[4771]: I1011 11:28:34.241877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbp6l\" (UniqueName: \"kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.242147 master-1 kubenswrapper[4771]: I1011 11:28:34.241924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.344618 master-1 kubenswrapper[4771]: I1011 11:28:34.344528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.345029 master-1 kubenswrapper[4771]: I1011 11:28:34.344674 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbp6l\" (UniqueName: \"kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.345029 master-1 kubenswrapper[4771]: I1011 11:28:34.344748 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.345944 master-1 kubenswrapper[4771]: I1011 11:28:34.345866 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.346016 master-1 kubenswrapper[4771]: I1011 11:28:34.345927 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.371484 master-1 kubenswrapper[4771]: I1011 11:28:34.371402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbp6l\" (UniqueName: \"kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l\") pod \"redhat-operators-lhdxz\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") " pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.490485 master-1 kubenswrapper[4771]: I1011 11:28:34.490292 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:34.995456 master-1 kubenswrapper[4771]: W1011 11:28:34.995369 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99934513_1a05_40c6_8b59_cca04d84b0cd.slice/crio-75d4598e2a9b6653177ee29e0428d0a916a1effcd05dcf3ff168f9729f0343d7 WatchSource:0}: Error finding container 75d4598e2a9b6653177ee29e0428d0a916a1effcd05dcf3ff168f9729f0343d7: Status 404 returned error can't find the container with id 75d4598e2a9b6653177ee29e0428d0a916a1effcd05dcf3ff168f9729f0343d7
Oct 11 11:28:34.997226 master-1 kubenswrapper[4771]: I1011 11:28:34.997182 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:35.968405 master-1 kubenswrapper[4771]: I1011 11:28:35.967859 4771 generic.go:334] "Generic (PLEG): container finished" podID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerID="ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd" exitCode=0
Oct 11 11:28:35.968405 master-1 kubenswrapper[4771]: I1011 11:28:35.967910 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerDied","Data":"ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd"}
Oct 11 11:28:35.968405 master-1 kubenswrapper[4771]: I1011 11:28:35.967938 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerStarted","Data":"75d4598e2a9b6653177ee29e0428d0a916a1effcd05dcf3ff168f9729f0343d7"}
Oct 11 11:28:37.996599 master-1 kubenswrapper[4771]: I1011 11:28:37.996461 4771 generic.go:334] "Generic (PLEG): container finished" podID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerID="7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8" exitCode=0
Oct 11 11:28:37.996599 master-1 kubenswrapper[4771]: I1011 11:28:37.996526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerDied","Data":"7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8"}
Oct 11 11:28:39.010726 master-1 kubenswrapper[4771]: I1011 11:28:39.010615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerStarted","Data":"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"}
Oct 11 11:28:39.048160 master-1 kubenswrapper[4771]: I1011 11:28:39.048009 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhdxz" podStartSLOduration=2.5197304149999997 podStartE2EDuration="5.047990424s" podCreationTimestamp="2025-10-11 11:28:34 +0000 UTC" firstStartedPulling="2025-10-11 11:28:35.974025516 +0000 UTC m=+3747.948251967" lastFinishedPulling="2025-10-11 11:28:38.502285515 +0000 UTC m=+3750.476511976" observedRunningTime="2025-10-11 11:28:39.04191988 +0000 UTC m=+3751.016146351" watchObservedRunningTime="2025-10-11 11:28:39.047990424 +0000 UTC m=+3751.022216875"
Oct 11 11:28:44.490598 master-1 kubenswrapper[4771]: I1011 11:28:44.490502 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:44.490598 master-1 kubenswrapper[4771]: I1011 11:28:44.490588 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:45.545688 master-1 kubenswrapper[4771]: I1011 11:28:45.545565 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhdxz" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="registry-server" probeResult="failure" output=<
Oct 11 11:28:45.545688 master-1 kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Oct 11 11:28:45.545688 master-1 kubenswrapper[4771]: >
Oct 11 11:28:54.573952 master-1 kubenswrapper[4771]: I1011 11:28:54.573868 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:54.662191 master-1 kubenswrapper[4771]: I1011 11:28:54.662122 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:54.830520 master-1 kubenswrapper[4771]: I1011 11:28:54.830335 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:56.194128 master-1 kubenswrapper[4771]: I1011 11:28:56.194021 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhdxz" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="registry-server" containerID="cri-o://bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37" gracePeriod=2
Oct 11 11:28:56.777694 master-1 kubenswrapper[4771]: I1011 11:28:56.777584 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:56.831719 master-1 kubenswrapper[4771]: I1011 11:28:56.831638 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbp6l\" (UniqueName: \"kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l\") pod \"99934513-1a05-40c6-8b59-cca04d84b0cd\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") "
Oct 11 11:28:56.832111 master-1 kubenswrapper[4771]: I1011 11:28:56.831836 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content\") pod \"99934513-1a05-40c6-8b59-cca04d84b0cd\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") "
Oct 11 11:28:56.832111 master-1 kubenswrapper[4771]: I1011 11:28:56.831890 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities\") pod \"99934513-1a05-40c6-8b59-cca04d84b0cd\" (UID: \"99934513-1a05-40c6-8b59-cca04d84b0cd\") "
Oct 11 11:28:56.834006 master-1 kubenswrapper[4771]: I1011 11:28:56.833922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities" (OuterVolumeSpecName: "utilities") pod "99934513-1a05-40c6-8b59-cca04d84b0cd" (UID: "99934513-1a05-40c6-8b59-cca04d84b0cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:28:56.834956 master-1 kubenswrapper[4771]: I1011 11:28:56.834891 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l" (OuterVolumeSpecName: "kube-api-access-fbp6l") pod "99934513-1a05-40c6-8b59-cca04d84b0cd" (UID: "99934513-1a05-40c6-8b59-cca04d84b0cd"). InnerVolumeSpecName "kube-api-access-fbp6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 11:28:56.920176 master-1 kubenswrapper[4771]: I1011 11:28:56.920077 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99934513-1a05-40c6-8b59-cca04d84b0cd" (UID: "99934513-1a05-40c6-8b59-cca04d84b0cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 11:28:56.934554 master-1 kubenswrapper[4771]: I1011 11:28:56.934386 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-catalog-content\") on node \"master-1\" DevicePath \"\""
Oct 11 11:28:56.934554 master-1 kubenswrapper[4771]: I1011 11:28:56.934414 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99934513-1a05-40c6-8b59-cca04d84b0cd-utilities\") on node \"master-1\" DevicePath \"\""
Oct 11 11:28:56.934554 master-1 kubenswrapper[4771]: I1011 11:28:56.934427 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbp6l\" (UniqueName: \"kubernetes.io/projected/99934513-1a05-40c6-8b59-cca04d84b0cd-kube-api-access-fbp6l\") on node \"master-1\" DevicePath \"\""
Oct 11 11:28:57.212393 master-1 kubenswrapper[4771]: I1011 11:28:57.212133 4771 generic.go:334] "Generic (PLEG): container finished" podID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerID="bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37" exitCode=0
Oct 11 11:28:57.212393 master-1 kubenswrapper[4771]: I1011 11:28:57.212228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerDied","Data":"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"}
Oct 11 11:28:57.212393 master-1 kubenswrapper[4771]: I1011 11:28:57.212282 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhdxz"
Oct 11 11:28:57.212393 master-1 kubenswrapper[4771]: I1011 11:28:57.212318 4771 scope.go:117] "RemoveContainer" containerID="bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"
Oct 11 11:28:57.213135 master-1 kubenswrapper[4771]: I1011 11:28:57.212296 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhdxz" event={"ID":"99934513-1a05-40c6-8b59-cca04d84b0cd","Type":"ContainerDied","Data":"75d4598e2a9b6653177ee29e0428d0a916a1effcd05dcf3ff168f9729f0343d7"}
Oct 11 11:28:57.249206 master-1 kubenswrapper[4771]: I1011 11:28:57.249141 4771 scope.go:117] "RemoveContainer" containerID="7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8"
Oct 11 11:28:57.283607 master-1 kubenswrapper[4771]: I1011 11:28:57.283518 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:57.291488 master-1 kubenswrapper[4771]: I1011 11:28:57.291425 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhdxz"]
Oct 11 11:28:57.301744 master-1 kubenswrapper[4771]: I1011 11:28:57.301168 4771 scope.go:117] "RemoveContainer" containerID="ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd"
Oct 11 11:28:57.328391 master-1 kubenswrapper[4771]: I1011 11:28:57.328317 4771 scope.go:117] "RemoveContainer" containerID="bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"
Oct 11 11:28:57.329548 master-1 kubenswrapper[4771]: E1011 11:28:57.329494 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37\": container with ID starting with bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37 not found: ID does not exist" containerID="bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"
Oct 11 11:28:57.329672 master-1 kubenswrapper[4771]: I1011 11:28:57.329543 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37"} err="failed to get container status \"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37\": rpc error: code = NotFound desc = could not find container \"bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37\": container with ID starting with bd68e0827538badf8a204719e423d94d72ac284e9f6fe9041ce8ef18c8fcdc37 not found: ID does not exist"
Oct 11 11:28:57.329672 master-1 kubenswrapper[4771]: I1011 11:28:57.329571 4771 scope.go:117] "RemoveContainer" containerID="7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8"
Oct 11 11:28:57.329933 master-1 kubenswrapper[4771]: E1011 11:28:57.329885 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8\": container with ID starting with 7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8 not found: ID does not exist" containerID="7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8"
Oct 11 11:28:57.329933 master-1 kubenswrapper[4771]: I1011 11:28:57.329916 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8"} err="failed to get container status \"7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8\": rpc error: code = NotFound desc = could not find container \"7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8\": container with ID starting with 7c84458d8bbe597bd2d841d515cd3513825a2c87d56edcca948b8ed78e5eb4f8 not found: ID does not exist"
Oct 11 11:28:57.329933 master-1 kubenswrapper[4771]: I1011 11:28:57.329934 4771 scope.go:117] "RemoveContainer" containerID="ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd"
Oct 11 11:28:57.330340 master-1 kubenswrapper[4771]: E1011 11:28:57.330268 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd\": container with ID starting with ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd not found: ID does not exist" containerID="ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd"
Oct 11 11:28:57.330485 master-1 kubenswrapper[4771]: I1011 11:28:57.330342 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd"} err="failed to get container status \"ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd\": rpc error: code = NotFound desc = could not find container \"ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd\": container with ID starting with ce4d5eb6c9a45eb921d73906256a74d113968eab42b2553898b933519e5b9dfd not found: ID does not exist"
Oct 11 11:28:58.449302 master-1 kubenswrapper[4771]: I1011 11:28:58.449218 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" path="/var/lib/kubelet/pods/99934513-1a05-40c6-8b59-cca04d84b0cd/volumes"
Oct 11 11:29:56.301909 master-2 kubenswrapper[4776]: I1011 11:29:56.301552 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"]
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302005 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-content"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302020 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-content"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302039 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302045 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302072 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-utilities"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-utilities"
Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302270 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server"
Oct 11 11:29:56.303523 master-2 kubenswrapper[4776]: I1011 11:29:56.303497 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.326165 master-2 kubenswrapper[4776]: I1011 11:29:56.325940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"]
Oct 11 11:29:56.426642 master-2 kubenswrapper[4776]: I1011 11:29:56.426584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.426957 master-2 kubenswrapper[4776]: I1011 11:29:56.426944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.427065 master-2 kubenswrapper[4776]: I1011 11:29:56.427052 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.529057 master-2 kubenswrapper[4776]: I1011 11:29:56.528988 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.529401 master-2 kubenswrapper[4776]: I1011 11:29:56.529377 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.529726 master-2 kubenswrapper[4776]: I1011 11:29:56.529704 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.530003 master-2 kubenswrapper[4776]: I1011 11:29:56.529927 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.530141 master-2 kubenswrapper[4776]: I1011 11:29:56.530093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.552849 master-2 kubenswrapper[4776]: I1011 11:29:56.552271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:56.629730 master-2 kubenswrapper[4776]: I1011 11:29:56.629653 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2"
Oct 11 11:29:57.139060 master-2 kubenswrapper[4776]: I1011 11:29:57.137324 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"]
Oct 11 11:29:57.139060 master-2 kubenswrapper[4776]: W1011 11:29:57.137514 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb7ba2d_b676_4dc0_8809_0613818a3ea6.slice/crio-64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb WatchSource:0}: Error finding container 64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb: Status 404 returned error can't find the container with id 64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb
Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977620 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" exitCode=0
Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977710 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed"}
Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977740 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerStarted","Data":"64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb"}
Oct 11 11:29:57.980976 master-2 kubenswrapper[4776]: I1011 11:29:57.980760 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 11:29:58.988947 master-2 kubenswrapper[4776]: I1011 11:29:58.988874 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" exitCode=0
Oct 11 11:29:58.988947 master-2 kubenswrapper[4776]: I1011 11:29:58.988932 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d"}
Oct 11 11:30:00.001540 master-2 kubenswrapper[4776]: I1011 11:30:00.001454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerStarted","Data":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"}
Oct 11 11:30:00.031506 master-2 kubenswrapper[4776]: I1011 11:30:00.031397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4z7g2" podStartSLOduration=2.5982209259999998 podStartE2EDuration="4.031376712s" podCreationTimestamp="2025-10-11 11:29:56 +0000 UTC" firstStartedPulling="2025-10-11 11:29:57.980691199 +0000 UTC m=+3832.765117908" lastFinishedPulling="2025-10-11 11:29:59.413846945 +0000 UTC m=+3834.198273694" observedRunningTime="2025-10-11 11:30:00.030076307 +0000 UTC m=+3834.814503026" watchObservedRunningTime="2025-10-11 11:30:00.031376712 +0000 UTC m=+3834.815803421"
Oct 11 11:30:00.170106 master-1 kubenswrapper[4771]: I1011 11:30:00.170045 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"]
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: E1011 11:30:00.170487 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="extract-content"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: I1011 11:30:00.170502 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="extract-content"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: E1011 11:30:00.170522 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="extract-utilities"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: I1011 11:30:00.170528 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="extract-utilities"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: E1011 11:30:00.170544 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="registry-server"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: I1011 11:30:00.170551 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="registry-server"
Oct 11 11:30:00.171039 master-1 kubenswrapper[4771]: I1011 11:30:00.170744 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="99934513-1a05-40c6-8b59-cca04d84b0cd" containerName="registry-server"
Oct 11 11:30:00.171723 master-1 kubenswrapper[4771]: I1011 11:30:00.171538 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.174386 master-1 kubenswrapper[4771]: I1011 11:30:00.174339 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 11:30:00.175084 master-1 kubenswrapper[4771]: I1011 11:30:00.174566 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 11 11:30:00.175280 master-1 kubenswrapper[4771]: I1011 11:30:00.175255 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-hbjq2"
Oct 11 11:30:00.187244 master-1 kubenswrapper[4771]: I1011 11:30:00.186868 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"]
Oct 11 11:30:00.334928 master-1 kubenswrapper[4771]: I1011 11:30:00.334844 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.335205 master-1 kubenswrapper[4771]: I1011 11:30:00.334982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r22x\" (UniqueName: \"kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.335205 master-1 kubenswrapper[4771]: I1011 11:30:00.335041 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.436824 master-1 kubenswrapper[4771]: I1011 11:30:00.436647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.437072 master-1 kubenswrapper[4771]: I1011 11:30:00.436820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.437072 master-1 kubenswrapper[4771]: I1011 11:30:00.436980 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r22x\" (UniqueName: \"kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.438649 master-1 kubenswrapper[4771]: I1011 11:30:00.438580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.443020 master-1 kubenswrapper[4771]: I1011 11:30:00.442887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.469696 master-1 kubenswrapper[4771]: I1011 11:30:00.469623 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r22x\" (UniqueName: \"kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x\") pod \"collect-profiles-29336370-9vpts\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"
Oct 11 11:30:00.515811 master-1 kubenswrapper[4771]: I1011 11:30:00.515737 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" Oct 11 11:30:01.041214 master-1 kubenswrapper[4771]: I1011 11:30:01.041149 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts"] Oct 11 11:30:01.949803 master-1 kubenswrapper[4771]: I1011 11:30:01.949744 4771 generic.go:334] "Generic (PLEG): container finished" podID="a19b5781-05b5-4109-a6e0-c54746c813a6" containerID="c64fb4f8711dfea2086b2ddbeb235c0580214bb088c49212c033f84f4ec3bced" exitCode=0 Oct 11 11:30:01.949803 master-1 kubenswrapper[4771]: I1011 11:30:01.949806 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" event={"ID":"a19b5781-05b5-4109-a6e0-c54746c813a6","Type":"ContainerDied","Data":"c64fb4f8711dfea2086b2ddbeb235c0580214bb088c49212c033f84f4ec3bced"} Oct 11 11:30:01.950780 master-1 kubenswrapper[4771]: I1011 11:30:01.949846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" event={"ID":"a19b5781-05b5-4109-a6e0-c54746c813a6","Type":"ContainerStarted","Data":"6b905a90da79c6fe1c0f82c63b036fdb56d7a5d66007ecbfadd8bd681f8492a9"} Oct 11 11:30:03.508979 master-1 kubenswrapper[4771]: I1011 11:30:03.508917 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" Oct 11 11:30:03.634707 master-1 kubenswrapper[4771]: I1011 11:30:03.634615 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume\") pod \"a19b5781-05b5-4109-a6e0-c54746c813a6\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " Oct 11 11:30:03.635060 master-1 kubenswrapper[4771]: I1011 11:30:03.634727 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r22x\" (UniqueName: \"kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x\") pod \"a19b5781-05b5-4109-a6e0-c54746c813a6\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " Oct 11 11:30:03.635060 master-1 kubenswrapper[4771]: I1011 11:30:03.634802 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume\") pod \"a19b5781-05b5-4109-a6e0-c54746c813a6\" (UID: \"a19b5781-05b5-4109-a6e0-c54746c813a6\") " Oct 11 11:30:03.635403 master-1 kubenswrapper[4771]: I1011 11:30:03.635317 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "a19b5781-05b5-4109-a6e0-c54746c813a6" (UID: "a19b5781-05b5-4109-a6e0-c54746c813a6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:30:03.640567 master-1 kubenswrapper[4771]: I1011 11:30:03.638670 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a19b5781-05b5-4109-a6e0-c54746c813a6" (UID: "a19b5781-05b5-4109-a6e0-c54746c813a6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:30:03.640567 master-1 kubenswrapper[4771]: I1011 11:30:03.639113 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x" (OuterVolumeSpecName: "kube-api-access-7r22x") pod "a19b5781-05b5-4109-a6e0-c54746c813a6" (UID: "a19b5781-05b5-4109-a6e0-c54746c813a6"). InnerVolumeSpecName "kube-api-access-7r22x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:30:03.737789 master-1 kubenswrapper[4771]: I1011 11:30:03.737717 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19b5781-05b5-4109-a6e0-c54746c813a6-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:30:03.737789 master-1 kubenswrapper[4771]: I1011 11:30:03.737771 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r22x\" (UniqueName: \"kubernetes.io/projected/a19b5781-05b5-4109-a6e0-c54746c813a6-kube-api-access-7r22x\") on node \"master-1\" DevicePath \"\"" Oct 11 11:30:03.737789 master-1 kubenswrapper[4771]: I1011 11:30:03.737786 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19b5781-05b5-4109-a6e0-c54746c813a6-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 11 11:30:03.971295 master-1 kubenswrapper[4771]: I1011 11:30:03.971082 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" event={"ID":"a19b5781-05b5-4109-a6e0-c54746c813a6","Type":"ContainerDied","Data":"6b905a90da79c6fe1c0f82c63b036fdb56d7a5d66007ecbfadd8bd681f8492a9"} Oct 11 11:30:03.971295 master-1 kubenswrapper[4771]: I1011 11:30:03.971170 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b905a90da79c6fe1c0f82c63b036fdb56d7a5d66007ecbfadd8bd681f8492a9" Oct 11 11:30:03.971295 master-1 kubenswrapper[4771]: I1011 11:30:03.971121 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts" Oct 11 11:30:04.760757 master-2 kubenswrapper[4776]: I1011 11:30:04.760664 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 11:30:04.767801 master-2 kubenswrapper[4776]: I1011 11:30:04.767768 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 11:30:06.072204 master-2 kubenswrapper[4776]: I1011 11:30:06.072129 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" path="/var/lib/kubelet/pods/a4aea0e1-d6c8-4542-85c7-e46b945d61a0/volumes" Oct 11 11:30:06.631135 master-2 kubenswrapper[4776]: I1011 11:30:06.630873 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:06.631135 master-2 kubenswrapper[4776]: I1011 11:30:06.630971 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:06.673066 master-2 kubenswrapper[4776]: I1011 11:30:06.673012 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:07.124170 master-2 
kubenswrapper[4776]: I1011 11:30:07.124116 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:07.211085 master-2 kubenswrapper[4776]: I1011 11:30:07.211011 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:09.087903 master-2 kubenswrapper[4776]: I1011 11:30:09.087816 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4z7g2" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server" containerID="cri-o://76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" gracePeriod=2 Oct 11 11:30:09.601182 master-2 kubenswrapper[4776]: I1011 11:30:09.601133 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:09.731998 master-2 kubenswrapper[4776]: I1011 11:30:09.731924 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.732243 master-2 kubenswrapper[4776]: I1011 11:30:09.732122 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.732243 master-2 kubenswrapper[4776]: I1011 11:30:09.732170 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod 
\"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.734446 master-2 kubenswrapper[4776]: I1011 11:30:09.734368 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities" (OuterVolumeSpecName: "utilities") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:30:09.735710 master-2 kubenswrapper[4776]: I1011 11:30:09.735620 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb" (OuterVolumeSpecName: "kube-api-access-xhbxb") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "kube-api-access-xhbxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:30:09.779699 master-2 kubenswrapper[4776]: I1011 11:30:09.779561 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835620 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835715 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835732 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:10.101148 master-2 kubenswrapper[4776]: I1011 11:30:10.101080 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" exitCode=0 Oct 11 11:30:10.101148 master-2 kubenswrapper[4776]: I1011 11:30:10.101144 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"} Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101169 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101198 4776 scope.go:117] "RemoveContainer" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101180 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb"} Oct 11 11:30:10.126760 master-2 kubenswrapper[4776]: I1011 11:30:10.126731 4776 scope.go:117] "RemoveContainer" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.148064 master-2 kubenswrapper[4776]: I1011 11:30:10.147318 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:10.157191 master-2 kubenswrapper[4776]: I1011 11:30:10.157140 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:10.162455 master-2 kubenswrapper[4776]: I1011 11:30:10.162420 4776 scope.go:117] "RemoveContainer" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.195933 master-2 kubenswrapper[4776]: I1011 11:30:10.195295 4776 scope.go:117] "RemoveContainer" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.196220 master-2 kubenswrapper[4776]: E1011 11:30:10.196187 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": container with ID starting with 76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029 not found: ID does not exist" 
containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.196266 master-2 kubenswrapper[4776]: I1011 11:30:10.196218 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"} err="failed to get container status \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": rpc error: code = NotFound desc = could not find container \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": container with ID starting with 76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029 not found: ID does not exist" Oct 11 11:30:10.196266 master-2 kubenswrapper[4776]: I1011 11:30:10.196239 4776 scope.go:117] "RemoveContainer" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.196658 master-2 kubenswrapper[4776]: E1011 11:30:10.196622 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": container with ID starting with 3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d not found: ID does not exist" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.196721 master-2 kubenswrapper[4776]: I1011 11:30:10.196653 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d"} err="failed to get container status \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": rpc error: code = NotFound desc = could not find container \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": container with ID starting with 3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d not found: ID does not exist" Oct 11 11:30:10.196721 master-2 
kubenswrapper[4776]: I1011 11:30:10.196690 4776 scope.go:117] "RemoveContainer" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.196988 master-2 kubenswrapper[4776]: E1011 11:30:10.196957 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": container with ID starting with a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed not found: ID does not exist" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.197029 master-2 kubenswrapper[4776]: I1011 11:30:10.196986 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed"} err="failed to get container status \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": rpc error: code = NotFound desc = could not find container \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": container with ID starting with a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed not found: ID does not exist" Oct 11 11:30:12.074441 master-2 kubenswrapper[4776]: I1011 11:30:12.074301 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" path="/var/lib/kubelet/pods/ffb7ba2d-b676-4dc0-8809-0613818a3ea6/volumes" Oct 11 11:30:18.709020 master-2 kubenswrapper[4776]: I1011 11:30:18.708903 4776 scope.go:117] "RemoveContainer" containerID="84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3" Oct 11 11:31:13.770546 master-1 kubenswrapper[4771]: I1011 11:31:13.770442 4771 generic.go:334] "Generic (PLEG): container finished" podID="fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" containerID="71b6cbea99b537fb97df8cac56078c84bd8fb6f72db7edf36ac99e71415f9777" exitCode=0 Oct 11 11:31:13.770546 master-1 
kubenswrapper[4771]: I1011 11:31:13.770528 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-dataplane-edpm-rdjt7" event={"ID":"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55","Type":"ContainerDied","Data":"71b6cbea99b537fb97df8cac56078c84bd8fb6f72db7edf36ac99e71415f9777"} Oct 11 11:31:15.392560 master-1 kubenswrapper[4771]: I1011 11:31:15.392466 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-dataplane-edpm-rdjt7" Oct 11 11:31:15.541147 master-1 kubenswrapper[4771]: I1011 11:31:15.541042 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541147 master-1 kubenswrapper[4771]: I1011 11:31:15.541130 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541602 master-1 kubenswrapper[4771]: I1011 11:31:15.541208 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzw24\" (UniqueName: \"kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541602 master-1 kubenswrapper[4771]: I1011 11:31:15.541309 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: 
\"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541602 master-1 kubenswrapper[4771]: I1011 11:31:15.541362 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541602 master-1 kubenswrapper[4771]: I1011 11:31:15.541440 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.541602 master-1 kubenswrapper[4771]: I1011 11:31:15.541467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory\") pod \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\" (UID: \"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55\") " Oct 11 11:31:15.546779 master-1 kubenswrapper[4771]: I1011 11:31:15.546512 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24" (OuterVolumeSpecName: "kube-api-access-hzw24") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "kube-api-access-hzw24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:31:15.547413 master-1 kubenswrapper[4771]: I1011 11:31:15.547292 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.565547 master-1 kubenswrapper[4771]: I1011 11:31:15.565433 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.570055 master-1 kubenswrapper[4771]: I1011 11:31:15.569998 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory" (OuterVolumeSpecName: "inventory") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.571222 master-1 kubenswrapper[4771]: I1011 11:31:15.571148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.571747 master-1 kubenswrapper[4771]: I1011 11:31:15.571708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.598144 master-1 kubenswrapper[4771]: I1011 11:31:15.598073 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" (UID: "fb1d9f67-e3e9-4545-9e5d-07f2c213fe55"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:31:15.646387 master-1 kubenswrapper[4771]: I1011 11:31:15.646268 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ssh-key\") on node \"master-1\" DevicePath \"\"" Oct 11 11:31:15.646387 master-1 kubenswrapper[4771]: I1011 11:31:15.646336 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-2\") on node \"master-1\" DevicePath \"\"" Oct 11 11:31:15.646975 master-1 kubenswrapper[4771]: I1011 11:31:15.646594 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-1\") on node \"master-1\" DevicePath \"\"" Oct 11 11:31:15.646975 master-1 kubenswrapper[4771]: I1011 11:31:15.646626 4771 reconciler_common.go:293] "Volume 
detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-inventory\") on node \"master-1\" DevicePath \"\""
Oct 11 11:31:15.646975 master-1 kubenswrapper[4771]: I1011 11:31:15.646653 4771 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-telemetry-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 11 11:31:15.646975 master-1 kubenswrapper[4771]: I1011 11:31:15.646676 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-ceilometer-compute-config-data-0\") on node \"master-1\" DevicePath \"\""
Oct 11 11:31:15.646975 master-1 kubenswrapper[4771]: I1011 11:31:15.646698 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzw24\" (UniqueName: \"kubernetes.io/projected/fb1d9f67-e3e9-4545-9e5d-07f2c213fe55-kube-api-access-hzw24\") on node \"master-1\" DevicePath \"\""
Oct 11 11:31:15.789535 master-1 kubenswrapper[4771]: I1011 11:31:15.789451 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-dataplane-edpm-rdjt7" event={"ID":"fb1d9f67-e3e9-4545-9e5d-07f2c213fe55","Type":"ContainerDied","Data":"d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6"}
Oct 11 11:31:15.789535 master-1 kubenswrapper[4771]: I1011 11:31:15.789508 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d621ccd58559916f2503ebdd3bbdd04a63949a337855cab279f0c0f2f83d42a6"
Oct 11 11:31:15.789835 master-1 kubenswrapper[4771]: I1011 11:31:15.789563 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-dataplane-edpm-rdjt7"
Oct 11 11:31:43.877474 master-1 kubenswrapper[4771]: I1011 11:31:43.877267 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/must-gather-t7wkh"]
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.874447 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"]
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877800 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-content"
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877835 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-content"
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877876 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server"
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877884 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server"
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877912 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-utilities"
Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877921 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-utilities"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: E1011 11:31:43.877945 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19b5781-05b5-4109-a6e0-c54746c813a6" containerName="collect-profiles"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: I1011 11:31:43.877969 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19b5781-05b5-4109-a6e0-c54746c813a6" containerName="collect-profiles"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: E1011 11:31:43.877992 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" containerName="telemetry-dataplane-edpm"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: I1011 11:31:43.878006 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" containerName="telemetry-dataplane-edpm"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: I1011 11:31:43.878285 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19b5781-05b5-4109-a6e0-c54746c813a6" containerName="collect-profiles"
Oct 11 11:31:43.878608 master-1 kubenswrapper[4771]: I1011 11:31:43.878313 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1d9f67-e3e9-4545-9e5d-07f2c213fe55" containerName="telemetry-dataplane-edpm"
Oct 11 11:31:43.880280 master-1 kubenswrapper[4771]: I1011 11:31:43.880112 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:43.881444 master-2 kubenswrapper[4776]: I1011 11:31:43.880028 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server"
Oct 11 11:31:43.883890 master-1 kubenswrapper[4771]: I1011 11:31:43.883838 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"kube-root-ca.crt"
Oct 11 11:31:43.884150 master-2 kubenswrapper[4776]: I1011 11:31:43.883794 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:43.884167 master-1 kubenswrapper[4771]: I1011 11:31:43.884127 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"openshift-service-ca.crt"
Oct 11 11:31:43.886659 master-2 kubenswrapper[4776]: I1011 11:31:43.886335 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"kube-root-ca.crt"
Oct 11 11:31:43.886826 master-2 kubenswrapper[4776]: I1011 11:31:43.886769 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"openshift-service-ca.crt"
Oct 11 11:31:43.888958 master-2 kubenswrapper[4776]: I1011 11:31:43.888442 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"]
Oct 11 11:31:43.893556 master-1 kubenswrapper[4771]: I1011 11:31:43.893003 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-t7wkh"]
Oct 11 11:31:43.946177 master-2 kubenswrapper[4776]: I1011 11:31:43.946091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:43.946562 master-2 kubenswrapper[4776]: I1011 11:31:43.946532 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.030389 master-1 kubenswrapper[4771]: I1011 11:31:44.027047 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcstv\" (UniqueName: \"kubernetes.io/projected/47607144-c636-4055-b3d3-fc54a36cf0e5-kube-api-access-tcstv\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.030389 master-1 kubenswrapper[4771]: I1011 11:31:44.027147 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47607144-c636-4055-b3d3-fc54a36cf0e5-must-gather-output\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.049509 master-2 kubenswrapper[4776]: I1011 11:31:44.049413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.049776 master-2 kubenswrapper[4776]: I1011 11:31:44.049562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.050066 master-2 kubenswrapper[4776]: I1011 11:31:44.050004 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.079120 master-2 kubenswrapper[4776]: I1011 11:31:44.079039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.132455 master-1 kubenswrapper[4771]: I1011 11:31:44.128674 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcstv\" (UniqueName: \"kubernetes.io/projected/47607144-c636-4055-b3d3-fc54a36cf0e5-kube-api-access-tcstv\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.132455 master-1 kubenswrapper[4771]: I1011 11:31:44.128745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47607144-c636-4055-b3d3-fc54a36cf0e5-must-gather-output\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.132455 master-1 kubenswrapper[4771]: I1011 11:31:44.129276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47607144-c636-4055-b3d3-fc54a36cf0e5-must-gather-output\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.164380 master-1 kubenswrapper[4771]: I1011 11:31:44.164031 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcstv\" (UniqueName: \"kubernetes.io/projected/47607144-c636-4055-b3d3-fc54a36cf0e5-kube-api-access-tcstv\") pod \"must-gather-t7wkh\" (UID: \"47607144-c636-4055-b3d3-fc54a36cf0e5\") " pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.202380 master-1 kubenswrapper[4771]: I1011 11:31:44.201836 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-t7wkh"
Oct 11 11:31:44.210698 master-2 kubenswrapper[4776]: I1011 11:31:44.208290 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-ctrmz"
Oct 11 11:31:44.650430 master-2 kubenswrapper[4776]: W1011 11:31:44.650378 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca7d98dc_8efb_46d2_bb15_c769709ccb4c.slice/crio-d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620 WatchSource:0}: Error finding container d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620: Status 404 returned error can't find the container with id d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620
Oct 11 11:31:44.651648 master-2 kubenswrapper[4776]: I1011 11:31:44.651614 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"]
Oct 11 11:31:44.681538 master-1 kubenswrapper[4771]: I1011 11:31:44.681064 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-t7wkh"]
Oct 11 11:31:45.026914 master-2 kubenswrapper[4776]: I1011 11:31:45.026834 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620"}
Oct 11 11:31:45.113288 master-1 kubenswrapper[4771]: I1011 11:31:45.113185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-t7wkh" event={"ID":"47607144-c636-4055-b3d3-fc54a36cf0e5","Type":"ContainerStarted","Data":"c7c3fdea0c455c10846c5aae6c73c93ccd9ceafbba080e5eefb34abdb563b139"}
Oct 11 11:31:46.038967 master-2 kubenswrapper[4776]: I1011 11:31:46.038829 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"1ea5e25caf43c046d8b5a7743dc0fe6110c578cd5aff0bd0e86ba0a81c064c43"}
Oct 11 11:31:47.052202 master-2 kubenswrapper[4776]: I1011 11:31:47.052118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"bd8d93ddec39daef4e83da5e147bac7e76c85e6cea4c1db9ed9f14f7de0d1315"}
Oct 11 11:31:47.084162 master-2 kubenswrapper[4776]: I1011 11:31:47.084038 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" podStartSLOduration=2.953526013 podStartE2EDuration="4.084005315s" podCreationTimestamp="2025-10-11 11:31:43 +0000 UTC" firstStartedPulling="2025-10-11 11:31:44.652535201 +0000 UTC m=+3939.436961910" lastFinishedPulling="2025-10-11 11:31:45.783014503 +0000 UTC m=+3940.567441212" observedRunningTime="2025-10-11 11:31:47.078346813 +0000 UTC m=+3941.862773522" watchObservedRunningTime="2025-10-11 11:31:47.084005315 +0000 UTC m=+3941.868432024"
Oct 11 11:31:47.796452 master-2 kubenswrapper[4776]: I1011 11:31:47.796398 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-55bd67947c-tpbwx_b7b07707-84bd-43a6-a43d-6680decaa210/cluster-version-operator/0.log"